• Quantifier Variance, Ontological Pluralism, and Other Fun Stuff


    I actually find the role that chess plays on this forum a bit bewildering.

    Funny enough, international bodies tried, and then gave up on developing a single canonical set of rules for chess, finding it too difficult. Differences in rules—variants aside—will tend to only affect high level play (e.g. how a draw is forced, etc.), but they are real differences that have not been settled.

    Is this just Wittgenstein playing out, with his assumptions that philosophy is the study of language and language is fundamentally a kind of "game"?

    I'd say it's the consequence of a certain use of Wittgenstein, one that tends towards totalizing and reductive explanations (which is ironically something he explicitly cautions against in PI).

    Particularly, in PI Wittgenstein is equivocal about use defining meaning in all cases. "For a large class of cases of the employment of the word 'meaning'—though not for all—this word can be explained in this way: the meaning of a word is its use in the language" (Philosophical Investigations 43, emphasis mine). Things like Kripke's assertion that Robinson Crusoe can't form or follow new rules despite knowing what rules are because he lives in isolation, or Davidson's claim that Swampman, the molecule-for-molecule replica of himself who carries out his exact behaviors, has no content to his thought, are the sort of assertions you get when you try to squeeze a big set of phenomena into a tiny box of explanation. Carnap–Bar-Hillel information would be a similar example from the more positivist camp.

    I think you can lay some blame on Wittgenstein for the concept of aiming to reduce hard problems to "pseudo problems" though. If our goal becomes not to solve problems, but rather to dismiss them, we should not be surprised if problems begin to seem intractable. It is the difference between starting with the question: "how do I understand this?" and beginning with the assumption that the real question is: "why do I not need to understand this?" or "why is it impossible to understand this?" Perhaps some problems really are problems of language or pseudo problems. However, having discovered this, it will not do to view the aim of philosophy entirely as the project of discovering how problems are not really problems. It's a bit of the old: "discovering a hammer and deciding the world is made of nails."

    I think the move to viewing philosophy as a sort of "therapy" does have some strong points. There is a sense in which much classical and medieval philosophy is practically oriented, itself a type of "therapy." The ideal philosopher from these eras is a saint, even in the pagan tradition (e.g. Porphyry's Pythagoras or Philostratus' Apollonius of Tyana). They are not ruled over or disordered by desires and passions. They do what is right and just.

    However, it is odd when philosophy is offered up as a sort of "therapy" or "pragmatism" is invoked by schools of thought that deny the reality of the Good, making it either into something we "create" through some sort of sui generis power, or else an illusion, since everything is reducible to atoms in the void, etc. For, what is "pragmatism," when the Good, the object of practical reason, is itself either something that must be created according to "pragmatic" concerns, or else is illusory? I really don't like dismissing things as "incoherent," but this is one area where I think the vicious circularity might be real.
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff


    I think we are largely on common ground then. Where we differ might be on this assumption:

    The question of which came first does not have application here. Nor is the historical development of these considerations relevant. Again, it's just what we do.

    I would say that "what we do" depends upon and evolves according to "what we know" about the world. Metaphysics, philosophy of mind, etc. are all part of that equation.

    The question of "what comes first," even if it is phrased in a misleading manner, is obviously of intense speculative interest. This makes it important for the simple reason that "all men by nature desire to know."

    But I also don't think speculative thought can actually be divorced from practical concerns. Metaphysics is always in the background; it affects how science is done. The anti-metaphysical movement just made it harder to question metaphysical presuppositions by dogmatically obscuring them. For example, we ended up with "unique substances" to explain heat, combustion, and life in the 19th century precisely because of the dominant corpuscular metaphysics of the day. Likewise, the very practical concern of mental health treatment is bound up in neuroscience, which is itself heavily influenced by things like the Computational Theory of Mind. Why is CTM so dominant? For plenty of reasons, but certainly one of them is how nicely it plays with popular metaphysical conceptions. The two realms don't stand neatly apart. The very practically useful idea of intrinsic and extrinsic properties in physics, for example, first crops up in Hegel, of all places, who is not at all dealing with the "practical."

    IIRC the move to including distinct existential quantifiers is itself the result of Kant making a metaphysical argument vis-à-vis whether "existence" is a perfection (a real predicate) in response to St. Anselm's famous ontological proof.

    There is a way in which the answer to "Why do Bishops move diagonally?" is that that is just how the game is played, that it's what we do. Seeking further explanation is redundant.

    I am not sure this is so obvious. What you think about the relationship between logic (or mathematics) and the world/being itself is going to affect what you think about the value of seeking further explanation here. The assumption that any digging here is redundant seems to carry with it its own assumptions.

    The question of "what is logic?" has historically three main flavors of answer:

    1. It is formal systems, essentially the systems (games) themselves or the study of the properties of all such games.

    2. It is the essential "rules of thought." Or in more deflationary terms, the rules that lead to correct judgement.

    3. Logic is a principle at work in the world, its overall order. Stoic or Christian Logos, although perhaps "disenchanted" (Hegel's objective logic, C. S. Peirce's "logic of being").

    Depending on which you lean towards, what counts as a full explanation will differ.
  • Civil war in USA (19th century) - how it was possible?


    Power grabs by elites are often a source of civil wars but they are by no means the only way such wars start. Ethnic conflict is a pretty common cause, particularly in the modern era due to electoral systems of governance taking root. Once you have an electoral system, shifting demographics result in shifting control of the state. See: Lebanon, Syria, Iraq, Yugoslavia, the collapse of the Austrian Empire, etc. Religion can play a similar role; e.g., far and away the most destructive wars in French and German history (killing far larger shares of the population than both World Wars combined) centered on religious disputes (the Huguenot Wars/Thirty Years War).

    I don't really have a go-to book on the American Civil War to recommend unfortunately. But, any way you cut it, slavery was very much the defining factor in the war. This is explicitly how secessionists framed their actions in their own words. The issue of slavery was filtered through institutions though. For instance, slave-owning states had votes allocated to them based on their slave populations, which tied the issue to control of the legislature and presidency (since the US doesn't elect presidents by popular vote). Economic undercurrents mattered too, but again, these related deeply to slavery. And then obviously there was a strong movement of abolitionists who found the institution of slavery abhorrent and exerted influence as well.
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff


    IE, it is a problem of circularity, in that there are two objects provided we have already determined that there are two objects

    :up:

    I do not think this is a vicious circle though. There is only one Being, and it includes both sides of the Nature/Geist distinction. As Heidegger points out, the world is always already with us. Plurality in being, particularly the plurality of distinct phenomenological horizons—minds—is a given since we only become aware that we are an "I" in the face of the "Thou."

    Rather, this circularity says something about being, what Ferdinand Ulrich, building on Heidegger, Aquinas, and Hegel, terms its "gift-like quality." Things are dependent on what lies outside of them for what they are. This is true from the naturalist frame and in terms of the content of concepts. For example, it's impossible to explain the natural, physical properties of something without any reference to how it interacts with other things, the context it is situated in, etc. Explanations of elements involve reference to universal fields, enzymes are defined in terms of what they do with other things, etc. Likewise, the concept "red" relies on the external concept "color."

    There is a fundamental sense in which, conceptually, things can be defined in terms of "what they are not." This is what allows us to play the game of "Twenty Questions" with any degree of success. On the face of it, narrowing down "anything someone can think of," the entire universe of potential entities, using just twenty yes or no questions seems doomed from the outset. In reality, it's quite doable, since knowing which category something falls into (e.g. real/fictional, living/non-living, etc.) excludes a huge swath of the universe of entities.
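
    A toy illustration of the arithmetic here (a sketch of my own, in Python; nothing from the thread itself): twenty well-chosen yes/no questions suffice to isolate one item among over a million candidates, since each question that excludes half the remaining possibilities halves the search space.

        # A minimal sketch: each ideal yes/no question excludes half of the
        # remaining candidates, so twenty questions can, in principle,
        # single out one item among 2**20 (~1,048,576) possibilities.
        candidates = 2 ** 20

        questions_asked = 0
        remaining = candidates
        while remaining > 1:
            remaining = (remaining + 1) // 2  # an ideal question halves the pool
            questions_asked += 1

        print(questions_asked)  # -> 20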

    Explanations, then, will tend to overflow boundaries, including those of Nature versus Mind. If we don't fall into the trap of thinking that relationships between knowers and objects are in some way "less real" than relationships between objects and other mindless objects, I think we avoid a lot of the problems of this distinction (also at play here is the realization that not everything is infinitely decomposable in terms of analysis, e.g., structuralism). The relationship between a thing and a person who knows it is in a way the paradigmatic relationship, the place where an entity most "is what it is," since properties and potencies can be brought together and made phenomenologically present "all at once," whereas in nature things are diffuse in time and space and mutable.

    In PhR Hegel makes the very Aristotelian claim that a dead hand that has been removed from a body is not an "actual" hand in that it no longer (fully) instantiates the coherence of what it is (likewise, a state that doesn't promote freedom is not an actual state). IIRC, Aristotle says something similar about a blind eye in De Anima. This is a fairly confusing statement at first glance, but I think it calls out the idea of necessity inherent in concepts. Concerns about the forms of "grue and bleen," or a distinct eidos for each individual pile of mud, generally seem to miss the point that, if concepts unfold historically à la Hegel, they clearly do not do so arbitrarily. That we can imagine a vast horizon of potential concepts does not entail their historical actualization. That a thing's "intelligibility"* might be described in infinitely many ways, or that a thing can exist in infinitely many contexts, doesn't preclude a knower having any grasp of its intelligibility.


    * Where we might think of "intelligibility" as the sum total of true things that can be said of a thing throughout the entire history of the "human conversation." This is clearly dependent on the evolution of concepts, languages, the sciences, etc. and also on Nature, the two being part of a single whole, Being, in which entities exist.
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff


    Consider that berries grow, ripen, and then rot. Can you think of an edge case where it's not clear whether something counts as a berry?

    Absolutely, that's one of the great difficulties in philosophy. This sort of problem crops up even with normal exemplars of terms, e.g. "the Problem of the Many," https://plato.stanford.edu/entries/problem-of-many/

    It's also a problem because we tend to think of eidos as being immutable, but this causes a number of problems to crop up in terms of the mutable world (e.g. the Ship of Theseus, or the Clay and the Statue).

    For no being insofar as it is changing is its own ground of being. Every state of a changing being is contingent: it was not a moment ago and will not be a moment from now. Therefore the grasping of a being as changing is the grasping of it as not intelligible in itself—as essentially referred to something other than itself.

    Kenneth Gallagher - The Philosophy of Knowledge

    I think Hegel's Logic has a very good intuition here in that it is clear that our understanding of universals and concepts is not arbitrary, even if it is situated in culture and evolves over the course of human history, both of which include a good deal of contingency. Explanations then are attempting to track down the necessity involved, and this would seem to involve both the realms of Nature and Geist (or Mind) rather than trying to reduce an explanation of how concepts emerge to one or the other.




    Why would they be counted as such? Because that's the way "eight" is used. How is this explained without reference to the 8 berries existing prior to counting? There are presumably eight berries before they are counted.

    Right, and so the reality of "discrete numbers of things" determines how things are counted. But then this seems to me to denote that facts about numbers exist outside of and prior to the human practice of counting, and indeed that facts about "numbers of things," "ratios in nature," etc. are instrumental in bringing about the practice of mathematics. Likewise, human beings wouldn't have come up with various different signs for different animal species but for the fact that distinct species existed prior to man. This doesn't mean that mathematics or language can be explained without reference to practices or in terms of some simple correspondence between terms and the world, it just means that one needs to look outside the ambit of human culture to fully explain the phenomena, since there is a bidirectional chain of influence between culture and practices on the one hand and the rest of existence on the other.

    But then pointing to the fact that the use of mathematics is a practice is itself not an explanation for why that practice exists. Likewise, an explanation of counting seems to require some mention of the fact that the world already has things that we can count in it. If this is the case though, there remains the question of the status of numbers in nature, or, perhaps better phrased, the question of "how does the notion of number emerge?"

    Hegel's doctrine of the emergence of "notions" might be convoluted. However, it gets at least something right in that it appears to be a mistake to try to reduce explanations of concepts to either an objectivity that has no reference to mind or entirely to the subjective or social realm. Adequate explanations will wrap around both of these instead of trying to reduce to either of them. IMO, another crucial insight here is that we should not downgrade the ontological status of relationships involving minds; appearances are part of the reality of things.


    Language games are not just words, they are things we do in the world with words.

    Right, and they are part of the world and are shaped by it. So they aren't explainable in terms of only "words" and "actions," because there are facts that determine how language evolves and how people act that are not themselves reducible to words and actions.

    I find "language game," tends to stretch the meaning of "game," to the point where it loses any connection with the original insight about the ways in which language is sometimes used as a sort of game. There is a strong tendency in analytic philosophy to want to try to reduce things down to "just this one thing." Language is games all the way down, or it's explained by information theory all the way down, or it's about nothing but communicating internal mental states, or its about falsification and truth conditions for sentences, etc.

    It is, stepping back, a very strange way to approach a complex problem. Obviously we use language to communicate internal mental states. Given how information theory has allowed us to identify commonalities and to a degree unify disparate fields from physics to biology to economics, and given its foundational role in communications technology (and really in neuroscience too), it would be extremely strange if an adequate explanation of language didn't involve information theory at some stage. But the jump to totalizing theories seems to require saying bizarre things in every case. Partly, I think this problem is bound up in the tendency to work on various "problems" in isolation. The idea that such problems can be tackled in isolation from systematic thought—from, say, metaphysics—itself presupposes a number of assumptions (e.g., that an adequate theory of universals isn't required to explain the use of universals as terms assumes a lot).

    I don't think "language games," can generally be thought of as discrete entities either. The term "game" tends to imply a fixity that doesn't really exist; neither scientific language nor everyday language is static in the way chess is. They are constantly evolving and affecting one another. Metaphysical assumptions and the language of metaphysics end up affecting scientific language for example. Phlogiston, caloric, élan vital, etc. all end up posited and making their way into the lexicon because assumptions about how sui generis substances must underlie different phenomena in the world. Likewise, it's incredibly common for people to reference their brains and hormones in everyday speech, self-help books, etc. or to use terms that only computer scientists knew forty years ago to describe what they are shopping for. I think it would be strange if the scientific understanding of the world didn't affect natural language, since surely an understanding of the natural world has always driven language evolution.




    And I do not want to say ""numerically discrete entities exist prior to counting", because that seems to be quite an odd thing to say.

    We say a lot of strange things in philosophy. "Numbers are something we do," or "carcinisation works," are also sort of weird things to say. For biologists and the laity, carcinisation isn't a sort of action we perform, but something that exists in our experience of the world and in the world itself. Numbers are spoken of in the same way generally.
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff


    You will not be able to divide the berries between two people fairly. It will be functionally inadequate. It won't work.

    Right, because there are eight berries that exist. But if eight "just is" the act of counting, then there are only 8 berries when one counts them as such. Why would they be counted as such? How is this explained without reference to the 8 berries existing prior to counting? What I am objecting to is an explanation that seems to say that prior to an act of counting there is nothing that affects how counting is done.

    If you want to say, "people divide 8 berries into 4 and 4 evenly because there exists 8 berries in the world and dividing 8 by 2 gives you 4," that seems fine to me, but then it isn't the case that numbers just are actions, they also determine actions. And if you don't want to go as far as saying "numerically discrete entities exist prior to counting," it still certainly seems like they must be perceivable prior to counting (and then is must be explained why they are perceived).

    To put it otherwise, and bring my last two posts together, thinking you can start with eight berries and from that give five to each of two different folk is to misunderstand how "eight", "five" and "two" are used.

    And this use came into existence because...?
  • The essence of religion


    Don't some expressions of phenomenology try to break down the mind/body problem with embodied cognition?

    Yup. Robert Sokolowski's "The Phenomenology of the Human Person" is one of my favorites, and it "builds a bridge" between Husserl and Aristotle. Nathan Lyons's "Signs in the Dust" is another good one, but it's less phenomenological and works more with semiotics, particularly John Poinsot (John of St. Thomas), Cusa, and Aquinas—making it also an interesting blend of modern philosophy and some "deep(er) cuts" from a fairly neglected era. Edith Stein would seem to be another, though I'm less familiar with her.
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff


    But if it isn't arbitrary and arithmetic must be the same for all peoples, what explains this? Plenty of other practices do vary widely across cultures.
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff


    Right, exactly my point. If some society somehow stipulated that 8/2 = 5, we tend to feel we could give them a good demonstration of why this is not the correct way to do division. But if everything is just games and rules then it seems that you certainly can show that aesthetic, metaphysical, or religious claims are "wrong" in the same way that computations can be wrong. Error, in both cases, would consist solely in the fact that stipulated rules are violated. Trials for heresy would then be essentially the same sort of thing as disagreements about how to compute a harmonic mean, which doesn't seem to be the case to me.



    I understand what you're getting at, but it's a bit of a strawman, isn't it? "Unanalyzable primitive" doesn't seem to capture what philosophers mean when they talk about numbers and words as instances of human activity, though I suppose a deeply pragmatic view might support that.

    Right, and I don't mean for my criticism to apply to any theory that posits that activity or practice might be fundamental for defining mathematics or language. I am responding specifically to the assertion that there is no need to explain "causes" or "reasons" for why mathematics or language has the form it does. At the very least, the fact that subjective experience includes numerically distinct entities that can be categorized together (e.g., "I see many things and they are all rocks") has to come into the explanation. But then this element of perception does not seem bound up in stipulated practices, but is rather part of human nature; there are not any cultures where people fail to recognize numerically distinct entities.

    An example from language might make my objection more clear. The London Underground (arguably) has its own species of mosquito. It is descended from a wild mosquito, but it no longer behaves like that mosquito and will not mate with it in the wild—hence it being put forth as its own species. I find it implausible that one can explain the coining of the term "London Underground mosquito" without reference to the facts about what has occurred in nature (i.e., there is a cause external to practices). Likewise, as I think Banno would acknowledge, carcinisation "points to," or "calls forth the intelligibility" of, a process observed in nature. But an explanation of how the term develops then needs to include the existence of that phenomenon as far as I'm concerned. Causes certainly seem to come into it.

    But then the question of numbers just seems like a more opaque version of the same sort of question. If the development of the term carcinisation needs to be explained in terms of the real existence of a process by which many disparate lineages developed crablike traits, it seems at least plausible that the development of numbers works similarly. Indeed, we have a number of good explanations like this; we can explain why humans delineate colors the way they do in terms of the photoreceptors in the human eye. If in the case of numbers it doesn't work similarly, i.e. numbers don't exist in nature in the way carcinisation, metamorphosis, evolution, etc. exist, then there should be a compelling alternative explanation of them.

    That the meaning of words is fixed by use is a good insight, but it's not a whole explanation of language. Use itself doesn't float free of the rest of the world.
  • A poll regarding opinions of evolution


    It wouldn’t occur to you as a useful project to link together the Enlightenment philosophies of figures like Descartes and Locke with ways of thinking informing the music, art, literature, poetry, sciences and political theory of that era, and then to do the same with Aquinas and the cultural modalities of his era? If we take Rembrandt vs Giotto as an example, do you not see taking place in the historical gap between the two a substantial innovation in construing what makes the human human, an innovation that mirrors the move from the medieval to the modern period in philosophical, scientific and literary modes of thought?

    Absolutely, I didn't mean to suggest otherwise. What I was correcting was the idea that the subject-dependent nature of objective experience would be alien to medieval thinkers. If anything, the Medievals focus on this more than many of the early moderns. The latter tend to privilege "things-in-themselves," and focus on a reduction of the world to what can be quantified and plotted. In many ways, the early contemporary period is charting a path back to notions that were dominant prior to the modern era—correcting some of its excesses.

    This is precisely one of the areas where I think medieval philosophy and contemporary philosophy share a lot of common ground. It's why it's funny that it's also the most neglected era today, because, like I said, there are a number of ways in which they are the two most similar eras.

    You might even be able to add "being very dogmatic on certain issues," to the list there too.
  • A poll regarding opinions of evolution


    He didn't really need to leap that far. The assertion that only extension in space truly exists goes back to the Ionian materialists and concerns over solipsism go back just as far in both Greek and Hindu thought. St. Augustine, whom Aquinas was intimately familiar with, has several versions of the cogito. Descartes is, himself, riffing off Platonist skepticism (and indeed, the Academy had its own "skeptical period").

    The question: "do we understand the world or only our ideas of the world," comes up in a few places, Question 85 of the Summa Theologiae being the one that jumps to mind. Aquinas does not take up considerations that are identical to Locke, Berkley, or Hume, but the chain of reasoning is quite similar.

    I don't think the idea that "the notion of the subject-dependent nature of objective experience would be utterly alien to" either classical or, particularly, medieval thinkers can survive contact with the material. Part of the problem here might be the common tendency of many modern thinkers to assume that the Protestant anti-rationalism and Catholic fideism of their day represented the "norm" for Christian thinkers centuries earlier. This is certainly a theme in Nietzsche, who seems to see all of Christianity through the 19th century Lutheran pietism he is familiar with and project it backwards. But anti-rationalism, like fundamentalism, is a modern movement. So, the myth that everyone was a naive realist vis-à-vis perception and morality until 1600 certainly shows up in otherwise worthwhile texts, but it's completely unmoored from reality.
  • The Idea That Changed Europe


    Well first, the Hebrews are an Asian people, so obviously it is de facto an Asian creation narrative. There are certainly similarities between Genesis and Sumerian and Babylonian creation narratives, but as minds like Jung and Joseph Campbell, or the perennialists, have shown, you can make a case for "great similarity" between essentially all such narratives.

    The ideas in Genesis are indeed very old and predate the Hebrew language. Verses from Numbers have been found in a sort of proto-Hebraic, while a version of the Ark story is among the oldest pieces of writing that have ever been recovered.

    However, it is impossible to say that African versions of this story are the originals. There is no written material coming out of sub-Saharan Africa (SSA) that is as old as the Mesopotamian sources. The Yoruba people didn't emerge until millennia later and the Asante are a good deal later than them. It is certainly possible that these stories preexist the splitting off of these (relatively) new ethnic groups, but wouldn't it be more plausible that they made it down from Egypt, which has had extensive trade networks moving down into SSA and a large Jewish population since antiquity? We know Christianity had taken root in Ethiopia centuries before the earliest of those two groups emerged.
  • In any objective morality existence is inherently good


    I wouldn't say that x should follow from y is the same as 'entails'. Should or ought are words of intention or preference.

    I agree that uses of "should" such as, "if you add two odds together the result should be an even," leave something to be desired as to clarity.

    But then g. seems to equivocate on this usage.

    If it shouldn't exist, then the answer "No" objectively shouldn't exist thus contradicting itself.

    But if "it shouldn't exist," is taken as "it is not good for it to exist," I don't think there is a contradiction.

    Let me rewrite the argument without the equivocation:

    e. If it is the case that there is some objective moral standard that concludes that [it would not be good for] anything to exist, that objective moral standard must itself exist.

    f. But if [that objective standard] exists, then according to itself, it [is not good that it] exists.

    g. If it is [not good] for anything to exist then it is not good for that objective moral standard to exist.

    I am not seeing a straightforward contradiction here. "Everything is bad and it would be better if there was nothing," might be self-refuting in a way, but it isn't saying the equivalent of p and not-p.

    That's about the gist. So if there is an objective standard of goodness that exists, it cannot logically conclude that it ought not to exist. For if it did, then that logically means it would be good if the objective conclusion did not exist. If we got rid of the objective morality based on its own conclusions then, we are left with only one answer, that there ought to be existence.

    Arguably, yes, such a claim would be self-refuting. But presumably the standard is saying "it would be better if everything did not exist," not "it would be better if this standard alone did not exist." In particular, I don't see how we are left with "only one answer." If there is no standard for what ought to be the case then our answer might as well be arbitrary.
  • A poll regarding opinions of evolution


    :up:

    It's an interesting area because it ties in with the radical transformation of views on freedom. Freedom goes from primarily being defined in terms of actuality (the ability to do the Good) to being primarily defined in terms of potency (the ability to choose anything). This has ramifications throughout philosophy. For example, the opposition to ontology in Derrida and Foucault on the grounds that it limits freedom, or Deleuze's suggestion that this can be bypassed via the recognition of ontology as "creative," relies on particular modern assumptions about freedom and the relation of knowledge to freedom.

    Were ontology indeed something we "discover" more than something we create, then a move to dispense with it, or to assume we have more creative control over it than we do, can't empower a freedom defined by actuality. Knowledge is crucial to actuality. Plotinus uses Oedipus as an example of this. Oedipus is in a way a model of freedom: a king, competent, wise, disciplined, etc. And yet he kills his father, the very thing he had spent his entire life trying to avoid, and so in a crucial way a truth that lies outside the compass of what he can fathom obviously makes him unfree.
  • In any objective morality existence is inherently good


    I think you might have an equivocation with your use of "should" here. "Should" can mean "ought," or "it would be good to..." but it can also be used as in "x should follow from y," where it is basically standing in for "x entails y."


    It seems possible that an objective standard could exist that says "things ought not to exist." This would simply mean that existence is not good, but it might still obtain anyhow. There is some self-reference at work here, in that the objective standard of good, by itself existing, is a bad thing, but this does not seem to be a contradiction.

    Now, it is the case that if nothing exists, then no standard of goodness can exist. If that's what you're getting at, that seems fine. But here, the term "exists" seems like it could also be equivocal. Do facts like 1+1=2 exist outside of created existence? Do they exist necessarily?

    Well, if they do exist in a way different from how chairs and tables exist, and the standard of good exists in the way necessary facts exist, then it seems possible for it to exist while also stating that created existence "ought not exist."
  • A poll regarding opinions of evolution


    By contrast, the classical/scholastic tradition hadn't yet arrived at a notion of subjective consciousness, and as a result, had nothing like the modern concept of the object. Therefore, the notion of the subject-dependent nature of objective experience would be utterly alien to them.

    I really don't think this is the case at all. For one, Aquinas pretty much constructs Locke's arguments re primary/secondary qualities and Berkeley's arguments re there being "nothing but" ideas. He just rejects both of these. Solipsism, subjectivist epistemic nihilism (and a version of it in fideism), and extreme relativism (Protagoras' "man is the measure of all things") were going concerns going back to the pre-Socratics. The medievals were aware of these, they just rejected them by and large. They still engaged with them though; Thomas was railing against the Manichees 1,000+ years after there were any Manicheans.

    There is plenty of awareness of the subject-dependence of perception in medieval thought. They were aware of and write at length about color blindness, differences in standards of beauty, etc. This is why it hasn't been that difficult to re-adapt scholasticism to modern theories of perception (e.g. virtual realism, pansemiosis).

    For example, the entire idea behind "intentions in the media" is that the patterns that become sensory experience exist in the environment as potencies, but are only ever actualized by the presence of an observer, who brings their own potency to the interaction. Everything exists relationally in most medieval thought; "things in themselves," i.e., things as they interact with nothing, are epistemically inaccessible and make no difference to the world. Only God truly is (Exodus 3:14). Knowledge is always a union, the Platonic vision of the knowing subject ecstatically going beyond themselves in union, mixed with the Aristotelian idea of the "mind becoming like" the known. Thus, "everything is received in the manner of the receiver."

    What they lack is the Cartesian subject who exists apart from anything else. For the medievals, the thou is always already there with the I. Things are defined by their interactions with other things, the "generosity of being," and we cannot exist outside of this communion (Ulrich's "gift of being.")

    This idea that the subject shapes the way in which a thing is known is absolutely central to theology and the analogia entis as well. In the Disputed Questions on Truth, truth itself is held to be inextricably bound up in the knowing subject. A key difference then is a preference for the "mind of God" as the gold standard of all knowledge, a view that sees all appearances and all reality, as opposed to the "view from nowhere," which tries to strip appearance away from reality. In medieval thought, appearance has to be part of the absolute, since it is real.

    Actually, I've thought a lot in general about how medieval thought is a lot more like post-1930 or so thought than classical or early modern thought. Both contemporary and medieval thought center around the academy to a much greater degree than the two other eras. They both tend to focus on a deep study of core "canonical" texts. They both focus far more on critique. They both are very focused on how the subject shapes knowledge. They both split into two camps, with one camp extremely focused on logic chopping, definitions, etc. They both generate their own impenetrable list of unique terms (e.g., Ulrich is impenetrable because you have to speak Hegelese, Heideggerian, and Scholastic).

    You can see the outlines of intertextuality in Al Farabi and Avicenna. Pansemiosis, far from being something new to Continental Philosophy, is all over Bonaventure, Poinsot, etc. The two eras have a lot in common, with both largely looking back to the prior era (classical and early modern respectively).
  • Which theory of time is the most evidence-based?


    Yes, these are all dealt with in detail in the book I mentioned. Wheeler's stuff on "many-fingered time" and retrocausality would be another relevant avenue. Objections related to these arguments can all be made consistent with local becoming. In particular, the Andromeda Paradox is a weak argument for eternalism and fits fine with local becoming.

    One of the fun things about Arthur's book is that it shows how Gödel and Robb anticipated a lot of these later arguments, but ultimately rejected them (not unlike how Aquinas toys with the ideas of Locke and Berkeley). For Gödel, and I'm inclined to agree with him, the idea of every moment existing "all together" makes the very thing we're trying to explain incoherent to us. Events exist exactly where and when they occur and nowhere else. To ask if the future "already exists" is to have already abstracted yourself out of the manifold, such that you are no longer at a specific where and when within it. Whether the future is "already contained" in the present is, of course, a different question, one of determinism, and there isn't a clear answer on that one either, since different interpretations of quantum mechanics will have different things to say to us about this.

    If becoming is local, as it seems it must be if it exists at all, then it seems obvious that light from the same location will reach different locations at different times, causing disparate effects on what seems "simultaneous." This is no problem.

    Gisin makes an interesting argument that the preference for eternalism in physics is grounded in the Platonist assumptions in mathematics in Einstein's day. He has some interesting ideas about intuitionist mathematics being a better model for the sort of indeterminacy we actually see in physics, that, at first glance, would seem to flow well with local becoming.
  • Which theory of time is the most evidence-based?


    , "many philosophers have argued that relativity implies eternalism. Philosopher of science Dean Rickles says that, "the consensus among philosophers seems to be that special and general relativity are incompatible with presentism." Christian Wüthrich argues that supporters of presentism can salvage absolute simultaneity only if they reject either empiricism or relativity."

    I think that is vastly overstating the case. Often, a Newtonian version of presentism is hauled out as a strawman to make this case, but there is nothing in modern physics that precludes local becoming.

    Richard T. W. Arthur has a great book on this topic called "The Reality of Time Flow: Local Becoming in Modern Physics." It does a very good job explaining how the debate is largely grounded in philosophy, not "science."

    The preference for eternalism itself is partly the result of historical accident and positive feedback loops. Many physicists became convinced of eternalism based on philosophical arguments. Some of these were not good; particularly, bad interpretations of the Twin Paradox were quite influential. This led to many physicists teaching and repeating these arguments in their books. For instance, Paul Davies, who is a favorite of mine, nonetheless uses one of the deeply flawed versions of the Twin Paradox to argue that eternalism "is what science says must be true" in one of his books. Then, because physicists wrote this sort of thing in books, and analytic philosophy has had a tendency to defer to the statements of scientists, you get philosophers pointing to arguments grounded in philosophy that have been repeated by scientists and saying "see, this is what the scientists say, it's not your role to disagree."

    Russell's push for eternalism, which is tied up in his whole agenda, was another influential thread. But his arguments on the elimination of cause have essentially been rejected, even by those who consider themselves "neo-Russellians" and want to salvage some of his insights. And yet arguments from that period still tend to haunt physics.

    I am not even sure how this is a question that could be empirically resolved.

    At any rate, it does seem to affect how science is done. For example, because eternalism is popular, there is this tendency to think that physics MUST be time reversible, since that has become an argument for the position. But currently, it doesn't seem to be. The discovery of the Higgs boson overshadowed a major breakthrough showing time asymmetry back in the 2010s. And of course, physics isn't at all reversible at macro scales. Nor are decoherence and collapse reversible. So, eternalism, at least of the variety that relies on arguments from "the time symmetry of physics," also seems to require picking specific theories in quantum foundations and ruling out others, even though they all produce empirically identical results. If collapse actually happens, it appears to define the directionality of time in one of the most profound ways imaginable.
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff


    If you want to say "nouns are a human invention," that seems like fair game. But there has to be some sort of explanation of their usefulness and development across disparate, isolated societies.

    Will we say that the world consists of objects, and we just give them names? Or will we say that the names are arbitrary, we just invent them?

    These are good questions. I would say they lie at the center of modern philosophy with its focus on the "escape from subjectivism," or contemporary philosophy with its focus on "the escape from the box of language" (or similarly, the widespread attempt to redefine everything as "pseudo problems only arising from the misuse of language"). And they lie at the center of the ancient problem of the One and the Many, which often just returns in alternate garb throughout the history of philosophy.

    I'm totally willing to accept that a dilemma between the two options is "false," that the two are not mutually exclusive. But just stating the trivial fact that "numbers are something humans use," or "words are things we say," as if this pivot to activity makes the explanation an unanalyzable primitive strikes me as essentially a non-explanation. Swimming is something people do, and it's useful, etc. I don't think an explanation of it that leaves out water and solely focuses on the fact that it's an "activity" that "works" amounts to much.

    You can just as easily turn all of truth into another pseudo problem, something that is merely defined by a game that "works"—something that both defies and needs no metaphysical explanation. But when we reach a point where Goodness, Truth, our words, and now even our own consciousness itself have all been "eliminated" or "deflated" so as to avoid pseudo problems, things start to look a lot like Protagoras (or at least Plato's caricature of him). If it's games and feelings of usefulness all the way down, no one can ever be wrong about anything.
  • A poll regarding opinions of evolution



    Since any putative "director" logically must exist outside the system to be directed, and thus beyond our capacity to detect it, I think the more relevant question is as to whether we have any good reason to think evolution is directed.

    Generally speaking, the classical/scholastic view would be that God is both "inside" and "outside" the system. God is not a participant in being, something that sits alongside finite being and would tinker with it from the outside. You can't have a Porphyrian Tree where God's infinite being sits beside created being; there is no univocity of being. Deus est Ens, God is Being Itself. I believe it's St. Thomas Aquinas and St. Bonaventure who first start turning to this explicit formulation, but you can see it quite clearly in Patristic commentaries on Exodus 3:14.

    But given analogia entis, or anything like it (so Orthodox thought as well), it doesn't make any sense to talk about specifically "observing" the actions of God in "directing" anything in the way we would observe a pesticide causing cancer or one ball moving another. For that to make sense you need the Reformation shift to the univocity of being and a hard distinction between Providence and Nature; the sort of distinction that gets you to Hume's definition of miracles, where a miracle has to be some sort of violation of "the laws of nature."


    This is why Calvin would go on to have such a problem digesting Augustine. How can a person have any sort of freedom without constraining divine sovereignty if God sits over here and man over there? Here, Augustine's "God is closer to me than my most inmost self," degenerates into a mere metaphor, rather than being a sort of metaphysical statement.
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff


    So where would causation fit here? I don't see that it does

    Presumably, humans in disparate environments did not have the idea of numbers spring into their minds out of the aether uncaused. Likewise, animals presumably did not develop some rudimentary mathematical reasoning "for no reason at all."

    Consider for instance the term “carcinization.” The term refers to a specific type of convergent evolution, whereby many different species come to evolve “crab-like” traits. The word carcinization itself has a history situated in human social practice. The way it is pronounced or spelled is, to some degree, arbitrary. However, the term also relates to the world. It calls forth a feature of the world, a property of several types of species, namely that diverse species have evolved a similar body shape, a tough outer shell, etc.

    This term developed as part of the “human conversation,” through discussions that spanned lecture halls, laboratories, and journal articles. People, wanting to know the truth of the world, took a very close look at the types of animals they found in it. Discovering things about DNA, gene sequencing, and natural selection allowed them to discover new things about these animals, namely that, despite sharing many traits, they evolved from diverse backgrounds. The term is bound up in social practice and history, but it relates to “how things are,” outside of human social practice and history.

    So, IMO, part of explaining why we developed and use the term carcinization has to involve the actual process of carcinization in nature, which predates humanity by millions of years. If carcinization hadn't occurred (the counterfactual), we wouldn't have a word for it. This implies that the natural process itself is in some way involved in causing the development of the term.

    Now, if you want to say "numbers are a human invention," that seems like fair game. But there has to be some sort of explanation of their usefulness and development across disparate, isolated societies. Pointing to mathematics as a social practice and then denying the meaningful explanations can be given for why they are a social practice just seems like a non-answer.



    I don't think we are any more justified in saying this than we are in saying the world is full of distinct objects. All we have is signal processing. Is the source one signal? Two? Two trillion? How can you tell when you're receiving and analysing them all at once? It makes a difference in your metaphysics, but in nothing else at all that I can see.

    I don't think metaphysics is all that separable from the rest of our attempts to know the world. The "anti-metaphysical" movement simply enshrined a very particular sort of metaphysics as the prevailing dogma for a time. But this dogma has implications for areas outside metaphysics.

    For one example, we might consider the effort to stop any work on quantum foundations up until the late 1990s (Becker's "What Is Real?" is a great book on this period). People were hounded out of their field for pursuing lines of inquiry that later provided the foundation for some Nobel Prize winning work in physics because it challenged the (supposedly non-existent) metaphysical orthodoxy.

    Likewise, after more than a century, the basics of chemistry have still not been reduced to physics. In turn, a number of people have argued that molecular structure is an example of strong emergence. However, one of the most compelling arguments against this makes a very interesting turn. It claims that chemistry can, in theory, be explained entirely by physics, but that it cannot be reduced to atoms, protons, electrons, etc. alone, i.e., "molecules' 'constituents'." Rather, the environment, the myriad interactions between atoms and the rather active "void," universal fields, is essential for explaining molecular structure (as opposed to just the particles conventionally thought to define a molecule). This is interesting because it denies the position that molecules just are the atoms that make them up, that H2O would have the same properties in any possible world, etc. Rather, the whole is defined by its context.

    At the very least, this would seem to cut against conventional supervenience physicalism where things are the sum of their fundamental particles, but I think it also fits in better with a process view that avoids the need for strong emergence.

    Either way, it's clear that metaphysics is going to play a role in these discussions. If our background assumption is that things simply are what they are made of, this sort of solution to a problem is going to be a lot less obvious. If water just is H2O, you're never going to look beyond interactions between hydrogen and oxygen to try to determine its internal structure.

    Thus, my line would be that the "anti-metaphysical" stance simply allows calcified metaphysical assumptions to go unchallenged and unnoticed, even though these will invariably determine how we approach problems. Which in turn makes metaphysics relevant.

    I don't have time to respond to your other post, but I will agree that the computational view seems to get something very important right. However, the emergence of first person subjective experience, and an explanation of how decisions made as part of that experience can affect our actions, would seem to require some sort of paradigm shift here, something akin to Einstein's revision of space and time.

    Currently, this strictly mechanistic computational view would seem to preclude the idea that our subjective experiences ever have anything to do with our behavior (i.e., causal closure). E.g., we can never eat certain foods "because they taste good," etc. Aside from the major plausibility issue here, this would also suggest that characteristics of subjective experience can never be something that natural selection directly selects on (since behavior is never determined by subjective experience). This simply seems implausible given how many good evolutionary explanations of subjective experience there are.
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff


    That it’s the kind of thing a Parmenides would say?

    Precisely, although arguments of this sort, made from contemporary physics, tend to also have a bit of Heraclitus too. Even "fundamental particles" are not truly "fundamental," having both beginnings and ends; particles are "shadows on the cave wall." Everything must be, in some sense, "One," since everything interacts with everything else and energy, information, and cause move across all "discrete" boundaries effortlessly. However, everything is also always changing. The One is a field of fields, a single continuous process.

    I think the Problem of the Many and the One is still central to contemporary debates about direct vs indirect realism, the existence of logic, Logos, and number "out there," etc. However, shifts in the way we talk about this obfuscate the close connection.

    The modern Problem of the Many seems to me to just be a sub-problem of this general problem. This is the problem of delineating discrete entities. E.g., a cloud is a collection of water droplets. You can draw a line around any ensemble of droplets and say: "this is Cloud A." But you could just as easily draw a line around a slightly different ensemble of droplets and it would be just as much a cloud, although with different but overlapping physical constituents. So do we have one cloud, or perhaps millions all nested on top of one another? The same problem shows up with cats and cars, since these are just "clouds of atoms," or perhaps a better way to put it would be "sub-processes in the universal process proceeding cat-wise and car-wise." Solutions to the Problem of the Many often deny any true part-whole relations, make them "brute facts," or have to settle for a sort of ontological vagueness.



    I don't see any reason to think that we carve up the world arbitrarily, but rather I see many good reasons to think that we are constrained by its actual structures.

    Exactly my thoughts. Although I do think the challenges to the existence of discrete entities (discussed above) are quite serious and might be part of revising metaphysics and epistemology.

    However, if one takes the position that all discrete entities are illusory, and our names for them and their properties "inventions," it seems that it is impossible for us to truly say anything about anything (something Parmenides gets at). But, there is a good argument to be made that these discrete things don't exist "outside minds," even if it is the case that minds do not create these identities ex nihilo or at all arbitrarily. To my mind, this should call into question the idea that "the view from nowhere/anywhere," should be the gold standard of knowledge. Rather, things most "are what they are," when known.

    Any physical system only manifests a tiny number of its properties across any interval. Properties are the result of interactions, so they are context-dependent. A banana does not "look yellow" if no one looks at it, but properties that involve mind are in no way unique in relying on interaction in this way (and so they are not "less real" on this account, as Locke would have it). A banana peel also does not reflect light of the wavelengths corresponding to "yellow" in the dark. Salt doesn't dissolve in water without being placed in water. The only epistemically accessible properties are interactions, and any thing only interacts in one context over any given interval. It's only in the knowing mind that all of a thing's properties across disparate contexts are "present" (phenomenologically) at once. This makes the relation of "being known" a special sort, one where things most "are what they are," rather than it being a sort of "less real" relationship.



    The real problem I see with saying that universals are mind-independently existent or real is that no one has the foggiest notion of what kind of reality or existence they could enjoy.

    Well this is the big problem with universals. They are hard to understand, and this has led to them often being explained as simply existing in a sort of "magical" realm outside space and time. This is often how Plato gets simplified, whereas Hegel's argument re universals (which I see as a sort of completion of Plato's) just gets passed over because the Logic is a bear. Universals are always going to seem implausible if they are sitting off to the side of being as a sort of magical counterpart to it. Here, there is a real tendency to mistake Plato's "images" for his myths.

    The Platonist and Hegelian arguments re universals and vertical reality are about necessity, not a special spirit realm. A rock is "less real" than triangularity in the sense that a rock is largely a bundle of external causes.

    And whereas I have never seen anyone manage to condense Hegel's view into a "soundbite," I think that Robert M. Wallace does a decent job at getting at the core of Plato's insight re self-determination and vertical reality:

    By calling what we experience with our senses less real than the Forms, Plato is not saying that what we experience with our senses is simply illusion. The “reality” that the Forms have more of is not simply their not being illusions. If that’s not what their extra reality is, what is it? The easiest place to see how one could suppose that something that isn’t an illusion, is nevertheless less real than something else, is in our experience of ourselves.

    In Republic book iv, Plato’s examination of the different "parts of the soul” leads him to the conclusion that only the rational part can integrate the soul into one, and thus make it truly “just.” Here is his description of the effect of a person’s being governed by his rational part, and therefore “just”:

    Justice . . . is concerned with what is truly himself and his own. . . . [The person who is just] binds together [his] parts . . . and from having been many things he becomes entirely one, moderate, and harmonious. Only then does he act. (Republic 443d-e)

    Our interest here (I’ll discuss the “justice” issue later) is that by “binding together his parts” and “becoming entirely one,” this person is “truly himself.” That is, as I put it in earlier chapters, a person who is governed by his rational part is real not merely as a collection of various ingredients or “parts,” but as himself. A person who acts purely out of appetite, without any examination of whether that appetite is for something that will actually be “good,” is enacting his appetite, rather than anything that can appropriately be called “himself.” Likewise for a person who acts purely out of anger, without examining whether the anger is justified by what’s genuinely good. Whereas a person who thinks about these issues before acting “becomes entirely one” and acts, therefore, in a way that expresses something that can appropriately be called “himself.”

    In this way, rational self-governance brings into being an additional kind of reality, which we might describe as more fully real than what was there before, because it integrates those parts in a way that the parts themselves are not integrated. A person who acts “as one,” is more real as himself than a person who merely enacts some part or parts of himself. He is present and functioning as himself, rather than just as a collection of ingredients or inputs.

    We all from time to time experience periods of distraction, absence of mind, or depression, in which we aren’t fully present as ourselves. Considering these periods from a vantage point at which we are fully present and functioning as ourselves, we can see what Plato means by saying that some non-illusory things are more real than other non-illusory things. There are times when we ourselves are more real as ourselves than we are at other times.

    Indeed, we can see nature as a whole as illustrating this issue of how fully integrated and “real as itself” a being can be. Plants are more integrated than rocks, in that they’re able to process nutrients and reproduce themselves, and thus they’re less at the mercy of their environment. So we could say that plants are more effectively focused on being themselves than rocks are, and in that sense they’re more real as themselves. Rocks may be less vulnerable than plants are, but what’s the use of invulnerability if what’s invulnerable isn’t you?

    Animals, in turn, are more integrated than plants are, in that animals’ senses allow them to learn about their environment and navigate through it in ways that plants can’t. So animals are still more effectively focused on being themselves than plants are, and thus more real as themselves.

    Humans, in turn, can be more effectively focused on being themselves than many animals are, insofar as humans can determine for themselves what’s good, rather than having this be determined for them by their genetic heritage and their environment. Nutrition and reproduction, motility and sensation, and a thinking pursuit of the Good each bring into being a more intensive reality as oneself than is present without them.

    Now, what all of this has to do with the Forms and their supposedly greater reality than our sense experience is that it’s by virtue of its pursuit of knowledge of what’s really good, that the rational part of the soul distinguishes itself from the soul’s appetites and anger and so forth. The Form of the Good is the embodiment of what’s really good. So pursuing knowledge of the Form of the Good is what enables the rational part of the soul to govern us, and thus makes us fully present, fully real, as ourselves. In this way, the Form of the Good is a precondition of our being fully real, as ourselves.

    But presumably something that’s a precondition of our being fully real must be at least as real as we are when we are fully real. It’s at least as real as we are, because we can’t deny its reality without denying our own functioning as creatures who are guided by it or are trying to be guided by it. And since it’s at least as real as we are, it’s more (fully) real than the material things that aren’t guided by it and thus aren’t real as themselves.

    Whereas the Logic gets into the issue of necessity even for those things that are not self-determining in the way that men can be.
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff


    number would be a real attribute of objects

    How does this square with the following?

    but the numbers themselves would only be real as ways of thinking and dealing with objects, and also as elements in formalized systems of rules elaborated upon that basis.

    Do you mean numbers as abstracted from any particular instantiation of them?

    What do you think of the claim that discrete entities only exist as a product of minds? That is, "physics shows us a world that is just a single continuous process, with no truly isolated systems, where everything interacts with everything else, and so discrete things like apples, cars, etc. would exist solely as 'products of the mind/social practices.'"
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff


    :up:

    I think if you're going to argue for something more along the lines of "mathematics is invented/arbitrary," a compelling argument at least needs a good explanation of why such a practice arises, is so incredibly useful, and seems so certain. By way of analogy, swimming is also something we "do," but any decent explanation of how swimming works is going to involve, at the very least, mentioning water. Certainly, you don't need an in-depth explanation of "how swimming works," to swim, but swimming itself, or the fact that it is an activity, is not an explanation of swimming.

    But since mathematics underpins all of science, it's obviously going to be an area of intense curiosity, which is why...



    But leaving that to one side, isn't it enough that we want to share the six fruit equally amongst the three of us, to explain the need for counting?

    I would say no. Knowing what mathematics is seems like one of the biggest philosophical questions out there. Not to mention that a number of major breakthroughs in mathematics have been made while focusing on foundations, so it hardly seems like a useless question to answer either.

    As for causes, in this case I don't think you can do without them. If you want to say that "three" is something "stipulated" in the way that the concept of private property is, the obvious next question is: "ok, conceptions of property vary radically across time and space. Numbers do not, and they are stipulated the same way across cultures, including those that have been isolated from one another. Moreover, we have a very easy time imagining worlds where private property, marriage, etc. do not exist, but people have long thought it incoherent to be asked to imagine a world where basic arithmetic works differently, where two and two make five. Why this huge difference?"
  • Christianity - an influence for good?
    Just an FYI, because popular understanding of this issue is often cloudy at best: indulgences go back to the ancient Church and still exist today (e.g., there was one for people who couldn't make it to confession because of the pandemic). They go back in the Orthodox tradition as well, although they haven't existed in anything like the current Catholic version since the 1200s or so. The general broad brush stroke picture of the abuse of indulgences gets at the core issue, but often in a fairly misleading way. The theology surrounding the Sacrament of Penance and the Treasury of Merit is complex. Indulgences are for temporal punishment, not eternal judgement, and the times associated with them align to periods of earthly penance, not "time off" in Purgatory. For the most part, indulgences didn't involve money, but instead acts of penitence, pilgrimage, etc. These were personal sacrifices and spiritual exercises that were supposed to aid in bringing a person back into communion with God and God's church following a breach. A main benefit of the indulgence is that it allowed priests more flexibility surrounding canonical penance (which was generally quite strict; think eating nothing, wearing sackcloth and ashes).

    The idea of indulgences being sold for money gets at the basic root of the controversy, although the issue was more about an inability/lack of political will to control the practice (also people wanting indulgences for all sorts of non-financial acts—a focus on "official recognition," essentially). There was never a theological position, embraced as doctrine, that you could "buy your way out of punishment after death."

    Rather, there was a widespread abuse of indulgences such that they were essentially being sold by people taking advantage of their proliferation outside of financial contexts, including many people with no standing in the Church who were impersonating clergy to work as professional "pardoners." This also gets at the extremely fractured jurisdiction within the church (and the temporal authorities) in this era.

    What changed in the early 1500s was that the Pope forbade almsgiving or any other sort of financial donation from being associated with indulgences because of the wide room for abuse; it was not a shift in theology abolishing indulgences. However, there was also a theological move to get people to stop focusing on the indulgence itself, and instead on the intended spiritual/psychological purpose of penance. It probably helped that literacy boomed in this period with the advent of the printing press, which in turn dispelled a lot of the mystery around written certificates.

    A lot of stuff on indulgences gets conflated with the controversies surrounding chantries, which were a significantly larger social force. These were foundations, endowed by nobles, guilds, etc., that largely focused on masses for the dead (presumably in Purgatory). This isn't all they did, but the criticisms during the Reformation largely focused on their role in saying masses for the dead or the idea of the mass as a "work" in general.

    Anyhow, it was very much a bottom-up phenomenon that was poorly managed due to perverse incentives, rather than an officially sanctioned practice or doctrine.

    That all said, the Church has made major shifts in doctrine in plenty of other places. Utraquism would be a key example. The idea of the laity partaking in both the flesh and the blood of Christ (as opposed to just the host/body) was enough to motivate violent struggle; whereas post-Vatican II both are frequently given (it is at the discretion of the bishop IIRC). Clerical celibacy was originally required largely only of bishops. It became mandatory for all clergy largely to solve the issue of powerful nobles essentially bequeathing bishoprics as fiefdoms, a particularly pernicious sort of integralism.
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff


    "Numbers are something we do," suggests the question: "why are numbers something we (and animals) do?" All activities have causes, right?

    IMO, attempting to answer that question is going to bring us back to questions about the nature of numbers, their ontic status, the "presence" of mathematics in nature, etc.

    It's the same thing with words and meaning. We can say words and their meaning are part of social practices, but there remain the questions: "why are social practices what they are? Why do they evolve the way they do?" etc.

    I don't see how an account that is social practice or activity "all the way down" is going to work.
  • Is a Successful No-Growth Economic Plan even possible?


    Economic growth doesn't just come from population growth or increased resource consumption. Increased human capital (e.g., education), new technologies, improved efficiency, etc. can all lead to GDP growth even when the population and the total consumption of natural resources are decreasing.

    Just for an example, consider vehicles. Older commuter cars get worse gas mileage than modern trucks. You're able to use half as much gasoline, sometimes significantly less than that with hybrids, to travel the same distance in similarly sized cars. And newer cars are vastly more reliable, generally traveling close to twice as many miles before they have to be junked. Likewise, lightbulbs today last far longer and use much less electricity. Emissions for the wealthiest countries are able to trend down because new technologies are significantly more efficient, requiring less electricity to be generated.

    So, our goal shouldn't be reduced growth per se. Something more specific, like reduced pollution, reduced natural resource consumption, increased land set aside from cultivation, etc. would be a better goal. Overconsumption is certainly an issue, but you can reduce different sorts of problematic consumption without necessarily reducing the total value of goods and services produced in an economy (not easily maybe, but it's theoretically feasible).

    That makes it more of a question of trading off lower (perhaps at times negative) growth versus longer term benefits. Unfortunately, our institutions are not well geared for this sort of thing. Electoral politics in general seem to make it hard to do long term planning. For all of democracy's many benefits, I think this is actually one place where it makes solving a significant—perhaps existential—problem significantly harder. "Tighten your belt so people 80 years from now can live better," is simply not an appealing slogan for many voters. Likewise, the heavy focus on individual property rights in modern liberal democracies makes shifting patterns of consumption very difficult.

    You're mostly working with incentives to help nudge people away from externalities (e.g. a carbon tax) because an individual's right to consume as much as they can afford to consume is sacrosanct. At the same time, the fact that the poor tend to spend a much higher percentage of their incomes means that the burden of these incentives will tend to fall on those with lower incomes. Granted, you can redirect the revenues back to the low income if there is political buy-in, but mass migration seems to be eroding the support for welfare states across the West because the beneficiaries of redistribution are not seen to be "truly" a part of the society yet.

    The other thing is that central banks target moderate inflation for a reason. Even if there were no real GDP growth you would still want predictable 1-3% inflation. Why? Because if there is no inflation there is no reason for people to invest or take any risks, and this dries up access to credit. This is exactly why deflation is kryptonite for economies. If savers aren't putting their money into the credit market then there aren't loans to start new businesses or invest in new technologies (which can in turn actually lead to higher resource consumption). People, firms, and governments that have already taken out debt for investments now have to pay back their loans in currency that is worth more than it was when the loan was made. Basically, the expected real interest rate on a loan is the nominal rate minus expected inflation. If inflation goes negative, it has the same effect as a (surprise) jump in interest rates. This makes defaults more likely and makes it harder for anyone to pay back loans.
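
    To put that last relation in symbols (this is just the textbook Fisher approximation, nothing specific to this thread; r is the real rate, i the nominal rate, and \pi^{e} expected inflation):

    r \approx i - \pi^{e}

    So a loan written at i = 5% when expected inflation is 2% carries a real rate of about 3%. If prices instead fall 2% (\pi = -2%), the realized real rate is roughly 5 - (-2) = 7%, the same burden on the borrower as a four-point surprise rate hike.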

    And, if your economy is growing (again, this might occur even with shrinking resource consumption, due to innovation) and your money supply is static, you now have the same number of dollars/euros/etc. chasing a larger number of goods. This leads to deflation, which then is essentially an interest rate hike. Deflation will also tend to benefit those with savings or those who are owed debt payments and hurt those with debt, so it tends to be a sort of regressive transfer of wealth (although a lot of wealthy people have a lot of debt too). It's just bad news, which is why central banks generally just want inflation low and, most of all, stable. In theory, even 10% inflation would be manageable if everyone knew it would stay 10%. It's the uncertainty that leads to prices and wages decoupling and loans being issued on bad terms.
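
    A minimal sketch of that "same dollars chasing more goods" point, using the textbook quantity-of-money identity (and assuming, purely for illustration, that the velocity of money stays constant):

    M V = P Q \quad\Longrightarrow\quad P = \frac{M V}{Q}

    Here M is the money supply, V its velocity, P the price level, and Q real output. With M and V held fixed, any growth in Q forces P down: if real output grows 3% while the money supply is static, prices must fall by roughly 3%, which is the deflation, and hence the effective rate hike, described above.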
  • Beautiful Things
    Which makes me think, who would you draft for your five man starting lineup for your philosophy super team?

    Nietzsche has the will to power, and would be a great, aggressive scorer. But he's also sort of like a Kyrie Irving, a bit of a loose cannon who is going to pick up a lot of ad hominem fouls and might not play the best defense.

    Socrates is clearly going to be the best guy up in the paint, engaging in close-range dialectic, but you also need shooters who can drop way back into the backcourt of abstraction and sink threes like Hegel.

    Then you have Aristotle and Aquinas. They don't have the flashy prose dribbling of a Nietzsche or an Augustine. However, they are good all around, playing great D, slowing the offense down with definitions. They're not going to be exciting like a Nate Robinson, more like an Al Horford, picking up rebounds, crashing the offensive glass—but that's what wins games in the end!

    You also need to consider verticality. If you don't have players like Plato and Plotinus who can ascend, you're going to get killed in the paint and on the boards.
  • Beautiful Things
    A beautiful image of philosophy as transcendence and ascent. Reminds me of Dante getting to talk to all the Pagan philosophers in the early Cantos.

    [image]


    A lot of star power in this picture, like an NBA superteam of thinkers!
  • Moral Subjectism Is Internally Inconsistent


    I'm not really sure what you're trying to say here. Maybe an example would help:

    I go for a job interview. For whatever reason, I am confident that I am going to get the job. As a result, I am very relaxed and personable, and this in turn is what helps me beat out another candidate. But suppose that if I thought I was unlikely to get the job I would have been much more nervous and flubbed the interview, in which case I wouldn't have gotten the job.

    In this case, my belief that I would get the job is not independent of my getting the job. It is a determining factor.

    These sorts of situations come up all the time. I am not saying that the truth values of all propositions are dependent on beliefs about the truth values of those propositions. However, when it comes to propositions involving human behavior it seems like it will often be the case that beliefs about propositions will not be independent of the truth value of those propositions. Many events happen precisely because people believe they will happen.

    Arms races would be a good example vis-à-vis aggregate behavior. For example, say the Soviets don't think the US will slow down their production of nuclear weapons. Then because the Soviets have this belief, they don't slow down their own production. Yet this decision in turn ensures that the US doesn't slow down either (self-fulfilling prophecy). But in this example, it is not the case that the truth value of "the US will not slow down weapons production" is independent of the Soviet belief about this proposition.
  • Moral Subjectism Is Internally Inconsistent

    I think you are on the right track. Subjectivism tends to entail problems with dualism akin to Kant's two "stances/worlds" (noumenal/phenomenal). Yet, so long as it is not possible to give a reductive/mechanical explanation of subjectivity, I think this problem will remain. I don't think most "subjectivists" would like to say that moral beliefs are essentially uncaused, but neither does it seem that they are willing to embrace eliminativism.

    So, on this point:

    One's belief in what one 'ought' to do is true in virtue of the fact that one believes it. This does, as Lionino points out, make it entirely arbitrary.

    Only provided that the reasons determining why people hold the moral beliefs they do are themselves "entirely arbitrary." I think many moral anti-realists would probably disagree with this though, particularly those of a naturalist persuasion. The problem wouldn't be that these beliefs are arbitrary, but rather that they are determined by a biology, social and personal history, etc. that can be completely explained without any reference to "goodness," e.g., for the eliminativist/epiphenomenalist, an explanation entirely in mechanical terms.

    However, folks like Harris have turned these highly naturalistic/mechanistic accounts into moral realist accounts without changing too much, so I think this is an issue people will still quibble over. Likewise, in the classical or Thomistic viewpoint, it's going to be goodness itself that is determining beliefs and actions in the first place, and so what is at issue is the ontic status of goodness, i.e., realism re universals, the convertibility of being and goodness, etc.

    I think you've both highlighted the initial problem though, which is P1 here. It seems entirely possible that a belief could be related to the truth value of some proposition. This is exactly what we see with cognitive dissonance or "self-fulfilling prophecies." For example, Toyotas might last longer because they are more durable vehicles, but part of the reason they tend to last longer almost certainly has to do with the fact that people are more willing to shell out cash to repair them because they see a high-mileage Toyota as still having "plenty of life left" (and because they have a higher resale value because people believe this). But then the car stays on the road longer, making the belief true, precisely because the belief was held.

    When it comes to the sort of self-reference at work in the OP though, this problem seems particularly acute. So, it seems that the truth value of a proposition can be more or less independent of beliefs about it; in some cases, it seems there will be quite a bit of interdependence.
  • It’s Bizarre That These People Are Still Alive


    Iggy Pop is the one that gets me. I saw him when he was probably like 65 or so and he still put on a hell of a show.
  • We don't know anything objectively


    It might be helpful if you shared what your definition of "objective" is. The term is used in very many ways. I think I would be inclined to agree with you based on many definitions of "objective," since they reveal themselves to pretty much rule out objective knowledge as a possibility by definition.

    But in the sense that the concept is more generally applied in philosophy today, there can clearly be objective knowledge. E.g., "the US Declaration of Independence was signed in 1776," or "the Mets won the 1986 World Series," the correct spelling of English words, or even facts about attitudes such as: "Americans, on average, have less positive feelings towards stay-at-home-dads than stay-at-home-moms." These are "objective" in the sense that their truth does not rely on any one person's subjective experiences, and moreover they are facts readily accessible to all members of a community, without any particular bias associated with a single/group perspective.

    In terms of "objectivity" in the media, we would say a claim like "the Boston Celtics just knocked the Miami Heat out of the playoffs, winning their series 4 games to 1," is objective. It states a simple fact. Whereas something like "Boston didn't really deserve to win that series. Tatum and Brown don't have the heart to lead a championship team, and if Jimmy Butler was healthy they would not have won. The Heat have much more spirit," would be more subjective, since it deals with personal preference, claims about the likelihood of events that appear to be influenced by subjective preferences, etc. Objective/subjective is generally not thought to be bivalent the way truth is; a statement can be more or less objective. The statement that "all else equal, the Heat would probably have done better if Jimmy Butler was healthy," isn't necessarily true. Sometimes a team's bench outperforms its starters in a series. However, it's fairly objective that having significantly higher-performing players on the court tends to mean you are more likely to win games.

    Where "objectivity," becomes impossible, and where it seems like you might be coming from, is in a view like Locke's. For Locke, "objective" properties are properties that "objects have themselves." It's things that are true without reference to subjectivity, (which in some versions excludes any objective facts involving culture). Objective knowledge is then knowledge of what things are like without any reference to a knower or even any perspective—a "view from nowhere /anywhere." For instance, for Locke, "extension in space," would be a "primary quality," that exists in objects, whereas color would be "secondary," because color only exists for some observer seeing color.

    There are two main problems with the Lockean version, which result in such "objective" properties being epistemically inaccessible. First, there is the problem pointed out by Kant. The mind shapes how we experience everything, and so, like you say, it seems impossible for any of our knowledge of things to be "objective" in this sense, which in this context would seem to require "conceiving of things the way one would without a mind." "Objective" here becomes equivalent to the "noumenal," which is, IMO, very unhelpful.

    Why? Because we already have a word for noumenal, whereas "objective" is used in many other contexts. Plus, it's more obvious, thanks to Kant's work, that we shouldn't take "noumenal" to be equivalent with "true." Science is systematic knowledge (justified true belief) vis-à-vis the phenomenal world, and it is objective in the first sense I mentioned. But thanks to the legacy of positivism, there is still a widespread sense that objectivity is equivalent with truth at the limit (more objective = more true), which leads to all sorts of bad conflations when "objective" comes to stand in for "noumenal" and "true."

    The second problem is already identified in ancient philosophy and Thomism, but also in Hegel and later process philosophers. Objective knowledge, if we adopt the Lockean sense of the term, turns out not to be the "gold standard" of knowledge. Rather, it is impossible for reasons aside from those Kant mentioned, and it is essentially useless knowledge that could never tell us anything about our world, even if we had it.

    Why? Because objects only reveal their properties through their interactions, either with other things/processes or through interactions with parts of themselves. It's true that nothing "looks green" without a seer. But it's also equally true that nothing "reflects green wavelengths of light" without light waves bouncing off its surface. That is, the physics and metaphysics of interactions that don't involve minds have all the same problems as those that do; neither ends up being "objective." Nothing reflects any color of light wave if it is in an environment without light waves. Salt only dissolves in water when it is placed in water. "In themselves," properties that involve no interaction are:

    A. Forever epistemically inaccessible.
    B. Unable to make any difference in how our world is.

    So knowledge of them would always be sterile and would never tell us anything about the world. It's a useless sort of knowledge, since a thing/property that interacts with nothing else might as well be its own sui generis type of being that doesn't interact with ours. The existence or non-existence of such properties is always and forever indiscernible for all possible observers (barring some supernatural sort of knowledge). Such "in-themselves" properties only show up in philosophy as bare posits (e.g. substratum theories in metaphysics, the pure haecceity of things).

    Anyhow, given these two problems, I would question the usefulness of defining objectivity in this way. In particular, the way gradations of "objective/subjective" are used in media analysis seems to get at something important; yet this distinction gets flattened out in the Lockean version of objectivity. Further, declaring that all knowledge is subjective, given such a definition of objective, ends up just being trivial. If objective knowledge is knowledge without reference to a mind, then it follows that no knowledge could ever be objective. But in turn, it makes no sense to have a dichotomy where one side is empty and the label "subjective" applies equally to everything. It's just like how it doesn't make any sense to have a "reality/appearance" distinction if everything is always appearance. If there is only appearance then appearance is simply reality.

    The Oxford "A Very Short Introduction to Objectivity" is really great on this topic (and very short lol).

    (Lastly, I will just note that the more common form of objectivity I mentioned still has some serious problems. It often tends to hold to Hume's guillotine—that there can be no facts about "oughts" that are objective. I would just say here that this requires certain metaphysical assumptions to be true, and I don't think those assumptions are at all obviously the case.

    The other issue is that people will still like to declare that any fact involving culture or historicity must be "subjective." I don't think this makes sense either. The rules of chess or the way words are spelled are "objective" in an important sense. These declarations seem most often to be motivated by a desire to somehow maintain moral nihilism without epistemic nihilism. I don't think these attempts are generally helpful; most arguments for moral nihilism are also arguments for epistemic nihilism. People want one without the other, but I don't think they are easily separable. And, if defenses of moral realism are often charged with "being motivated by emotion," this seems to be at least as much the case for moral nihilism. "Nothing you do is ever wrong and any guilt you feel is ultimately misplaced," is prima facie preferable in many ways to "you have to be good or else you will suffer evil as its own sort of punishment.")
  • Is Nihilism associated with depression?



    I understand the case to be exactly the opposite to this - we in fact can quite literally simulate chemistry using nothing but quantum mechanics. Chemistry is one of the most explicitly reducible things there are.

    No, far from it. Even the biggest advocates for "reducibility in theory" wouldn't claim it has been reduced. There are all sorts of ad hoc workarounds in quantum chemistry, and you can't derive the periods of the periodic table from physics, etc.

    https://plato.stanford.edu/entries/chemistry/
  • A poll regarding opinions of evolution


    I wasn't thinking of deism at all. I was thinking of the understanding of the relationship between God, Providence, and nature in ancient and medieval Christianity and Judaism.
  • Is Nihilism associated with depression?


    The entire concept of "strong emergence" only makes sense in a metaphysics where things are the sum total of their parts though. But that's the very idea that doesn't go along with pancomputationalism or process metaphysics more broadly—there is no need for "strong emergence" to explain the sort of phenomena strong emergence is normally brought in to explain. The concept itself requires that you already accept some other metaphysical presuppositions, namely that things just are their constituent parts (and so their parts must act differently for them to act differently).

    You might also consider Hendry's and Primas's contentions about molecular structure being strongly emergent. At the very least, a century on, chemistry certainly has not been reduced and seems very unlikely to be in the medium term. But if reduction has failed for a century straight, such that the basics of chemistry, and even some aspects of physics itself, are given as examples of "strong emergence," I am not sure how that is supposed to denote strong evidence that reductionism is true. The big responses I've seen to these claims re: molecular structure rely on the environment interacting with molecules to fix their structure. Yet even if this solution ends up working out, it paints a picture of a reduction where things' properties are not reducible to their constituent fundamental parts. Rather, things' relations to external entities remain essential to what they are and to explaining what they do — their properties do not inhere in their constituents.

    Likewise, emergent fusion in entanglement neither fits with definitions of "strong emergence" nor with the view that all phenomena are totally explainable in terms of their discrete parts. This in turn calls into question the entire substance metaphysics/supervenience-based framework, which is partly why you get a shift away from supposing that those sorts of models should be "assumed true until proven otherwise." In general, I think it remains the "default" sort of view only because no one paradigm yet exists to replace it, and it's considered "good enough for the laity." But it has considerable consequences for how people see the world.
  • A poll regarding opinions of evolution


    How is it a "cop out?" It seems to flow naturally from panentheism and the classical understanding of Providence. The idea that nature itself is a theophany, organized in accordance with Providence according to the Divine Will and that nothing happens miraculously "for no reason," is pretty much the standard in the classical/medieval tradition. Evolution seems to fit in there fine, except for the time table, but even among the Patristics there is plenty of disagreement about the plausibility of a strict 144 hour interpretation of creation.

    Only in the post-Reformation world, where nature is essentially a distinct, subsistent entity and God is no longer being itself, does it make sense to talk about the creation of man as a sort of Humean miracle where God acts in creation in a sui generis manner that is distinct from God's acts in nature. In such a view, God is less than fully transcendent and becomes an entity that sits outside the world. In this view, God is to some degree defined by what God is not, and indeed is defined in terms of finitude (Hegel's bad infinite), and this also causes follow-on problems for the interaction of freedom and Providence.

    Notably, the way the question is framed here implies this distinction: evolution occurred "naturally" or God guided it. I don't think this is a proper distinction for Origen, St. Maximus, St. Thomas, etc. St. Paul's writings frequently invoke the same event twice, with human agents the focus of one telling and God's agency at the center of the other. This makes sense from the frame of a God who is "within everything but contained in nothing" (St. Augustine), but becomes very different if it is taken as a sort of causal relation between a God who lies outside the world acting in response to nature (e.g. in Romans 1 we have people abandoning God for idols and then apparently God gives them over to the idols they have already turned to — God for some reason is making people do what they are already choosing to do, something that crops up in many places). None of the thinkers mentioned thought God formed man out of clay using hands, or that God had a body that walked across the Earth. "Since we are God’s offspring, we should not think that the divine being is like gold or silver or stone" (Acts 17:29).

    This seems pretty essential to the metaphysics, not something ad hoc; it is God "in whom we live and move and have our being" (Acts 17:28, repeated every Mass). And this jibes with interpretations of the two creation stories in Gen 1 and Gen 2, with Gen 1 describing the birth of eidos through the Divine Logos (Object/Ground - Logos) and Gen 2 focusing on the material creation and introduction of spirit (Object/Ground - Interpretant/Spirit).

    The Jewish tradition contains these sorts of conceptions as well. IIRC, Rashi also has it that Gen 1 relates to forms.
  • Is Nihilism associated with depression?


    :up:

    Yes, that's why I tried to clarify with the reference to methodological reductionism and smallism. Like I said, I think some sort of broadly defined "reduction" ends up being essential due to the age-old problem of "the One and the Many." Also because of the very nature of our intellect and finite limits on comprehension—there is a sense in which plurality has to be reduced in order for an explanation to be helpful to our understanding.

    In general, when people attack reductionism, what they seem to focus on is the smallism it generally has packaged with it, and what this entails. E.g. "everything is atoms, atoms lack intentionality and experience, therefore intentionality and experiences never play any causal role in the world (causal closure)."

    I think there are a host of problems with this position, not the least being that the empirical support for this flavor of reduction seems not particularly strong—certainly not strong enough to be assumed true until proven otherwise, which is what advocates often want to presume.
  • Is Nihilism associated with depression?


    IDK, my reading might be biased, but I do read a lot of popular physics. Smallism doesn't always come in for explicit attacks (although it certainly does in several places I can think of), but generally the view of fundamentality laid out isn't consistent with it. Information theory seems to play a fairly large role in physics here. Within that context, not only does process seem more essential, but context is also essential in defining "things." By contrast, I've seen a lot more heartburn in biology over the introduction of information theory into the field, with outright denials that it is useful to speak of "biological information," precisely because it might introduce teleology, perspective, or mind into the mix.

    Thus, a core difference here seems to be comfort with abandoning the "view from nowhere/view from anywhere" in favor of a view where perspective is essential. Because of work in quantum foundations and the influence of information theory, a sort of perspectivism seems to be somewhat widely accepted in physics, if not particularly well defined. Whereas in biology, qualms with "information" arise in large part due to difficulties squaring it with both the "view from nowhere" and ideas tied to smallism and substance metaphysics.

    I'm not sure entirely how to sum up the difference, but one way might be contrasting "things are what they are made of," which tends to present discrete things "in-themselves," with "things are what they do," which tends to bring in external context in defining entities. There is also the difference between "more is different" and the "more is just more that can be arranged differently" that one gets when comparing computational versus "building block" models (e.g. https://www.sciencedirect.com/science/article/pii/S2405471222003106). And still another major difference would be "perspective is something that emerges sui generis to minds and will ultimately not play a role in explaining nature," versus "perspective (and context) is, in a way, essential to all interactions."

    Maybe a helpful parallel might be Hegel's ontology. We could say all being gets contained in a single concept, the Absolute, but this is the most developed concept. Things proceed from lower levels, following on necessity, but this is more of an "ascent" than a reduction, even though it is an explanation that tries to get at "the most general principles" and a sort of fundamentality/necessity. IDK, maybe more confusing than helpful, but I wouldn't consider Hegel a reductionist in any sense, even though he is looking for unity through "the most general."
  • Is Nihilism associated with depression?


    All explanations of the world are going to be ontologically reductive in some ways, because you invariably face the problem of "the One and the Many." There is obviously a plurality of things in the world, most obviously a plurality of minds. However, it's also obvious that everything that exists interacts with everything else. Indeed, if there were some second sort of being that didn't interact with our sort, it would be forever epistemologically cut off from us, and its existing or not existing could make no difference to us.

    So, explanations need to somehow explain the unity of being, and this means there will always be a sort of reductionism in the ontological dimensions. However, they also need to explain the plurality.

    When I said "the physical sciences are less reductionist," I meant that they are far less inclined to think that the ontological reduction can be done by pointing to "basic" building blocks that define all plurality.
