Comments

  • Two ways to philosophise.


    A feeling is generally seen as something that happens to us, whereas an activity is generally seen as something we do. To define feelings as activities is a bit like saying, "Internal things that happen to us without our doing anything are things that we do."

    Well, a person's passions are their passions. They are also something we can have more or less control over, through the cultivation of habits (virtues/vices) and the will's ability to overcome the passions.

    So, I don't think I would locate the passions outside of us, or call them wholly something "we don't do." However, I would at least locate some of them outside the will. For instance, when a man cheats on his wife, even though he wished he hadn't (giving in to an appetite/passion), we say he has suffered from weakness of will, and perhaps even that his act was not fully voluntary. Whereas, when a man doesn't cheat on his wife because he sees this as truly worse, we don't say that he suffers from "weakness of passion."

    The passions are properly ordered to the will and intellect. When they "happen to us," as often is the case, the opposite is happening to some degree. I wouldn't describe this as "coming from without" though, but rather, as Plato does, as a lack of unity. That is, what we have is warring parts, and a whole that is less unified in its aims (less perfected).
  • Where does logic come from? Some thoughts


    I'm not sure if I'd agree that lift is a cause of flight.

    Fair enough, I'd agree in a sense. A principle is something that unifies many diverse causes. It is what makes many instances of lift, natural selection, etc. the "same" whilst obviously being different in each instance, thus allowing for "the many" to be known through a unifying "one" (e.g. entomologists can know insects well, even though there are 60 million individual insects for each human, and one never closely observes even a tiny fraction of these).

    The particulars need not have absolute priority though (either epistemically or ontologically). For instance, the difficulty in saying that "infection" is never the cause of infectious disease, but only individual interactions between viruses/bacteria and cells, is that this itself can be further broken down. We could equally demand that virus/cell interaction is always really caused by molecular interactions. This is the drive towards reductionism/smallism. Yet it has to make certain assumptions, for instance, that wholes are always nothing more than the sum of their parts, else the continued decomposition ceases to be warranted. And, while smallism is not prima facie any more reasonable than "bigism" from an ontological point of view, it is also unwarranted from an epistemic point of view, given that even the basics of molecular structure cannot be reduced to physics.

    I'll throw out here the difference between linear (temporal) causal series, which are accidental, and hierarchical causal series. The first is the classic example of one domino knocking over another, or a ball breaking a window. The second is the example of a book resting on a table, or a chandelier hanging from a ceiling. For the book to be on the table, the table had to be there. This has to be true at every moment or interval; there is a vertical—as opposed to horizontal—element to efficient causation.

    Likewise, the chandelier hangs due to its linkage with the ceiling at each moment. Neither the ceiling nor the table are dependent upon the book or chandelier sitting/hanging on them, but there is dependence (priority) in the other direction. So even efficient causes have these different elements of priority and posteriority that help our analysis. The plane is generating lift at each interval, unless it is stalling (this is a larger principle of fluid dynamics). And at each interval it has to be the case that fluid dynamics is such that lift works in this way (formal causality). Or, for another example, we could consider human decision-making. Man being man (a particular whole) is always prior to man making a decision as man, and this is a sort of vertical priority that affects both efficient and formal causes.

    Not necessarily. I am a determinist and a free-will Libertarian. How do I reconcile the two? I see freedom as having access to as much information as possible. By having access to as much information as possible, you are able to make more informed decisions. By having access to more information, you might choose differently, or you at least have the power to choose differently than you would have if you didn't have the information.

    Many people make this assertion that determinism implies that you have the feeling of being forced into something you didn't want to. I say that determinism implies that you have a feeling of naturally choosing what decision is best. Your decisions and actions would feel natural, not forced, if determinism is the case. You always make the best decision with the information you have at that moment. It is only your fear of the consequences that you cannot foresee that makes it feel forced. Thinking that you should have chosen differently only comes after the consequences have been realized (after you have more information).

    Why would this not be compatibilism?

    Anyhow, you highlight a very important element that is missing from many considerations of freedom, both the idea that ignorance is a limit on freedom, and the idea that freedom involves understanding why one acts. I tend to want to frame liberty in terms of (relative) self-determination and self-governance (as opposed to being undetermined).
  • On Matter, Meaning, and the Elusiveness of the Real


    This is something I have noticed too. I'm not totally sure why, since historically arguments for God rely on exactly the opposite sort of pitch, and they largely still do for religious thinkers who are part of traditional religions—everything being very well nailed down. I think it has something to do with the particular allure of New Age spirituality as a sort of freedom from both traditional religious practice and secular materialism. But I also think the challenging of epistemic norms helps to empower the individual, and "freedom as authenticity" is very important there too.
  • Where does logic come from? Some thoughts
    I am pretty sure I had almost this same conversation re reasons versus causes with , using the stop sign example. Maybe it was a stop light :rofl: .

    I would just suggest that a difficulty here is that "causes" is sometimes used very narrowly, as always referring to a linear temporal sequence (either as extrinsic ordering, or a sort of intrinsic computation-like process), and sometimes very broadly, as encompassing not only the former but all "reasons." Or, causes might also be used narrowly in a counterfactual sense. "Reasons" often tend to include a notion of final and formal causality that is excluded from more narrow formulations of "cause."

    So, it's tricky. Lift is a "cause of flight," but you won't find the "principle of lift" as an observable particular in any instance of flight. Likewise, moral principles are causes of people's actions, but you won't find them wandering about the world.

    It's probably one of the most challenging disambiguations. There was once an extremely influential book called the Book of Causes that is hardly ever taught any more because no one knows who wrote it (and it is derivative of Proclus' Elements). I mention it because it's a great example of how extensive the understanding of causes was in former epochs. Essentially, anything that did not occur spontaneously for "no reason at all" (which was presumed to be nothing) was considered to have a cause, and indeed many causes, because proximate causes were arranged under more general principles. Interestingly, this is not a position that demands any particular ontology, and it works as well for idealism as for physicalism, despite its rather stark decline.
  • The decline of creativity in philosophy


    I think it is, strangely, at least partially a problem of too much creativity in academia, which in turn leads to stagnation through a poor signal-to-noise ratio. There is an incentive for radical rereading, radical critiques—novelty for the sake of novelty, etc.—because this gets attention in a massively oversaturated market. Even saying something absurd can be a good way to get citations. It's publish or perish, and even fields like classics have seen this weird phenomenon where publications and the number of journals soared even as enrollment plummeted and Greek and Latin vanished from most high schools, along with even translations of the classics. This is particularly evident in some fields, where consensus oscillates wildly based on the same old evidence (Biblical studies being a prime culprit).

    This is, in part, an effort to replicate the style of the technical and natural sciences. "Research" becomes the key output of the academic. So too, there is the idea of "progress" borrowed from technical fields, which was generally given a political tilt in philosophy and the humanities, which in turn led to siloed echo chambers. The push for political progress paired with the drive to novelty leads to inanity and, at the limit, insanity.

    That's part of it. There is also the fact that, at a certain point, you cannot get any more radical. You reach maximal nihilism or relativism, or maximal authoritarianism in the case of fundamentalism. You can't keep making your art more abstract after a certain level. So, in terms of the general modern push towards "creativity" envisioned as a sort of "freedom as potency," we seem to have approached a sort of limit.

    The other thing is that philosophy is more professionalized now. Philosophy was also more stable (less "creative") in late antiquity and the later middle ages. I don't think this was wholly a bad thing. It made for more rigorous thought.

    It was more dynamic in the early modern period and earlier in antiquity. You have far more new movements starting in these periods. You do get more creativity, but also more bad, even widely damaging philosophy.

    I also think the new movements at least began as vastly less sophisticated, and in the early modern case they often did quite poor justice to what came before them. There is a huge democratization that comes with the printing press (i.e. "who can sell the most pamphlets" versus "who can win enough admiration to be hand-copied at great expense by other lifelong contemplatives"), which happened to occur during the massive socio-political firestorm of the Reformation, which created a drive to just tear everything down and destroy it (regardless of whether it had been understood) in order to create something new. That's obviously very broad, but I think it's generally true. Early modern thought is an explosion of creativity and also hugely historically forgetful.

    Late modernity is more akin to late antiquity than the medieval period though. There is a sort of fixed plurality that seems to have calcified. It's more of a similar historical moment too. That said, the focus of the philosophers of late antiquity tended towards the contemplative as time went on, which is quite the opposite of today.
  • Two ways to philosophise.


    Be that as it may, let's suppose someone claims that there is an overarching standard and that Jake has violated it. Does it follow that the person has a thin and not a thick conception of paradigms, or that Jake is being asked to put down baggage? I don't see why it would. All that is needed is a common thread running through every paradigm, from which the standard can be derived. The paradigms can be as thick as you like.

    Yeah, that's one of the points I wanted to make. There are certain assumptions that need to be made for it to be the case that all general epistemic principles (or any at all) require a standpoint outside any paradigm to be achieved. I don't think those are good assumptions though.

    Consider Plato's "being ruled by the rational part of the soul," as an epistemic meta-virtue. The basic idea that, ceteris paribus, one will tend towards truth if one prefers truth to falsity and one's pursuit of truth is not derailed by other concerns (passions and appetites) that are prioritized above truth, doesn't seem to require any move to a paradigmless space. It is rather a statement about all paradigms, made in the context of a particular thinker. But unless it is impossible to make statements about paradigms from without, this doesn't preclude its being true. Plato might be wrong, but he isn't wrong because his claim requires standing outside "Platonism."

    Note too that, if it is asserted that Plato can only speak about what is true for his own paradigm (that truth is bound to paradigms), this charge would apparently refute itself, since it would itself also be limited to a particular paradigm. Yet this objection would also seem to rest on the same absolutization of paradigms into "what we know" instead of "how we know."

    But this is also not an appeal to an axiomatized system or "rules," or epistemology as a system. Indeed, Plato has a marked skepticism towards language and sensible realities (including cultural institutions) as a whole, not a preference for a "perfected system" that exists within the context of these.
  • Two ways to philosophise.


    I think @Count Timothy von Icarus is especially interested in being in position to tell someone that they *should* put down some baggage they're carrying. The grounds for saying so would be (a) that this particular burden does not help you in making rational judgments, and (b) that Tim can tell (a) is the case by exercising rational judgment. (Stop thinking you need to sacrifice chipmunks to the river every spring so it will thaw, would be a typical Enlightenment example.)

    I'm not sure how close that is to your view (or if it is in fact Tim's), but that's the sort of thing I imagine is on the table when people say they want an overarching standard.

    Sure, in a very broad sense. If epistemology can never identify better or worse ways to achieve knowledge it is useless. Or, if knowledge is always wholly defined and contained within some paradigm, such that "sacrificing chipmunks truly thaws rivers just so long as you're a member of a certain community that currently accepts this," it seems perhaps to be equally pointless.

    There do seem to be some epistemic "rules" that it is quite hard to think of counter examples for, e.g. "just making up observations to support your claims" is not a reliable way to achieve knowledge. More extreme, "intentionally sabotaging your research program" is not a reliable way to attain knowledge. Not accepting arguments from premises known to be false, or where the conclusion doesn't follow from the premises, might be others.

    Likewise, there are habits that seem to be more or less conducive to attaining knowledge, i.e., "epistemic virtues." The opposite of this claim would be the idea that no habits (e.g. epistemic humility) can be said to support epistemic success in general.

    But note that this does not require the Enlightenment conception of reason as primarily a sort of discursive rule following, or a sort of "method," "system," or "game." Later critiques of reason tend to leave this assumption firmly in place and rely on it heavily. I don't think it's an adequate notion of reason, as demonstrated by where it leads.

    To address your earlier question about the limits of reason, I would point out that the claim that reason cannot adjudicate between paradigms or world-views is, of course, a gnostic claim. One presumably knows this if one claims it to be so. Yet to have recognized a boundary is to already have stepped over it.

    Now, if we claim that reason is in a sense isolated within "world-views and paradigms," we face the odd situation where some world-views and paradigms resolutely deny our claim. They instead claim that knowing involves ecstasis, that it is transcendent and always related to the whole, and so without limit—already with the whole and beyond any limit. And such views have quite a long history.

    Our difficulty is that, if reason just is "reason within a paradigm," then it seems that this view of reason cannot be so limited, for it denies this limit and it is an authority on itself. Our criticism that this other paradigm errs would seem to be limited to our own paradigm.

    The positive gnostic claim, to have grokked past the limits of intelligibility and seen the end of reason from the other side, faces an additional challenge here if we hold to the assumption that any such universal claim must be "from nowhere," and itself issued from "outside any paradigm," since it is also generally being claimed that precisely this sort of "stepping outside" is impossible. But perhaps this is simply a misguided assumption. After all, one need not "step out of one's humanity" to know that "all men are mortal." One can know this about all men while still always being a particular man.

    So, those are my initial thoughts on the idea that reason cannot adjudicate between paradigms. It seems this must remain true only for some paradigms, and one might suppose that being limited in this way is itself a deficiency. After all, what is left once one gives up totally on reason as an adjudicator? It would seem to me that all that remains is power struggles (and indeed, some thinkers go explicitly in this direction). Further, the ability to selectively decide that reason ceases to apply in some cases seems obviously prone to abuse (real world examples abound)—in a word, it's misology.

    But none of this requires stepping outside paradigms, except in the sense that reason may draw us outside our paradigms (and indeed this happens, MacIntyre—RIP—was drawn from Marxism to Thomism). To know something new is to change, to have gone beyond what one already was. That's Plato's whole point about the authority of the rational part of the soul. The desire for truth and goodness leads beyond the given of current belief and desire, and hence beyond our finitude.

    I'll just add that the absolute, to be truly absolute, cannot be "objective" reality as set over and against appearances, but must encompass the whole of reality and all appearances. Appearances are moments in the whole, and are revelatory of the whole. Appearances are then not a sort of barrier between the knower and known, but the going out of the known to the knower—and because all knowing is also in some sense becoming—the ecstasis of the knower, their going out beyond what they already are in union with the known.
  • Two ways to philosophise.


    And the two metaphors combine naturally: how do you know if some place is a place you'd like to go until you've been there? Do you decide based on what other people have said about it or what?

    I think all human beings have experience of knowledge, error, and being aware of one's own ignorance. So, there is already an epistemic orientation. If there wasn't this sort of orientation, then there wouldn't be anything to differentiate epistemology as a distinct pursuit.

    There is a Meno Paradox element of: "how do you know what you're looking for in a specific case," of course. I don't think this applies to "why not prefer falsity over truth or ignorance over knowledge?" though.

    They all lead somewhere, but is it where we want to go?

    Is this still supposed to be a metaphor for epistemology? Or is it philosophy more generally? Presumably, in the case of epistemology we want to head in the direction of knowledge and not ignorance, right?

    So I am not sure how there are multiple destinations, or what that would represent. Are the many destinations sui generis "types of knowing?' Would that suggest many different, incommensurate truths or types of truth (which would mean many different incommensurate beings)?

    It seems to me that, even were this so, the goal would still be all of the destinations (not from a practical standpoint, but from a theoretical one). Whether or not this is achievable would be a different question.

    Whereas, even if there is only one destination, the question of if any road leads to the destination would remain.
  • Iran War?


    Iran has used "proxies" (often actually Iranian forces) to carry out 170 attacks on US bases just since the start of the Gaza War. There were also Iranian forces involved in firing Iranian rockets at Israeli cities from almost the outset of the war until Hezbollah was militarily defeated, so, while this is certainly an escalation, the two have effectively been at war the whole time.

    Unsurprisingly, counterstrikes have killed IRGC officers because they are directing the attacks.

    I am not saying anything about US policy being good BTW, just that this is part of a larger pattern involving pretty much the entire region, and that from a strategic perspective it seems stupid from the Iranian side to keep doubling down.
  • Iran War?


    Only partially. The fact is that Iran, due to the type of foreign policy it pursues, has been involved in combat with the Taliban, in significant combat in Iraq (including attacks on US forces over the last several years), and in major combat operations in Syria. It has also been bombed by Pakistan because it allows safe haven to terrorist groups, and IIRC has also had clashes with Azerbaijan, all while being in a Cold War that occasionally goes hot with the rest of its neighbors, a proxy war with the Saudis in Yemen, and ongoing struggles with secessionist forces and its own populace.

    And now, it seems to want to triple down on its failures, given the threat to engage in an overt act of war against its remaining neighbors as a means of somehow indirectly getting back at Israel and the US instead of just accepting that supporting Hamas has been a disastrous policy and accepting defeat (which is already here, and they've already lost Assad and had Hezbollah turn to a separate peace after losing their entire leadership).

    Iran pursues something like the regional foreign policy equivalent of the Soviet Union or US at the peak of the Cold War, while having the actual economic and military strength of... well, Iran. From a realpolitik view it's almost incomprehensible.
  • Iran War?


    If one night of air strikes or exchanging some artillery fire makes a "war" then Iran is apparently at war with Afghanistan, Pakistan, Israel, the US, and itself, and seems to be leaning towards declaring war on the rest of its neighbors as its main strategic response (i.e., closing off all its most powerful neighbors' main income stream and attacking their civilians at sea) immediately after losing its entire air force, most of its air defenses, and a significant portion of its standoff weapons.

    I'm just saying, I'm not going to lose sleep over the people who dream this stuff up not having nuclear weapons, particularly when their country has seemed to teeter on the verge of revolution or civil war the past few years.
  • Two ways to philosophise.


    Assuming this is honest, it shows how very, very far Tim is from understanding what I have been suggesting. It would be somewhat extraordinary for someone to suppose that I would argue that "no one is ever wrong", given that almost all my posts are about how folk are wrong! I think many would see it as my modus operandi!

    How can Tim be so thoroughly mistaken? Do we suppose his case is different to others here, who display less intelligence but more ill-will and aggression? Is Tim in the position of someone in St. Louis trying to describe what it is like to live in Kansas City? Is he just saying that there are better Jazz clubs in St Louis? (Never having been to either, I'm guessing...)

    What was the point of exchanging several long PMs on this, in which I clarified this point to you in detail and asked you for clarification in each of them, only for you to refuse to offer that clarification and then post this here?

    I am still confused about what your point on undecided statements was. I have explained though precisely why it seemed to me that the counter example you were pursuing was bizarre.

    And note that this entire line of posting was started by your conflation of "all narratives are true or they aren't," that is, "all x are y or not all x are y," as being equivalent to "each x is either y or not-y." That was supposedly "my theory." Whereas what has offended you is merely my asking an incredulous question. But I was incredulous because the counter example for having "missed" the undecided option for "all statements are true or not all statements are true" would be the strange objection that one has unfairly ruled out the possibility that "all statements are neither true nor not true." Likewise, even if I had said "each statement is either true or false," the objection that I have missed the undecided option only seems to have purchase in this context if it is inappropriate to leave the door open on there being no false statements.
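
    To make that distinction explicit (a minimal formal sketch, reading T(x) as "x is true," on the usual classical reading of the quantifiers):

    (∀x T(x)) ∨ ¬(∀x T(x)), i.e. "all statements are true, or not all statements are true"
    ∀x (T(x) ∨ ¬T(x)), i.e. "each statement is either true or not true"

    The first applies excluded middle once, to the universal claim itself; the second applies it to each statement individually, and only the second is even touched by the possibility of an "undecided" statement.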

    I don't really know what the point was supposed to have been otherwise, hence my asking.
  • Two ways to philosophise.
    There is also the Hegel-Fukuyama-"Whig history" descriptive element to consider. More effective epistemologies lead to greater economic, technological, and military success. These promote the survival of the "media" that epistemologies exist in. Certainly, people buy into this. The PRC's decision to allow liberalization in China was based on the idea that it would increase safety by fostering the technological and economic growth needed for a strong military, and thus state and cultural survival.

    That's a sort of evolutionary thesis that could also be framed in information theoretic terms. But of course, evolution doesn't preclude the formation of genetic diseases either.

    There are problems there, but I think it gets something right.
  • Two ways to philosophise.


    Yeah, I think we're falling into Enlightenment categories. I don't think anyone here favors Enlightenment rationality (except perhaps when @J channels Nagel).

    The difficulty, given my preferences, is that, while much of later 20th century philosophy is a rejection of the "view from nowhere," it still continues to use it as a sort of dialectical pole, and in fact, to accomplish this, it tends to project the preferences of early-20th century empiricists back onto the whole of philosophical history. So, the position is rejected, but it still "looms large," and it becomes difficult to step out of its shadow. Indeed, I think helpful elements of the past get occluded by this formidable shadow.



    Yeah, I probably should have addressed that, I just wanted to clarify the idea behind the initial disjunct of "all narratives are true or not all narratives are true."

    I'll have to think about that analogy. I can see how it is apt in some ways, particularly the difficulty of knowing a locale before going there, but I also think that practical reason (better or worse) differs from theoretical reason (more or less true/accurate) in substantial ways, that an analogy might have to reflect.

    To put it very shortly, if knowledge is our grasp of being, truth the "adequacy of thought to being," then I am not sure if the idea of many different cities works. Would this denote many different beings (plural)? (Or I suppose just different places to do your research from, but then there seems to be a "progress" element that we need to account for; the difficulty is that "better" seems to open up cases where something is better for reasons unrelated to epistemology and knowledge).

    That is, given the assumption of one being, one world we are all a part of, I might want to adjust it to something like there being many roads to the same location. For instance, driving to the Grand Canyon. But the Grand Canyon is also huge and looks very different from each rim, or from the bottom, or from Horseshoe Bend, etc. So there is both a question of which roads even lead there, which are best to travel on (which might vary depending on your "vehicle"), and which angle you'll be seeing the Canyon from. Presumably, once you are "closer" it should be easier to get to other close views.

    Maybe this works, maybe not. I suppose one benefit here is that someone is always somewhere in proximity to the Canyon, although it may be possible to drive in the wrong direction. Because it does seem to me that, in being normative, there has to be a sort of "progress" in any analogy, although I suppose cities being "better" might capture this. The case where all locations are potential destinations would seem to me to suggest a sort of anti-realism, or at least something in conflict with most notions of epistemology as goal-directed along a particular axis. I am not sure about multiple discrete destinations though, or what that would have to imply about knowledge.
  • Two ways to philosophise.


    We landed at some point on questions like this: Are all narratives acceptable? I think it's clear no one wants to say that, but they mean different things when they answer. I understand the impulse of the question; when I was young and discovered Science, or when I was somewhat older and discovered Logic, I thought they were tools especially useful for ruling things out. But I'm older now, and I can't help but read that question and ask, acceptable to whom? in what context? for what purpose? And I understand the question as intending to be taken as "acceptable full-stop," or, if need be, "acceptable to Reason." And I can't help but wonder if anyone is ever in a position to stand nowhere and choose which town to go to ...

    That's not really how I intended it. I was trying to remain as broad as possible. Hence, not using "true" but the cumbersome "true/correct/acceptable/etc." The idea is that any sort of epistemology has to make judgements of some sort, and presumably judgements that are non-arbitrary, and so which make some sort of appeal to "reasons" in a broad sense.

    I personally think it's a mistake to conflate "good," or "useful," with "true," since then we have the questions of "good or useful for whom?" and "truly useful, or just currently assumed to be useful?" etc., but I didn't want to foreclose on that either.

    It seems to me that there must be judgements of some sort, that there must be something like "reasons" to avoid the charge of arbitrariness, and that, if reasons are sui generis in every instance, or potentially so, it is hard to see how arbitrariness can be kept out. That was the basic idea.

    Now, there is also an issue of separating the normative from the descriptive. The cultural construction of standards as a descriptive claim can stand alongside an understanding of superior/inferior normative standards. It would be something like the common moral anti-realist genetic fallacy argument to claim that there aren't superior and inferior ways to develop knowledge because such standards emerge from contingent social processes (although, I'd also challenge that such processes are ever wholly contingent).

    At best, the descriptive observation might support something like a debunking argument to attack any warrant for claims of normative epistemic standards. Which is just to say that I haven't seen any way the normative question can be foreclosed on. And indeed, if it was foreclosed on entirely, and we said there were absolutely no better or worse epistemic methods, that seems to me to be courting a sort of nihilism. But neither does the existence of the normative question require "contextlessness" to address.
  • Two ways to philosophise.
    Looking at these recent responses, I don't think it's useful to set up a dialectic between "contextlessness" as a "view from nowhere/everywhere" on the one hand, and admitting the relevance of context on the other. This sort of thinking is, as far as I can tell, something that largely emerges in the 19th century and had cracked up by the mid-20th century. It relies on certain metaphysical presuppositions that are endemic to much modern thought, but which I don't think hold water.

    I think it is more helpful to think in terms of priority. For instance, human nature is prior to human culture. This is not to say "temporally prior." No humans have ever existed outside of a culture. Human culture always shapes psychology and people's understanding of the world. And yet human culture requires that there be humans, and "what humans are" necessarily always shapes every human culture.

    Even thinkers who were at pains to deny human nature in their quest for "freedom as potency/power," end up having to bring some notion of it back in, e.g. for Sartre, facticity, modes of being, etc. Because obviously cows don't learn French, only people do, nor do people reach puberty, spin themselves into a chrysalis, and emerge weeks later with wings. Human beings are something determinant, and what they are always plays a role in what they do, e.g. epistemological efforts. People might disagree on exactly how this works, the degree to which we can know what is prior, etc., but it seems hard to dismiss any notion of man as a certain sort of being. Yet the sort of being man is shapes all of man's cultural pursuits, hence "priority."

    I think of reason and principles of knowledge in analogous terms to this example, not as a dialectic where one pole is "contextless." This means looking for unifying principles. For instance, the principle of lift is in some ways the same in different sorts of insect wings, bird wings, biplanes, drones, fighter jets, etc. and yet it is clear that these are all very different and require a unique understanding. Likewise for principles in complexity studies that unify phenomena as diverse as heart cell synchronization, firefly blinking, and earthquakes. Identifying a common principle is not a claim to have stepped outside a consideration of fireflies and heart cells, but rather a claim to have found a "one" that is present in "many." If such principles didn't exist, I don't know how knowledge would be possible.

    Importantly there is a move in Descartes, Kant, etc. to have mental representations become "what we know" instead of "how we know." This gets carried forward into philosophy of science and philosophy of language, such that theories, models, paradigms, etc. are all primarily "what we know," instead of means of knowing. I think this is a pretty fatal error, but since it is popular, I think it's worth pointing out that it plays into the demand for the "view from nowhere."

    On this view, the mental representation, theory, paradigm, etc. represent a sort of impermeable barrier between the knower and the known, and hence we always know the barrier and not what is on the other side of it. I think this is based on bad metaphysical assumptions that, because they are common, often go unacknowledged. I think it's an improper absolutization of the old scholastic adage that "everything is received in the mode of the receiver," and a neglect of the dictum that "act must follow on being," for being to be meaningful.

    That's a whole different topic, I just wanted to throw out the idea that the "view from nowhere" need not be a pole of opposition, and indeed wasn't for most of philosophical history. I'd frame it instead in terms of principles and priority.

    A good analogy is a light that passes through many panes of glass. From where we sit, we might always have to look through different panes, with different tints. So there is never a case of "looking directly at the light." And yet I wouldn't want to say here that man simply cannot ever see the light, but only "the light as filtered through the panes." The light "as filtered," is still the light. The panes are transparent to some degree. They let in more or less light, and one can move around to look through different panes, and some are further back, and so more fixed in our field of vision than others.

    The critique of the thing-in-itself of modern process philosophers is relevant here. The thing-in-itself is not only epistemically inaccessible, because knowledge relies on interaction, but also entirely sterile, since how a thing is when it is interacting with absolutely nothing else, and no parts of itself, is irrelevant. To even formulate such a sterile being requires some dicey assumptions.
  • The passing of Vera Mont, dear friend.
    That's very sad to hear. She will be greatly missed; she was a great contributor.
  • Two ways to philosophise.


    I'm really not sure what is supposed to be disingenuous here. It seems to me that if a principle holds with no imaginable counter examples, it's a solid principle. That's my only point.

    We seem to be in agreement that there are no imaginable counter examples, so I don't know why the principle isn't valid?

    So do whichever standards triangulate properly represent good standards, even if the community doesn't agree to them, or does the community have to agree to the standards and they have to triangulate?

    I guess now I am trying to tell if the standards exist in virtue of triangulation prior to the community accepting them. If they do, then I would just say that those are what I mean by principles, and we have found some agreement.
  • Two ways to philosophise.


    No, and that is exactly the point!


    So there are no examples where just making up your data consistently leads towards knowledge, but it still isn't a valid epistemic principle to not just make up your data? Why isn't it a valid principle?

    But it would be if the community says so?
  • Two ways to philosophise.


    IDK, seems like grounds for a principle to me.

    Can you give an example where just making up your data consistently leads towards knowledge? I would accept that as a strong counter example.
  • Two ways to philosophise.


    If the way the world is requires that epistemic communities follow certain standards to avoid false conclusions, that sounds a lot to me like the grounds for a principle.

    What's the objection here? "The way the world is makes it so that falsifying your data and lying isn't a good way to reach knowledge, but that doesn't make not just making up your observations a valid epistemic principle because..."

    What's the end of that sentence? The only one I can think of is "because sometimes making up your observations to fit your conclusions is a consistent way to achieve knowledge," which seems clearly false.

    Yep. That's not down to the community failing to accept a principle, but a mismatch between what the community says is the case and what is the case. It's a failure of triangulation, not of principle.

    Yes, a mismatch that wouldn't exist if they had accepted the principle.

    Also, even if you imagine some bizarre case where people just so happen to consistently make up fake observations that support true conclusions, this still wouldn't be a good counter example because presumably holding true opinions in virtue of observations you just lied about isn't constitutive of knowledge.
  • Two ways to philosophise.


    Galileo’s telescopic observations, Newton’s bucket, Eddington’s eclipse photos—all involved choices that wouldn’t survive a modern methods review.

    Sure, and many similar moves led to things like the approval of drugs that led to birth defects, toxic chemicals in kid's drinking water, etc. That invalid arguments can sometimes have true conclusions doesn't make them valid arguments, and that falsified data can sometimes support true conclusions doesn't make falsifying data good practice. And indeed, since it makes more sense to falsify data when one cannot support one's claims because they are false, it's probably more likely in cases where the conclusion is false.

    As an objection to "falsifying data is always wrong," this is the same conflation of normative standards and actual practice I mentioned above. It's the equivalent to objecting to "rape is wrong," because "but there are rapists," or "sometimes good people are born from rape."

    The issue isn’t that anything goes, but that what counts as "okay" or "not okay" is itself historically and contextually shaped. There is no algorithm for scientific legitimacy, but a community negotiating standards as it goes.

    I will grant that there is a gray area in some fields. In political science, people often select case studies on the dependent variable. But the quantitative analysis is also often window dressing.

    However, I think it's inappropriate to conflate "anything that wouldn't pass a current methods board," with "knowingly just making shit up" (which does happen, see recent news). I find it hard to imagine how the latter is ever good inquiry, even if we might justify it ethically on other grounds using some bizarre counterexample. But counter examples don't disprove principles; they merely show that there can be things worse than the violation of a principle that justify its violation.

    But here is the bigger issue:

    There is no algorithm for scientific legitimacy, but a community negotiating standards as it goes.

    Would it be the case that, so long as a community agrees, making up data is good inquiry and a path towards knowledge?

    I don't think it would be. So, the issue isn't just about what some community agrees. If some community does agree that falsification is ok, they're going to tend to come to false conclusions.

    Community agreement doesn't keep out the case where the community agrees with falsifying data. But that isn't a bizarre counter example; there are plenty of real world examples, e.g. some "race realists," some anti-vaccine advocates, etc., who consider themselves a sort of epistemic community and are so convinced of the truth of their conclusion that they don't see any problem with falsifying data. This was also true with "socialist genetics." Not only was falsification allowed, people were positively pressured into it. And the result was a great deal of deaths.

    So what would be the objection here: "but that's not the real community, the real community is the broader scientific community?" How is that claim justified? If community practice is the ultimate justification of all standards, then communities define themselves, and are presumably infallible within themselves.
  • Two ways to philosophise.


    Asking a question is now misrepresentation? I didn't get what the point of the example was.

    I'm not talking about a final theory, merely basic principles like "it's not ok to just make up your observations" or "good arguments don't start with false premises and are valid." Or, considering your objection just now, how about, "it isn't acceptable to misrepresent positions?"

    Do those not always obtain? If not, when wouldn't they obtain?
  • Two ways to philosophise.


    "Brownian motion" as the only alternative here is yet another either/or binary, about as useful as "absolute" and "arbitrary." Couldn't we allow that something in between is more characteristic of how such practices actually work?

    Sure. Did you have a principle in mind in between?

    It's not a binary. It's only down to Brownian motion if one denies any determinant principles that guide discourse whatsoever. For instance, is simply faking your data bad inquiry, or is it sometimes acceptable? If even this is ruled out as a principle because everything is case by case consensus building, I'm not sure how there can be any determinant structure to good discourse.

    "Reasonableness" is at least something. But how do we know when someone is "reasonable." From what I can tell, this is also up to case by case consensus building. So it seems like individual opinions groping towards consensus all the way down.

    One thing to note here is that epistemology is in some sense normative. It's about what leads towards knowledge. So, when you object that "well, real world discourse sort of looks like this," that's like objecting to "rape is wrong," because "in the real world there are rapists." Nor do normative standards entail "forcing belief." Indeed, belief cannot be coerced in many cases, only assent. The point is not that all people follow epistemic standards, but that some forms of discourse are more likely to lead to knowledge than others. If it wasn't the case that some kinds of discourse were more likely to lead to knowledge than others, then I can hardly see how being "reasonable" would matter.



    Is that supposed to be a counter example? As in, "if Galileo did it and he had a good theory, then sometimes it's ok to just lie and make up observations?"

    I don't know if it counts as an "algorithm," but sure, good scientific inquiry means not falsifying your data, not arguing from false premises, and not using invalid arguments.

    Arguments of the form:
    All squirrels are mammals.
    All dolphins are mammals.
    Therefore, some squirrels are dolphins.

    For instance, don't demonstrate their conclusion. They don't become part of good inquiry because the conclusion is true.

    There might, in some cases, be other ethical concerns that warrant falsifying data perhaps, but that wouldn't be the same thing as it being part of a good epistemology.
  • Two ways to philosophise.


    I don't think that's accurate. The position strikes me more as a sort of virtue epistemology in search of clear virtues. It isn't against argument and reasons, it just denies overarching standards for them, or even general principles. My thoughts are that it describes good discourse, but in a very vague way that doesn't rule out bad discourse.

    The difficulty I see is different. First, a very robust pluralism insulates claims from challenge. This is sort of the opposite of democratization; it's atomization. If someone issues a successful challenge to your position, or if you are unable to make any good arguments against theirs, you can simply fall back on: "well, there are many equally valid narratives, even if they contradict one another, so there is nothing to worry about here."

    The issue of an inappropriate democratization of truth comes up in two ways if there is vagueness though.

    A. There is no way to exclude anyone. We can say "we only allow reasonable objections," but if the standard for "reasonable" is weak or different in each instance, then we will have difficulties justifying exclusion unless everyone already agrees with us.

    This is precisely what has let "race realists" be so effective in their evangelism. They come ready to fight with their studies, government statistics, etc. This is not to say they always act in good faith, but they do make appeals to evidence. And then they get rejected out of hand as beyond the pale. Yet, due to other difficulties, the culture generally lacks the resources to show why they are beyond the pale in a credible way. The result is that it looks to would-be-converts that the race realists' case is stronger than it really is because the opposing side seems to be forced into abandoning argument in favor of authoritarianism (and indeed actual censorship in academic settings).

    B. Given certain assumptions about language, what you're describing can happen. Because if what we're talking about is primarily words and not being, and words just mean "how they are currently used," then everything is necessarily democratized in a way. Socrates' debates about the true meaning of justice make no sense if "justice" is just a token in a game, and others don't use the token the way he does. Clearly, he isn't using the token right, so he must be wrong. At best, he can propose that it is useful to the community to use the token differently.

    But that's very different from excluding reasons. Reasons are discussed. I suppose though that reasons arguably lose their purchase without any clear principles. "You're just engaged in post hoc rationalization, political bias, appeals to emotion, contradicting yourself, your premises are false, your argument isn't logically valid," etc. doesn't necessarily work as a "reason" if these are not considered to be illegitimate in general, but only illegitimate on a case by case basis.

    @Moliere has given us the best example here. If falsifying your data and lying isn't always bad discourse, but only bad on a case by case basis, then the response to "you just faked that data," can plausibly be: "sure, so what?" So too for "your premises are false," or "your argument is not logically valid." And yet, if there are no general principles, these would presumably have to be appropriate in at least some cases.

    But I do not think @J and @Banno are likely to agree on that one. I have to imagine that "it isn't ok to just make up fake evidence to support your claims" is going to be something most people can agree upon, granted that, on the anti-realist view that good argument is simply that which gets agreement, and all knowledge claims are simply power battles, it's hard to see how to justify this, since it would seem that faking data is fine just so long as it works.
  • Two ways to philosophise.


    I think this is more in your imagination than true -- capitalism is deeply hierarchal. "inferior" and "superior" are the words you wouldn't use on the basis of the faux-equality of liberal-capitalism, but the hierarchical relationship is there. And I'd equate, in our day and age, liberalism with capitalism

    Indeed. It's sort of a weird mix of both though. Because of the washing out of cultural norms and institutions by capitalism and liberalism, and liberalism's banishment of non-liberal approved virtues (which are quite thin) from education and public life, and the presumption of "equality" and a "classless society," people become hyper focused on a narrow range of status symbols. Fukuyama addresses this. When everyone becomes "equal," people strive for new ways to rise above the crowd. But not all of these will be healthy.

    Hence, the heavy focus on wealth, elite education, status, and increasingly, sex as one of the last forms of validation not to be fully commodified or driven from the culture. As participation in civic organizations, churches, unions, etc. has collapsed, and these have become more precarious, people have fewer sources for thymos outside the market (or the "sexual market"). In my whole career I have had a single person ask me for a raise on the grounds that they needed more money due to personal issues. I have had countless people ask for raises with seemingly no concern for the money, because they took it as a sign of worth (and indeed, self-help books for careerists stress this heavily).

    I read an article in the Atlantic a while back on this sort of thing. Cultural balkanization and the replacement of America's particular blend of Protestant Christianity with a secular liberal capitalist ethos had eroded its pretensions to be a classless society because all appeals to excellence were slowly boiled down to market power. Even the cachet of elite taste has increasingly waned. The exemplar tech bro or stock trader is not possessed of the elite tastes of past ruling classes.

    This is problematic in the way all status-obsessed societies are, but more so. First, because of capitalism's intense short-termism and insatiable hunger for growth, which courts ecological disaster, but also on wholly aesthetic grounds. Homer already knew his arete-obsessed Greeks were missing something, that the quest for glory was ultimately hollow. But at least there was some beauty in it. With capitalism we get the ugly reign of quantity, without the thymotic glory of warrior cultures, or more appealingly, the sublimation of that violence into high art, or things like the Japanese tea ceremony. The knight, samurai, or the hoplite certainly has more going for them than the Wall St. suit.

    I partially blame firearms and high explosives for making war increasingly a crap shoot that is no longer dependent on physical abilities. It's a sort of kinetic Marxism! The Japanese had the right of it in banning guns in favor of the katana. :rofl: :cool:

    Kinetic Marxism will be the topic of my upcoming thread where we argue for the return of the battle axe and flail—a wholly unique argument for gun control.
  • Two ways to philosophise.
    Do foundationalism and completeness lead to authoritarianism? I've considered that it might be precisely the opposite. Consider that one almost never sees appeals to authority in basic arithmetic. If there is disagreement, it is almost always over ambiguous notation. But one never needs to appeal to one's job title, involvement in practice, virtues, etc. in justifying the answer to 6 × 87 or 112 ÷ 8.

    There is no need for appeals to authority because the answer can be made obvious. You can, if you really want, separate 112 beans into groups of 8. It is clear when the emperor wears no clothes. Whereas appeals to standing practice and consensus open the door to authoritarianism precisely because authority can manufacture both of these.
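
    To make the arithmetic point concrete: 6 × 87 = 6 × 80 + 6 × 7 = 480 + 42 = 522, and 112 ÷ 8 = 14, which anyone can verify by dealing the beans into groups of 8 and counting 14 groups (8 × 14 = 112). No appeal to authority enters at any step.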

    Consider the classical image of Justice. She is not presented as exceptionally virtuous (hard to do in a statue). She is not surrounded by a crowd who agrees with her—rather she stands alone. She is not looking to some crowd, or upwards to some authority. Rather, she wears a blindfold. She carries no membership card, but rather scales. And the scales decide the issue, not her. Nor does she pull out different scales for the rich, for women, for the foreigner, nor use different scales in each instance. If she used different scales in each instance, we might worry that the choice of scale decides the issue. And it is for precisely this reason that she is justified in carrying her sword.

    [Image: the classical statue of Justice]


    This is not to say that appeals to authority, practice, virtue, or consensus are never warranted, just that they aren't ideal. They are needed where principles have less clear purchase.
  • Two ways to philosophise.


    To be fair, by this logic, it wouldn't necessarily be bad to simply lie about one's position for advantage here. :cool:



    The rest of this seems unrelated to the basic principle that intentionally lying and falsifying is not good for inquiry.

    But yes, falsification has its weaknesses. Newton was also almost immediately falsified, but instead of rejecting the theory people posited additional massive bodies (the outer gas giants) at the edge of the solar system to explain the irregular orbits of the (then) outer planets. And we did indeed find those planets eventually. Mach famously declared the atom to be unfalsifiable, and the quark was derided on similar grounds, as were antiparticles.

    Yet there is a certain sort of falsifiability that I think is a very general principle. Chesterton explains this sort very well in terms of paranoid delusions:

    The madman's explanation of a thing is always complete, and often in a purely rational sense satisfactory. Or, to speak more strictly, the insane explanation, if not conclusive, is at least unanswerable; this may be observed specially in the two or three commonest kinds of madness. If a man says (for instance) that men have a conspiracy against him, you cannot dispute it except by saying that all the men deny that they are conspirators; which is exactly what conspirators would do. His explanation covers the facts as much as yours. Or if a man says that he is the rightful King of England, it is no complete answer to say that the existing authorities call him mad; for if he were King of England that might be the wisest thing for the existing authorities to do. Or if a man says that he is Jesus Christ, it is no answer to tell him that the world denies his divinity; for the world denied Christ's.

    Nevertheless he is wrong. But if we attempt to trace his error in exact terms, we shall not find it quite so easy as we had supposed. Perhaps the nearest we can get to expressing it is to say this: that his mind moves in a perfect but narrow circle. A small circle is quite as infinite as a large circle; but, though it is quite as infinite, it is not so large. In the same way the insane explanation is quite as complete as the sane one, but it is not so large... Now, speaking quite externally and empirically, we may say that the strongest and most unmistakable mark of madness is this combination between a logical completeness and a spiritual contraction.


    But sometimes whole movements have fallen into this, e.g. where any criticism of Marxism is just evidence of Marxism.
  • Two ways to philosophise.


    None?

    So falsifying your data so that you can gain fame and wealth can sometimes be good practice vis-à-vis good inquiry?
  • Two ways to philosophise.


    Historically, though, these examples were resolved by an appeal to principles considered valid across the lines of the presumed disciplines. That is, appeals to standards of objectivity, to the idea that post hoc rationalization is not good reasoning, to the notion that the political or ethnic identity of the scientist is accidental to the science's subject matter, to the principle of non-contradiction when consensus/authorities in different fields contradicted one another, to standards of valid argument, etc.

    If there is a bad consensus and bad practices, they don't just work themselves out through discourse as a sort of random Brownian motion. Or at least, they haven't historically, and they wouldn't do so quickly. The replication crisis, for instance, spanned many fields because the principles that were being poorly applied in social psychology were general, and so relevant to other fields as well. Likewise, the Sokal Affair and later replications weren't taken to apply only to specific journals or reviewers, but to represent a problem in practice.
  • Where does logic come from? Some thoughts


    Sure, but the fact that some particular process led to man's desire for truth as such doesn't preclude the fact that man can now desire truth for its own sake. That is, man can seek truth for the sake of truth and not for the sake of evolutionary advantage.
  • [TPF Essay] Dante and the Deflation of Reason


    Re Kant, as mentioned before, I didn't have space to really take on many thinkers in depth. But here is why I think Hume is a good model:

    - The anthropology of economics fits this mold. To be sure, in theory the intellectual and irascible appetites are contained in "utility," which encompasses all desire, but this tends towards a deflation into pleasure/pain. Smith was, of course, a close friend of Hume. But economics has been massively influential in public policy and in the development of the established (now global) social order.

    - Computational theory of mind suggests something very much like "all thought is ratio." The model of the Turing Machine does not include any notion of intellectus, quiddity, etc. Obviously, some proponents try to get around this with appeals to emergence. However, CTM tends in this direction and has been the dominant paradigm in cognitive science, etc., for a long time. Bayesian Brain theories would be another example; everything is mere induction. Eliminative materialism would be another. Marxist anthropology shows some similarities here too.

    - The dominant, now hegemonic political ideology of the West, and now of the whole global order, is liberalism. But liberal theorists tend towards very thin anthropologies that avoid the intellectual appetites (or "bracket them out"). Obviously, there are continental objections to this; I cite Han. Yet while perhaps more widely read, I think they are less influential. The Anglo-empiricist model is influential not through philosophy but through economics, liberal political theory, public policy, cognitive science, etc. Utilitarianism has also had huge influence, and makes similar suppositions.

    Just for example, Rawls has a very procedural notion of reason, though his anthropology is still thicker than that of much earlier utilitarianism. Nozick had a slightly thicker anthropology, but it's still thin. Fukuyama brings in thymos, but not really the intellectual appetites, and arguably he brings it in hamstrung by his other commitments.

    Another thing the paper is missing is an expansion on the idea that the more a particular participates in the universal "stream" (to use imagery from Dionysius the Areopagite) the more fully it is particular. The intellect's participation in universals, and intellectual knowledge, makes it more particular, because knowledge and understanding are required for self-determination, which is required for true unity (particularity). Consider that all amoebas are quite similar. All dogs are more similar than all men. Angels are more particular than men, being each an individual species (in Aquinas).

    The same idea is in play with knowledge and virtue. All those plagued by vice and ignorance are in some ways similar. Dante is able to respect history and particularity in a way that similar Sufi texts like The Conference of the Birds don't, because of the recognition that perfection makes people more fully persons. More perfected persons are more particular because they are more free, more self-determining, and more fully what they are and not a bundle of warring external causes (an idea going back to Plato's psychology). There is a sort of ascent of the particular in the stream of the universal.

    This helps Dante's philosophy of history in that Providence is not at odds with freedom and particularity, but is rather their full realization. Solovyov is useful here because he takes some of the insights of Hegel re the rational unfolding of history, and has a similar view to Dante on the telos of history, and yet he isn't committed to a sort of providential over-determination. History is the meeting ground of truth and falsehood, the stage of cosmic drama, and not merely a proving ground for individuals. There is an element of perfection and freedom that involves the communication of goodness to others (agape descending, eros ascending) that is realized in history, through human beings' mutual empowerment of one another in the ascent Dante lays out. This is clearest in the Commedia in the roles of Beatrice, St. Lucy, and Mary. But this can be more or less actualized in the polis, and obviously a thin anthropology would be a barrier to any notion like this, even a secularized one.
  • Where does logic come from? Some thoughts


    Perhaps our differences only reflect a difference in our understanding of the definitions of “knowledge” and “intellect.”

    Yes, we are. Intellect in the older faculty psychology refers specifically to the understanding of universals, of form. It's not the same thing as memory or what gets called the estimative/cogitative power that allows for problem solving and inductive pattern recognition. There is a whole big literature on this, and on whether animals can "use language" in the ways that even young toddlers can.



    I agree that from an empirical perspective we encounter particulars first, and then abstract the form. But I wonder whether that perspective risks treating the form as derivative —something we derive from the object. In the Platonic (and arguably Aristotelian) sense, form is not something posterior to the object, but that in virtue of which the object is what it is.

    That is, form isn’t just a feature we discover by experience—it’s the condition that makes experience possible. It's because of the reality of the form that we can identify the particular. It’s ontologically prior, even if not temporally so. This is where I’d place form in a “vertical” rather than horizontal order—closer to what Neoplatonism or even certain strains of phenomenology suggest.

    I wonder whether framing form as something abstracted from sensible experience is more of an empiricist perspective (e.g. J S Mill) than Aristotelian.

    I think the bolded is very important to keep in mind. And yes, I'd agree that the form that has been abstracted by the intellect has to be posterior to the form in what is known. The empiricists are copying Aristotle so they do sound similar, but for them "abstraction" has become a sort of inductive pattern recognition, whereas for Aristotle it's the active/agent intellect making the form of what has been sensed (form being communicated through the senses) come to be present in the intellect.

    When I speak of “undifferentiated givenness” or the in-itself, I don’t mean it as some kind of vague or latent actuality, waiting to be identified. To say it must have “some sort of actuality” is already to try to give it form—to insert it into the order of knowable, nameable things, to say what it is. But the point is: we can’t do that without distorting what we’re trying to indicate. Here is where 'apophatic silence' is precisely correct.

    That’s why I describe it as “neither existent nor non-existent.” It’s not an actualised thing, but it’s also not mere nothingness. This is something I’ve taken primarily from the Madhyamaka tradition in Buddhist philosophy, which insists on the middle way (hence the name) - between reification (it is something!) and nihilism (it doesn't exist). In that framework, we are dealing with what is empty of intrinsic existence, but not therefore non-existent. It’s not a substance, but nor is it nothing. It’s a kind of ontological openness. That is the meaning of śūnyatā.

    Interesting stuff. I am not familiar with it. It reminds me of the chora in the Timaeus or some versions of matter. Eriugena has the distinction of nothing through privation and nothing on account of excellence. But the latter would in some sense be the fullness of all possibility, total actuality. One image I like is a sound wave of infinite amplitude and frequency, which of course leads to every infinite peak and trough canceling each other out. The result is a silence, but a pregnant silence. I mean, it's an imperfect example. Dionysius and Eriugena don't think God is a sound wave. It's more about the fullness defying finite description.

    This “in-between” condition—neither purely empirical nor purely intelligible—is what makes the Platonic view so compelling in discussions like this. It avoids collapsing ideas into mere mental projections, while also refusing to treat them as physical facts. They’re real, but their reality is of a different order—something we participate in rather than simply observe.

    :up:
  • Two ways to philosophise.


    Are you seriously advancing the epistemic position that no one is ever wrong but that the two options would be: "yes I agree," and "I don't know?"

    "My epistemology isn't 'anything goes' but in it absolutely no one is ever wrong." Sounds like "anything goes," to me

    Second, I think you're also conflating multiple senses of "undecided" here. There is:

    1. We personally do not know the answer.

    2. The positive statement that one knows that no one can know the answer.

    3. The positive statement that one knows that the position in question is neither true nor false.

    These are three different things. When people have resistance to "I don't know," it is normally not on account of 1, but on account of 2, generally when there is equivocation between 1 and 2 and it is used to advance some sort of positive claim.

    So for instance, if I don't know anything about molecular biology, it would make no sense for me to demand that my local school district not teach theories in molecular biology. Likewise, objecting to creationism being taught in schools only makes sense if one thinks it is likely false, or at least unlikely to be true, not because one "doesn't know" if it is likely to be true or not. But it's popular to equivocate between 1 and 2 on this issue.

    "I don't know" is objected to because an appeal to one's own ignorance, masquerading a "modesty," is often used to advance positive claims as decisive. For instance, "I don't know issues related to the human good, therefore we should "bracket out" everything I don't accept and advance my liberal political theory, anthropology, and ideology on the whole of society." Or "I don't know if realism or nominalism is true so we'll have to 'bracket' and just presuppose my preference for nominalism is true." That sort of thing. If people don't use their own professions of ignorance to justify claims, then I think "I don't know," is only going to annoy people when it's obvious intransigence.





    Since this word "arbitrary" has come up so consistently, I'm wondering if possibly some of us are using it to mean different things. But I'm going to use it to mean "not based on any particular reasons; like a throw of the dice." On that understanding, I would answer the second question this way: "It doesn't, but if the discipline is longstanding and has smart, experienced practitioners, quite quickly the demand for good reasons will channel the discussion away from arbitrary and unfounded practices. Furthermore, just about no one presents their views in this way."

    Right, so this is an appeal to a sort of virtue epistemology. Virtues are principles, so I can get behind that. However, I don't think "smart" and "experienced" are necessarily good virtues here. Consider the examples of Aryan physics, socialist genetics, phrenology, etc., which were created by intelligent, experienced scientists.

    Practices have to be open to external critique by some additional standard or else there is no way to identify pseudoscience. You get all the issues of the hermetically sealed magisterium otherwise.

    Here is the classic answer: sciences are based on per se predication, what is essential to things. And it is not essential to living beings that they are observed on Tuesdays, so we do not have a sui generis "Tuesday biology," nor is it essential to physical processes that they are researched by Jewish scientists so we cannot have a "Jewish physics." This isn't a silver bullet, but it captures most of the egregious examples.

    But from your response, it seems like what you really have is just loose criteria for "when people deserve a hearing," or are "reasonable" and not really anything about correctness or truth per se. Yet might this preference for current practice and what is deemed "reasonable" tend towards ruling out radical critique? If we were having this conversation just 100 years ago, perhaps a bit more, the proposition that "women and Africans have comparable mental abilities to European men" would be considered "unreasonable" by many, including smart, thoughtful people engaged in relevant practices.

    Hence, it seems that there are general principles here vis-à-vis various sorts of bias that are inappropriate. And these issues are still with us. One of the things the replication crises have exposed is that fields that became ideological echo chambers grew very willing to accept and teach conclusions that are prima facie very hard to believe as "well supported," so long as they happened to support ideological agendas.
  • Where does logic come from? Some thoughts


    "Curiosity killed the cat," right? There is a sort of anthropological/metaphysical question of if animals can "know" as in, intellection, but obviously they can know in different ways, e.g. "sense knowledge," memory, etc. Both sensing and knowing involve a sort of union with the known.

    But the quote is from Slick Ari at the opening of the Metaphysics and he only mentions man because that's his focus.
  • Iran War?


    No, I would imagine the decision-making process is that they had a window of opportunity to act while Iran's proxies were largely defeated and when they would not face significant additional pressure from third parties for carrying out the strikes. Obviously, internal politics could be another factor.

    My guess is that they will keep degrading Iran's air defenses and missile capabilities, trying to reach a point where they can keep inflicting costs on Iran without facing significant costs in return. Iranian missile volume is already down significantly. The goal is probably to signal an open-ended commitment to destroying their military and oil infrastructure until they agree to negotiate a settlement that ends the nuclear program. Barring that, there is the idea of the US destroying the main enrichment facility, or of Israel doing it with a cargo plane after having fully degraded Iran's air defenses (which is probably risky and might not work), or, though unlikely, an air assault operation to destroy it.

    Iran was already in a precarious political situation due to major, violent unrest across the country aimed at the government. Iran is only about 60% Persian and there are separatist groups Mossad has been able to use in the past for additional pressure.

    In the past year or so Iran has also had open battles with the Taliban over water rights and been bombed by Pakistan, so they sort of have issues on all fronts due to the type of foreign policy they have pursued. But obviously the successful decapitation strike on Hezbollah that forced them to make a separate peace, the collapse of the Assad regime, the loss of influence in Iraq, and now this have made them significantly more vulnerable, which explains the timing as well.
  • Where does logic come from? Some thoughts


    This also speaks to our curiosity. We always want to know what is over the horizon. We are natural explorers. It is in our nature to see the world more openly - to seek out new worlds and new civilizations - to boldly go where no man has gone before, because you never know what part of reality might be useful for something

    Or simply because "men by nature desire to know," or because they desire the glory of achieving the difficult.
  • Two ways to philosophise.


    Also, if you want to make an appeal largely to "reasonableness," this would suggest something like a virtue epistemology (which would be my preference in many respects). But a virtue epistemology has to have virtues, and the virtues cannot be different in every instance. And virtues presumably have to be virtues for some reason. Normally, intellectual virtues are considered virtues because they keep false judgements out and help someone attain to truth.
  • Two ways to philosophise.


    That makes sense. I was thinking "binary" in terms of 2, because this seems to be the objection.

    I might add:

    5. If one claims standards are wholly unique in every instance then one cannot keep arbitrariness out.

    That's a little trickier. But the claim that standards are wholly unique in every instance is obviously false as a descriptive claim. To use the example of economics given earlier, it is not the case that economists use different epistemic standards for every question. They do not conduct peer review by judging each submission by entirely different standards. And so too for philosophy of science and epistemology.

    This gets at one of the unaddressed issues, which is identifying pseudoscience.

    And the idea that standards are wholly different in each instance is at odds with the idea that authoritarianism is always inappropriate in epistemology or that only reasonable narratives need be considered.
  • Two ways to philosophise.


    The troublesome word is again "correctly."

    I had considered you to be laying out criteria for correctness there because you wrote:

    Some narratives are acceptable, true, or valid for one sort of reason; some are so for another sort; some for a third sort; etc.

    And then gave your overarching standards for those reasons. The context for this was that I pointed out that if someone declares that their epistemology is not "anything goes," but then says they can give absolutely no reasons for when something "doesn't go," they have offered an obviously unsatisfactory response.

    If you're denying these as standards then we're back to: "My epistemology is not 'anything goes,' but I can give no explanation of why some narratives 'don't go.'" Or "my reasons for denying some narratives are sui generis in each instance." How does this keep arbitrariness out?



    I think what you're imagining is a kind of meta-conversation in which some issue about epistemological standards could be put to rest for all time, and one version deemed correct. I don't believe this happens. I believe philosophers (and scientists) come to points of agreement, are better able to frame their disagreements, and then move on.


    I'm not. This seems like yet another attempt to set up a false dichotomy between declaring oneself infallible and having any epistemic criteria at all. But there are many fallibilists and yet very few who subscribe to "there are no epistemic principles" or "epistemic criteria are sui generis in every instance." Indeed, I would guess virtually none, because both of those positions have obvious problems.

    The idea is that there are better or worse epistemic principles. That doesn't mean we necessarily know them or know them with certainty. The denial that there are better or worse epistemic principles would seem to allow for arbitrariness, for how would it keep it out?

    In a more concrete actual practice, we see the consensus-driven model in constant operation, and able to produce perfectly satisfactory results. I think in the human sciences generally there is a great toleration for unresolved questions and disagreements, and very little belief that some day, some way, all these ways of interpretation will be put aside in favor of a single correct version. As for hard science, I have to rely on the discussions by others, but something similar seems to be in play, though possibly there's more hope for an ultimate TOE -- don't know if that's still on the table these days.


    This is the same false dichotomy.

    I guess an issue here might be "correct/incorrect" as another over-simplified binary. Again, I have to pose the obvious question: If a view about science or epistemology can be shown to be correct, as you mean it, why has this not ended the discussion? How is it possible that the debate is still vigorous?


    See above. I never said it was a binary. I said that if one claims that one's epistemology is not "anything goes," then not all narratives can be equally correct. But if not all narratives are equally correct then in virtue of what is this judgement made? Nothing about that requires a binary, claims of infallibilism, etc., it simply requires the observation that if one can give no reasons for their standards then their standards are open to arbitrariness.

    Likewise, if one claims one's standards are always different in each instance, then presumably the meta-standard by which they judge standards appropriate in each instance is also different in each instance. And the meta-meta-standard will also be unique in each instance. And so on. If this is not arbitrariness, it certainly cannot keep arbitrariness out. For by what metric would any standard be deemed poor in any particular instance?

Count Timothy von Icarus
