Comments

  • Two ways to philosophise.


    And the two metaphors combine naturally: how do you know if some place is a place you'd like to go until you've been there? Do you decide based on what other people have said about it or what?

I think all human beings have experience of knowledge, error, and being aware of one's own ignorance. So, there is already an epistemic orientation. If there weren't this sort of orientation, then there wouldn't be anything to differentiate epistemology as a distinct pursuit.

    There is a Meno Paradox element of: "how do you know what you're looking for in a specific case," of course. I don't think this applies to "why not prefer falsity over truth or ignorance over knowledge?" though.

    They all lead somewhere, but is it where we want to go?

    Is this still supposed to be a metaphor for epistemology? Or is it philosophy more generally? Presumably, in the case of epistemology we want to head in the direction of knowledge and not ignorance, right?

So I am not sure how there are multiple destinations, or what that would represent. Are the many destinations sui generis "types of knowing"? Would that suggest many different, incommensurate truths or types of truth (which would mean many different incommensurate beings)?

    It seems to me that, even were this so, the goal would still be all of the destinations (not from a practical standpoint, but from a theoretical one). Whether or not this is achievable would be a different question.

    Whereas, even if there is only one destination, the question of if any road leads to the destination would remain.
  • Iran War?


    Iran has used "proxies" (often actually Iranian forces) to carry out 170 attacks on US bases just since the start of the Gaza War. There were also Iranian forces involved in firing Iranian rockets at Israeli cities from almost the outset of the war until Hezbollah was militarily defeated, so, while this is certainly an escalation, the two have effectively been at war the whole time.

    Unsurprisingly, counterstrikes have killed IRGC officers because they are directing the attacks.

    I am not saying anything about US policy being good BTW, just that this is part of a larger pattern involving pretty much the entire region, and that from a strategic perspective it seems stupid from the Iranian side to keep doubling down.
  • Iran War?


Only partially. The fact is that Iran, due to the type of foreign policy it pursues, has been involved in combat with the Taliban, significant combat in Iraq (including attacks on US forces over the last several years), and major combat operations in Syria. It has also been bombed by Pakistan because it allows safe haven to terrorist groups, and IIRC has had clashes with Azerbaijan, all while being in a Cold War that occasionally goes hot with the rest of its neighbors, fighting a proxy war with the Saudis in Yemen, and facing ongoing struggles with secessionist forces and its own populace.

    And now, it seems to want to triple down on its failures, given the threat to engage in an overt act of war against its remaining neighbors as a means of somehow indirectly getting back at Israel and the US instead of just accepting that supporting Hamas has been a disastrous policy and accepting defeat (which is already here, and they've already lost Assad and had Hezbollah turn to a separate peace after losing their entire leadership).

    Iran pursues something like the regional foreign policy equivalent of the Soviet Union or US at the peak of the Cold War, while having the actual economic and military strength of... well, Iran. From a realpolitik view it's almost incomprehensible.
  • Iran War?


If one night of air strikes or exchanging some artillery fire makes a "war," then Iran is apparently at war with Afghanistan, Pakistan, Israel, the US, and itself, and seems to be leaning towards declaring war on the rest of its neighbors as its main strategic response (i.e., closing off all its most powerful neighbors' main income stream and attacking their civilians at sea) immediately after losing its entire air force, most of its air defenses, and a significant portion of its standoff weapons.

    I'm just saying, I'm not going to lose sleep over the people who dream this stuff up not having nuclear weapons, particularly when their country has seemed to teeter on the verge of revolution or civil war the past few years.
  • Two ways to philosophise.


    Assuming this is honest, it shows how very, very far Tim is from understanding what I have been suggesting. It would be somewhat extraordinary for someone to suppose that I would argue that "no one is ever wrong", given that almost all my posts are about how folk are wrong! I think many would see it as my modus operandi!

How can Tim be so thoroughly mistaken? Do we suppose his case is different to others here, who display less intelligence but more ill-will and aggression? Is Tim in the position of someone in St. Louis trying to describe what it is like to live in Kansas City? Is he just saying that there are better Jazz clubs in St. Louis? (Never having been to either, I'm guessing...)

What is the point of my exchanging several long PMs with you on this, in which I clarified this point in detail and asked you for clarification in each of them, only for you to refuse to offer that clarification and then post this here?

I am still confused about what your point on undecided statements was. I have explained precisely why, though, the counterexample you were pursuing seemed bizarre to me.

And note that this entire line of posting was started by your conflation of "all narratives are true or they aren't," that is, "all x are y or not all x are y," with "each x is either y or not-y." That was supposedly "my theory." Whereas, what has offended you is merely my asking an incredulous question. But I was incredulous because the counterexample for having "missed" the undecided option for "all statements are true or not all statements are true" would be the strange objection that one has unfairly ruled out the possibility that "all statements are neither true nor not true." Likewise, even if I had said "each statement is either true or false," the objection that I have missed the undecided option only seems to have purchase in this context if it is inappropriate to leave the door open on there being no false statements.

    I don't really know what the point was supposed to have been otherwise, hence my asking.
  • Two ways to philosophise.
There is also the Hegel-Fukuyama-"Whig history" descriptive element to consider. More effective epistemologies lead to greater economic, technological, and military success. These promote the survival of the "media" that epistemologies exist in. Certainly, people buy into this. The PRC's decision to allow liberalization in China was based on the idea that it would increase safety by fostering the technological and economic growth needed for a strong military, and thus state and cultural survival.

    That's a sort of evolutionary thesis that could also be framed in information theoretic terms. But of course, evolution doesn't preclude the formation of genetic diseases either.

    There are problems there, but I think it gets something right.
  • Two ways to philosophise.


    Yeah, I think we're falling into Enlightenment categories. I don't think anyone here favors Enlightenment rationality (except perhaps when @J channels Nagel).

    The difficulty, given my preferences, is that, while much of later 20th century philosophy is a rejection of the "view from nowhere," it still continues to use it as a sort of dialectical pole, and in fact, to accomplish this, it tends to project the preferences of early-20th century empiricists back onto the whole of philosophical history. So, the position is rejected, but it still "looms large," and it becomes difficult to step out of its shadow. Indeed, I think helpful elements of the past get occluded by this formidable shadow.



    Yeah, I probably should have addressed that, I just wanted to clarify the idea behind the initial disjunct of "all narratives are true or not all narratives are true."

I'll have to think about that analogy. I can see how it is apt in some ways, particularly the difficulty of knowing a locale before going there, but I also think that practical reason (better or worse) differs from theoretical reason (more or less true/accurate) in substantial ways that an analogy might have to reflect.

To put it very briefly: if knowledge is our grasp of being, and truth the "adequacy of thought to being," then I am not sure the idea of many different cities works. Would this denote many different beings (plural)? (Or I suppose just different places to do your research from, but then there seems to be a "progress" element that we need to account for; the difficulty is that "better" seems to open up cases where something is better for reasons unrelated to epistemology and knowledge).

That is, given the assumption of one being, one world we are all a part of, I might want to adjust it to something like there being many roads to the same location. For instance, driving to the Grand Canyon. But the Grand Canyon is also huge and looks very different from each rim, or from the bottom, or from Horseshoe Bend, etc. So there is both a question of which roads even lead there, which are best to travel on (which might vary depending on your "vehicle"), and which angle you'll be seeing the Canyon from. Presumably, once you are "closer" it should be easier to get to other close views.

Maybe this works, maybe not. I suppose one benefit here is that someone is always somewhere in proximity to the Canyon, although it may be possible to drive in the wrong direction. Because it does seem to me that, in being normative, there has to be a sort of "progress" in any analogy, although I suppose cities being "better" might capture this. The case where all locations are potential destinations would seem to me to suggest a sort of anti-realism, or at least something in conflict with most notions of epistemology as goal-directed along a particular axis. I am not sure about multiple discrete destinations, though, or what they would have to imply about knowledge.
  • Two ways to philosophise.


    We landed at some point on questions like this: Are all narratives acceptable? I think it's clear no one wants to say that, but they mean different things when they answer. I understand the impulse of the question; when I was young and discovered Science, or when I was somewhat older and discovered Logic, I thought they were tools especially useful for ruling things out. But I'm older now, and I can't help but read that question and ask, acceptable to whom? in what context? for what purpose? And I understand the question as intending to be taken as "acceptable full-stop," or, if need be, "acceptable to Reason." And I can't help but wonder if anyone is ever in a position to stand nowhere and choose which town to go to ...

    That's not really how I intended it. I was trying to remain as broad as possible. Hence, not using "true" but the cumbersome "true/correct/acceptable/etc." The idea is that any sort of epistemology has to make judgements of some sort, and presumably judgements that are non-arbitrary, and so which make some sort of appeal to "reasons" in a broad sense.

    I personally think it's a mistake to conflate "good," or "useful," with "true," since then we have the questions of "good or useful for whom?" and "truly useful, or just currently assumed to be useful?" etc., but I didn't want to foreclose on that either.

    It seems to me that there must be judgements of some sort, that there must be something like "reasons" to avoid the charge of arbitrariness, and that, if reasons are sui generis in every instance, or potentially so, it is hard to see how arbitrariness can be kept out. That was the basic idea.

Now, there is also an issue of separating the normative from the descriptive. The cultural construction of standards as a descriptive claim can stand alongside an understanding of superior/inferior normative standards. It would be something like the common moral anti-realist genetic fallacy argument to claim that there aren't superior and inferior ways to develop knowledge because such standards emerge from contingent social processes (although I'd also challenge that such processes are ever wholly contingent).

    At best, the descriptive observation might support something like a debunking argument to attack any warrant for claims of normative epistemic standards. Which is just to say that I haven't seen any way the normative question can be foreclosed on. And indeed, if it was foreclosed on entirely, and we said there were absolutely no better or worse epistemic methods, that seems to me to be courting a sort of nihilism. But neither does the existence of the normative question require "contextlessness" to address.
  • Two ways to philosophise.
Looking at these recent responses, I don't think it's useful to set up a dialectic between "contextlessness" as a "view from nowhere/everywhere" on the one hand, and admitting the relevance of context on the other. This sort of thinking is, as far as I can tell, something that largely emerges in the 19th century and had cracked up by the mid-20th century. It relies on certain metaphysical presuppositions that are endemic to much modern thought, but which I don't think hold water.

    I think it is more helpful to think in terms of priority. For instance, human nature is prior to human culture. This is not to say "temporally prior." No humans have ever existed outside of a culture. Human culture always shapes psychology and people's understanding of the world. And yet human culture requires that there be humans, and "what humans are" necessarily always shapes every human culture.

Even thinkers who were at pains to deny human nature in their quest for "freedom as potency/power" end up having to bring some notion of it back in, e.g. for Sartre, facticity, modes of being, etc. Because obviously cows don't learn French, only people do, nor do people reach puberty, spin themselves into a chrysalis, and emerge weeks later with wings. Human beings are something determinate, and what they are always plays a role in what they do, e.g. epistemological efforts. People might disagree on exactly how this works, the degree to which we can know what is prior, etc., but it seems hard to dismiss any notion of man as a certain sort of being. Yet the sort of being man is shapes all of man's cultural pursuits, hence "priority."

I think of reason and principles of knowledge in analogous terms to this example, not as a dialectic where one pole is "contextless." This means looking for unifying principles. For instance, the principle of lift is in some ways the same in different sorts of insect wings, bird wings, biplanes, drones, fighter jets, etc., and yet it is clear that these are all very different and require a unique understanding. Likewise for principles in complexity studies that unify phenomena as diverse as heart cell synchronization, firefly blinking, and earthquakes. Identifying a common principle is not a claim to have stepped outside a consideration of fireflies and heart cells, but rather a claim to have found a "one" that is present in "many." If such principles didn't exist, I don't know how knowledge would be possible.

    Importantly there is a move in Descartes, Kant, etc. to have mental representations become "what we know" instead of "how we know." This gets carried forward into philosophy of science and philosophy of language, such that theories, models, paradigms, etc. are all primarily "what we know," instead of means of knowing. I think this is a pretty fatal error, but since it is popular, I think it's worth pointing out that it plays into the demand for the "view from nowhere."

On this view, the mental representation, theory, paradigm, etc. represent a sort of impermeable barrier between the knower and the known, and hence we always know the barrier and not what is on the other side of it. I think this is based on bad metaphysical assumptions that, because they are common, often go unacknowledged. I think it's an improper absolutization of the old scholastic adage that "everything is received in the mode of the receiver," and a neglect of the dictum that "act must follow on being" for being to be meaningful.

    That's a whole different topic, I just wanted to throw out the idea that the "view from nowhere" need not be a pole of opposition, and indeed wasn't for most of philosophical history. I'd frame it instead in terms of principles and priority.

    A good analogy is a light that passes through many panes of glass. From where we sit, we might always have to look through different panes, with different tints. So there is never a case of "looking directly at the light." And yet I wouldn't want to say here that man simply cannot ever see the light, but only "the light as filtered through the panes." The light "as filtered," is still the light. The panes are transparent to some degree. They let in more or less light, and one can move around to look through different panes, and some are further back, and so more fixed in our field of vision than others.

    The critique of the thing-in-itself of modern process philosophers is relevant here. The thing-in-itself is not only epistemically inaccessible, because knowledge relies on interaction, but also entirely sterile, since how a thing is when it is interacting with absolutely nothing else, and no parts of itself, is irrelevant. To even formulate such a sterile being requires some dicey assumptions.
  • The passing of Vera Mont, dear friend.
That's very sad to hear. She will be greatly missed; she was a great contributor.
  • Two ways to philosophise.


I'm really not sure what is supposed to be disingenuous here. It seems to me that if a principle holds with no imaginable counterexamples, it's a solid principle. That's my only point.

We seem to be in agreement that there are no imaginable counterexamples, so I don't know why the principle isn't valid?

So do standards that triangulate properly represent good standards even if the community doesn't agree to them, or does the community have to agree to the standards and they also have to triangulate?

    I guess now I am trying to tell if the standards exist in virtue of triangulation prior to the community accepting them. If they do, then I would just say that those are what I mean by principles, and we have found some agreement.
  • Two ways to philosophise.


    No, and that is exactly the point!


    So there are no examples where just making up your data consistently leads towards knowledge, but it still isn't a valid epistemic principle to not just make up your data? Why isn't it a valid principle?

    But it would be if the community says so?
  • Two ways to philosophise.


    IDK, seems like grounds for a principle to me.

Can you give an example where just making up your data consistently leads towards knowledge? I would accept that as a strong counterexample.
  • Two ways to philosophise.


    If the way the world is requires that epistemic communities follow certain standards to avoid false conclusions, that sounds a lot to me like the grounds for a principle.

    What's the objection here? "The way the world is makes it so that falsifying your data and lying isn't a good way to reach knowledge, but that doesn't make not just making up your observations a valid epistemic principle because..."

    What's the end of that sentence? The only one I can think of is "because sometimes making up your observations to fit your conclusions is a consistent way to achieve knowledge," which seems clearly false.

    Yep. That's not down to the community failing to accept a principle, but a mismatch between what the community says is the case and what is the case. It's a failure of triangulation, not of principle.

    Yes, a mismatch that wouldn't exist if they had accepted the principle.

Also, even if you imagine some bizarre case where people just so happen to consistently make up fake observations that support true conclusions, this still wouldn't be a good counterexample because presumably holding true opinions in virtue of observations you just lied about isn't constitutive of knowledge.
  • Two ways to philosophise.


    Galileo’s telescopic observations, Newton’s bucket, Eddington’s eclipse photos—all involved choices that wouldn’t survive a modern methods review.

Sure, and many similar moves led to things like the approval of drugs that caused birth defects, toxic chemicals in kids' drinking water, etc. That invalid arguments can sometimes have true conclusions doesn't make them valid arguments, and that falsified data can sometimes support true conclusions doesn't make falsifying data good practice. And indeed, since it makes more sense to falsify data when one cannot support one's claims because they are false, it's probably more likely in cases where the conclusion is false.

    As an objection to "falsifying data is always wrong," this is the same conflation of normative standards and actual practice I mentioned above. It's the equivalent to objecting to "rape is wrong," because "but there are rapists," or "sometimes good people are born from rape."

    The issue isn’t that anything goes, but that what counts as "okay" or "not okay" is itself historically and contextually shaped. There is no algorithm for scientific legitimacy, but a community negotiating standards as it goes.

    I will grant that there is a gray area in some fields. In political science, people often select case studies on the dependent variable. But the quantitative analysis is also often window dressing.

However, I think it's inappropriate to conflate "anything that wouldn't pass a current methods board" with "knowingly just making shit up" (which does happen, see recent news). I find it hard to imagine how the latter is ever good inquiry, even if we might justify it ethically on other grounds using some bizarre counterexample. But counterexamples don't disprove principles; they merely show that there can be things worse than the violation of a principle that justify its violation.

    But here is the bigger issue:

    There is no algorithm for scientific legitimacy, but a community negotiating standards as it goes.

    Would it be the case that, so long as a community agrees, making up data is good inquiry and a path towards knowledge?

    I don't think it would be. So, the issue isn't just about what some community agrees. If some community does agree that falsification is ok, they're going to tend to come to false conclusions.

Community agreement doesn't keep out the case where the community agrees with falsifying data. But that isn't a bizarre counterexample; there are plenty of real-world examples, e.g. some "race realists," some anti-vaccine advocates, etc., who consider themselves a sort of epistemic community and are so convinced of the truth of their conclusions that they don't see any problem with falsifying data. This was also true with "socialist genetics." Not only was falsification allowed, people were positively pressured into it. And the result was a great many deaths.

    So what would be the objection here: "but that's not the real community, the real community is the broader scientific community?" How is that claim justified? If community practice is the ultimate justification of all standards, then communities define themselves, and are presumably infallible within themselves.
  • Two ways to philosophise.


    Asking a question is now misrepresentation? I didn't get what the point of the example was.

I'm not talking about a final theory, merely basic principles like "it's not ok to just make up your observations" or "good arguments don't start with false premises and are valid." Or, considering your objection just now, how about, "it isn't acceptable to misrepresent positions?"

    Do those not always obtain? If not, when wouldn't they obtain?
  • Two ways to philosophise.


    "Brownian motion" as the only alternative here is yet another either/or binary, about as useful as "absolute" and "arbitrary." Couldn't we allow that something in between is more characteristic of how such practices actually work?

    Sure. Did you have a principle in mind in between?

It's not a binary. It's only down to Brownian motion if one denies any determinate principles that guide discourse whatsoever. For instance, is simply faking your data bad inquiry, or is it sometimes acceptable? If even this is ruled out as a principle because everything is case-by-case consensus building, I'm not sure how there can be any determinate structure to good discourse.

    "Reasonableness" is at least something. But how do we know when someone is "reasonable." From what I can tell, this is also up to case by case consensus building. So it seems like individual opinions groping towards consensus all the way down.

    One thing to note here is that epistemology is in some sense normative. It's about what leads towards knowledge. So, when you object that "well, real world discourse sort of looks like this," that's like objecting to "rape is wrong," because "in the real world there are rapists." Nor do normative standards entail "forcing belief." Indeed, belief cannot be coerced in many cases, only assent. The point is not that all people follow epistemic standards, but that some forms of discourse are more likely to lead to knowledge than others. If it wasn't the case that some kinds of discourse were more likely to lead to knowledge than others, then I can hardly see how being "reasonable" would matter.



Is that supposed to be a counterexample? As in, "if Galileo did it and he had a good theory, then sometimes it's ok to just lie and make up observations?"

I don't know if it counts as an "algorithm," but sure, good scientific inquiry means not falsifying your data, not arguing from false premises, and not using invalid arguments.

    Arguments of the form:
    All squirrels are mammals.
    All dolphins are mammals.
    Therefore, some squirrels are dolphins.

Arguments of this form don't demonstrate their conclusion. They don't become part of good inquiry just because the conclusion is true.

    There might, in some cases, be other ethical concerns that warrant falsifying data perhaps, but that wouldn't be the same thing as it being part of a good epistemology.
  • Two ways to philosophise.


I don't think that's accurate. The position strikes me more as a sort of virtue epistemology in search of clear virtues. It isn't against argument and reasons; it just denies overarching standards for them, or even general principles. My thought is that it describes good discourse, but in a very vague way that doesn't rule out bad discourse.

    The difficulty I see is different. First, a very robust pluralism insulates claims from challenge. This is sort of the opposite of democratization; it's atomization. If someone issues a successful challenge to your position, or if you are unable to make any good arguments against theirs, you can simply fall back on: "well, there are many equally valid narratives, even if they contradict one another, so there is nothing to worry about here."

    The issue of an inappropriate democratization of truth comes up in two ways if there is vagueness though.

    A. There is no way to exclude anyone. We can say "we only allow reasonable objections," but if the standard for "reasonable" is weak or different in each instance, then we will have difficulties justifying exclusion unless everyone already agrees with us.

This is precisely what has let "race realists" be so effective in their evangelism. They come ready to fight with their studies, government statistics, etc. This is not to say they always act in good faith, but they do make appeals to evidence. And then they get rejected out of hand as beyond the pale. Yet, due to other difficulties, the culture generally lacks the resources to show why they are beyond the pale in a credible way. The result is that it looks to would-be converts that the race realists' case is stronger than it really is, because the opposing side seems to be forced into abandoning argument in favor of authoritarianism (and indeed actual censorship in academic settings).

B. Given certain assumptions about language, what you're describing can happen. Because if what we're talking about is primarily words and not being, and words just mean "how they are currently used," then everything is necessarily democratized in a way. Socrates' debates about the true meaning of justice make no sense if "justice" is just a token in a game, and others don't use the token the way he does. Clearly, he isn't using the token right, so he must be wrong. At best, he can propose that it is useful to the community to use the token differently.

    But that's very different from excluding reasons. Reasons are discussed. I suppose though that reasons arguably lose their purchase without any clear principles. "You're just engaged in post hoc rationalization, political bias, appeals to emotion, contradicting yourself, your premises are false, your argument isn't logically valid," etc. doesn't necessarily work as a "reason" if these are not considered to be illegitimate in general, but only illegitimate on a case by case basis.

@Moliere has given us the best example here. If falsifying your data and lying isn't always bad discourse, but only bad on a case-by-case basis, then the response to "you just faked that data" can plausibly be: "sure, so what?" So too for "your premises are false," or "your argument is not logically valid." And yet, if there are no general principles, these would presumably have to be appropriate in at least some cases.

But I do not think @J and @Banno are likely to agree on that one. I have to imagine that "it isn't ok to just make up fake evidence to support your claims" is going to be something most people can agree upon. Granted, on the anti-realist view that good argument is simply that which gets agreement, and all knowledge claims are simply power battles, it's hard to see how to justify this, since it would seem that faking data is fine just so long as it works.
  • Two ways to philosophise.


    I think this is more in your imagination than true -- capitalism is deeply hierarchal. "inferior" and "superior" are the words you wouldn't use on the basis of the faux-equality of liberal-capitalism, but the hierarchical relationship is there. And I'd equate, in our day and age, liberalism with capitalism

    Indeed. It's sort of a weird mix of both though. Because of the washing out of cultural norms and institutions by capitalism and liberalism, and liberalism's banishment of non-liberal approved virtues (which are quite thin) from education and public life, and the presumption of "equality" and a "classless society," people become hyper focused on a narrow range of status symbols. Fukuyama addresses this. When everyone becomes "equal," people strive for new ways to rise above the crowd. But not all of these will be healthy.

Hence, the heavy focus on wealth, elite education, status, and increasingly, sex as one of the last forms of validation not to be fully commodified or driven from the culture. As participation in civic organizations, churches, unions, etc. has collapsed and become more precarious, people have fewer sources for thymos outside the market (or the "sexual market"). In my whole career I have had a single person ask me for a raise on the grounds that they needed more money due to personal issues. I have had countless people ask for raises with seemingly no concern for the money, because they took it as a sign of worth (and indeed, self-help books for careerists stress this heavily).

    I read an article in the Atlantic a while back on this sort of thing. Cultural balkanization and the replacement of America's particular blend of Protestant Christianity with a secular liberal capitalist ethos had eroded its pretensions to be a classless society because all appeals to excellence were slowly boiled down to market power. Even the cachet of elite taste has increasingly waned. The exemplar tech bro or stock trader is not possessed of the elite tastes of past ruling classes.

This is problematic in the way all status-obsessed societies are, but more so. First, because of capitalism's intense short-termism and insatiable hunger for growth, which courts ecological disaster, but also on wholly aesthetic grounds. Homer already knew his arete-obsessed Greeks were missing something, that the quest for glory was ultimately hollow. But at least there was some beauty in it. With capitalism we get the ugly reign of quantity, without the thymotic glory of warrior cultures, or more appealingly, the sublimation of that violence into high art, or things like the Japanese tea ceremony. The knight, samurai, or the hoplite certainly has more going for them than the Wall St. suit.

    I partially blame firearms and high explosives for making war increasingly a crap shoot that is no longer dependent on physical abilities. It's a sort of kinetic Marxism! The Japanese had the right of it in banning guns in favor of the katana. :rofl: :cool:

    Kinetic Marxism will be the topic of my upcoming thread where we argue for the return of the battle axe and flail—a wholly unique argument for gun control.
  • Two ways to philosophise.
Do foundationalism and completeness lead to authoritarianism? I've considered that it might be precisely the opposite. Consider that one almost never sees appeals to authority in basic arithmetic. If there is disagreement, it is almost always over ambiguous notation. But one never needs to appeal to one's job title, involvement in practice, virtues, etc. in justifying the answer to 6 × 87 or 112 ÷ 8.

There is no need for appeals to authority because the answer can be made obvious. You can, if you really want, separate 112 beans into groups of 8. It is clear when the emperor wears no clothes. Whereas appeals to standing practice and consensus open the door to authoritarianism precisely because authority can manufacture both of these.

    Consider the classical image of Justice. She is not presented as exceptionally virtuous (hard to do in a statue). She is not surrounded by a crowd who agrees with her—rather she stands alone. She is not looking to some crowd, or upwards to some authority. Rather, she wears a blindfold. She carries no membership card, but rather scales. And the scales decide the issue, not her. Nor does she pull out different scales for the rich, for women, for the foreigner, nor use different scales in each instance. If she used different scales in each instance, we might worry that the choice of scale decides the issue. And it is for precisely this reason that she is justified in carrying her sword.

[Image: the classical statue of Justice]


This is not to say that appeals to authority, practice, virtue, or consensus are never warranted, just that they aren't ideal. They are needed where principles have less clear purchase.
  • Two ways to philosophise.


    To be fair, by this logic, it wouldn't necessarily be bad to simply lie about one's position for advantage here. :cool:



    The rest of this seems unrelated to the basic principle that intentionally lying and falsifying is not good for inquiry.

But yes, falsification has its weaknesses. Newton was also almost immediately falsified, but instead of rejecting the theory people posited additional massive bodies (the outer gas giants) at the edge of the solar system to explain the irregular orbits of the (then) outer planets. And we did indeed find those planets eventually. Mach famously declared the atom to be unfalsifiable, and the quark was derided on similar grounds, as were the anti-particles.

    Yet there is a certain sort of falsifiability that I think is a very general principle. Chesterton explains this sort very well in terms of paranoid delusions:

    The madman's explanation of a thing is always complete, and often in a purely rational sense satisfactory. Or, to speak more strictly, the insane explanation, if not conclusive, is at least unanswerable; this may be observed specially in the two or three commonest kinds of madness. If a man says (for instance) that men have a conspiracy against him, you cannot dispute it except by saying that all the men deny that they are conspirators; which is exactly what conspirators would do. His explanation covers the facts as much as yours. Or if a man says that he is the rightful King of England, it is no complete answer to say that the existing authorities call him mad; for if he were King of England that might be the wisest thing for the existing authorities to do. Or if a man says that he is Jesus Christ, it is no answer to tell him that the world denies his divinity; for the world denied Christ's.

    Nevertheless he is wrong. But if we attempt to trace his error in exact terms, we shall not find it quite so easy as we had supposed. Perhaps the nearest we can get to expressing it is to say this: that his mind moves in a perfect but narrow circle. A small circle is quite as infinite as a large circle; but, though it is quite as infinite, it is not so large. In the same way the insane explanation is quite as complete as the sane one, but it is not so large... Now, speaking quite externally and empirically, we may say that the strongest and most unmistakable mark of madness is this combination between a logical completeness and a spiritual contraction.


But sometimes whole movements have fallen into this, e.g. where any criticism of Marxism is just taken as further evidence for Marxism.
  • Two ways to philosophise.


    None?

So falsifying your data so that you can gain fame and wealth can sometimes be good practice vis-à-vis good inquiry?
  • Two ways to philosophise.


Historically, though, these examples were resolved by an appeal to principles considered valid across the lines of the presumed disciplines. That is, appeals to standards of objectivity, the idea that post hoc rationalization is not good reasoning, the notion that the political or ethnic identity of the scientist is accidental to the science's subject matter, appeals to the principle of non-contradiction when consensus/authorities in different fields contradicted one another, standards of valid arguments, etc.

If there is a bad consensus and bad practices, they don't just work themselves out through discourse as a sort of random Brownian motion. Or at least, they haven't historically, and they wouldn't do so quickly. The replication crisis, for instance, spanned many fields because the principles being poorly applied in social psychology were general, and so relevant to other fields. Likewise, the Sokal Affair and later replications weren't taken to apply only to specific journals or reviewers, but represented a problem in practice.
  • Where does logic come from? Some thoughts


Sure, but the fact that some particular process led to man's desire for truth as such doesn't preclude man now desiring truth for its own sake. That is, man can seek truth for the sake of truth and not for the sake of evolutionary advantage.
  • [TPF Essay] Dante and the Deflation of Reason


    Re Kant, as mentioned before, I didn't have space to really take on many thinkers in depth. But here is why I think Hume is a good model:

- The anthropology of economics holds to this mold. To be sure, in theory the intellectual appetites and irascible appetites are contained in "utility," which encompasses all desire, but this tends to deflate into pleasure/pain. Smith was, of course, a close friend of Hume. But economics has been massively influential in public policy and in the development of the established (now global) social order.

- Computational theory of mind suggests something very much like "all thought is ratio." The model of the Turing Machine does not include any notion of intellectus, quiddity, etc. Obviously, some proponents try to get around this with appeals to emergence. However, CTM tends in this direction and has been the dominant paradigm in cognitive science, etc., for a long time. Bayesian Brain theories would be another example; everything is mere induction. Eliminative materialism would be another. Marxist anthropology shows some similarities here too.

- The dominant, now hegemonic political ideology of the West, and of the whole global order, is liberalism. But liberal theorists tend towards very thin anthropologies that avoid the intellectual appetites (or "bracket them out"). Obviously, there are continental objections to this; I cite Han. Yet while perhaps more widely read, I think they are less influential. The Anglo-empiricist model is influential not through philosophy but through economics, liberal political theory, public policy, cognitive science, etc. Utilitarianism has also had huge influence, and makes similar suppositions.

Just for example, Rawls has a very procedural notion of reason, and his anthropology is still thicker than that of much earlier utilitarianism. Nozick had a slightly thicker anthropology, but it's still thin. Fukuyama brings in thymos, but not really the intellectual appetites, and arguably he brings it in hamstrung by his other commitments.

Another thing the paper is missing is an expansion on the idea that the more a particular participates in the universal "stream" (to use imagery from Dionysius the Areopagite), the more fully it is particular. The intellect's participation in universals, and intellectual knowledge, makes it more particular, because knowledge and understanding are required for self-determination, which is required for true unity (particularity). Consider that all amoebas are quite similar. All dogs are more similar than all men. Angels are more particular than men, being each an individual species (in Aquinas).

The same idea is in play with knowledge and virtue. All those plagued by vice and ignorance are in some ways similar. Dante is able to respect history and particularity in a way that similar Sufi texts, like The Conference of the Birds, don't, because of the recognition that perfection makes people more fully persons. More perfected persons are more particular because they are more free, more self-determining, and more fully what they are, and not a bundle of warring external causes (an idea going back to Plato's psychology). There is a sort of ascent of the particular in the stream of the universal.

    This helps Dante's philosophy of history in that Providence is not at odds with freedom and particularity, but its full realization. Solovyov is useful here because he takes some of the insights of Hegel re the rational unfolding of history, and has a similar view to Dante on the telos of history, and yet he isn't committed to a sort of providential over-determination. History is the meeting ground of truth and falsehood, the stage of cosmic drama, and not merely a proving ground for individuals. There is an element of perfection and freedom that involves the communication of goodness to others (agape descending, eros ascending) that is realized in history, through human beings' mutual empowerment of one another in the ascent Dante lays out. This is clearest in the Commedia in the roles of Beatrice, St. Lucy, and Mary. But this can be more or less actualized in the polis, and obviously a thin anthropology would be a barrier to any notion like this, even a secularized one.
  • Where does logic come from? Some thoughts


Perhaps our differences only reflect a difference in our understanding of the definitions of “knowledge” and “intellect.”

Yes, we are. Intellect in the older faculty psychology refers specifically to the understanding of universals, of form. It's not the same thing as memory, or what gets called the estimative/cogitative power that allows for problem solving and inductive pattern recognition. There is a whole big literature on this, and on whether animals can "use language" in the ways that even young toddlers can.



    I agree that from an empirical perspective we encounter particulars first, and then abstract the form. But I wonder whether that perspective risks treating the form as derivative —something we derive from the object. In the Platonic (and arguably Aristotelian) sense, form is not something posterior to the object, but that in virtue of which the object is what it is.

    That is, form isn’t just a feature we discover by experience—it’s the condition that makes experience possible. It's because of the reality of the form that we can identify the particular. It’s ontologically prior, even if not temporally so. This is where I’d place form in a “vertical” rather than horizontal order—closer to what Neoplatonism or even certain strains of phenomenology suggest.

    I wonder whether framing form as something abstracted from sensible experience is more of an empiricist perspective (e.g. J S Mill) than Aristotelian.

    I think the bolded is very important to keep in mind. And yes, I'd agree that the form that has been abstracted by the intellect has to be posterior to the form in what is known. The empiricists are copying Aristotle so they do sound similar, but for them "abstraction" has become a sort of inductive pattern recognition, whereas for Aristotle it's the active/agent intellect making the form of what has been sensed (form being communicated through the senses) come to be present in the intellect.

    When I speak of “undifferentiated givenness” or the in-itself, I don’t mean it as some kind of vague or latent actuality, waiting to be identified. To say it must have “some sort of actuality” is already to try to give it form—to insert it into the order of knowable, nameable things, to say what it is. But the point is: we can’t do that without distorting what we’re trying to indicate. Here is where 'apophatic silence' is precisely correct.

    That’s why I describe it as “neither existent nor non-existent.” It’s not an actualised thing, but it’s also not mere nothingness. This is something I’ve taken primarily from the Madhyamaka tradition in Buddhist philosophy, which insists on the middle way (hence the name) - between reification (it is something!) and nihilism (it doesn't exist). In that framework, we are dealing with what is empty of intrinsic existence, but not therefore non-existent. It’s not a substance, but nor is it nothing. It’s a kind of ontological openness. That is the meaning of śūnyatā.

Interesting stuff. I am not familiar with it. It reminds me of the chora in the Timaeus or some versions of matter. Eriugena has the distinction of nothing through privation and nothing on account of excellence. But the latter would in some sense be the fullness or all possibility, total actuality. One image I like is a sound wave of infinite amplitude and frequency, which of course leads to every infinite peak and trough canceling each other out. The result is a silence, but a pregnant silence. I mean, it's an imperfect example. Dionysius and Eriugena don't think God is a sound wave. It's more about the fullness defying finite description.

    This “in-between” condition—neither purely empirical nor purely intelligible—is what makes the Platonic view so compelling in discussions like this. It avoids collapsing ideas into mere mental projections, while also refusing to treat them as physical facts. They’re real, but their reality is of a different order—something we participate in rather than simply observe.

    :up:
  • Two ways to philosophise.


    Are you seriously advancing the epistemic position that no one is ever wrong but that the two options would be: "yes I agree," and "I don't know?"

    "My epistemology isn't 'anything goes' but in it absolutely no one is ever wrong." Sounds like "anything goes," to me

    Second, I think you're also conflating multiple senses of "undecided" here. There is:

    1. We personally do not know the answer.

    2. The positive statement that one knows that no one can know the answer.

3. The positive statement that one knows that the position in question is neither true nor false.

    These are three different things. When people have resistance to "I don't know," it is normally not on account of 1, but on account of 2, generally when there is equivocation between 1 and 2 and it is used to advance some sort of positive claim.

    So for instance, if I don't know anything about molecular biology, it would make no sense for me to demand that my local school district not teach theories in molecular biology. Likewise, objecting to creationism being taught in schools only makes sense if one thinks it is likely false, or at least unlikely to be true, not because one "doesn't know" if it is likely to be true or not. But it's popular to equivocate between 1 and 2 on this issue.

    "I don't know" is objected to because an appeal to one's own ignorance, masquerading a "modesty," is often used to advance positive claims as decisive. For instance, "I don't know issues related to the human good, therefore we should "bracket out" everything I don't accept and advance my liberal political theory, anthropology, and ideology on the whole of society." Or "I don't know if realism or nominalism is true so we'll have to 'bracket' and just presuppose my preference for nominalism is true." That sort of thing. If people don't use their own professions of ignorance to justify claims, then I think "I don't know," is only going to annoy people when it's obvious intransigence.





    Since this word "arbitrary" has come up so consistently, I'm wondering if possibly some of us are using it to mean different things. But I'm going to use it to mean "not based on any particular reasons; like a throw of the dice." On that understanding, I would answer the second question this way: "It doesn't, but if the discipline is longstanding and has smart, experienced practitioners, quite quickly the demand for good reasons will channel the discussion away from arbitrary and unfounded practices. Furthermore, just about no one presents their views in this way."

Right, so this is an appeal to a sort of virtue epistemology. Virtues are principles, so I can get behind that. However, I don't think "smart" and "experienced" are necessarily good virtues here. Consider the examples of Aryan physics, socialist genetics, phrenology, etc., which were created by intelligent, experienced scientists.

    Practices have to be open to external critique by some additional standard or else there is no way to identify pseudoscience. You get all the issues of the hermetically sealed magisterium otherwise.

    Here is the classic answer: sciences are based on per se predication, what is essential to things. And it is not essential to living beings that they are observed on Tuesdays, so we do not have a sui generis "Tuesday biology," nor is it essential to physical processes that they are researched by Jewish scientists so we cannot have a "Jewish physics." This isn't a silver bullet, but it captures most of the egregious examples.

    But from your response, it seems like what you really have is just loose criteria for "when people deserve a hearing," or are "reasonable" and not really anything about correctness or truth per se. Yet might this preference for current practice and what is deemed "reasonable" tend towards ruling out radical critique? If we were having this conversation just 100 years ago, perhaps a bit more, the proposition that "women and Africans have comparable mental abilities to European men" would be considered "unreasonable" by many, including smart, thoughtful people engaged in relevant practices.

Hence, it seems that there are general principles here vis-à-vis various sorts of bias that are inappropriate. And these issues are still with us. One of the things the replication crises have exposed is that fields that became ideological echo chambers became very willing to accept and teach prima facie very hard-to-believe conclusions as "well supported" if they just so happened to support ideological agendas.
  • Where does logic come from? Some thoughts


    "Curiosity killed the cat," right? There is a sort of anthropological/metaphysical question of if animals can "know" as in, intellection, but obviously they can know in different ways, e.g. "sense knowledge," memory, etc. Both sensing and knowing involve a sort of union with the known.

    But the quote is from Slick Ari at the opening of the Metaphysics and he only mentions man because that's his focus.
  • Iran War?


No, I would imagine the calculation was that they had a window of opportunity to act while Iran's proxies were largely defeated and when they would not face significant additional pressure from third parties for carrying out the strikes. Obviously, internal politics could be another factor.

My guess is that they will keep degrading Iran's air defenses and missile capabilities, trying to reach a point where they can keep inflicting costs on Iran without facing significant costs in return. Iranian missile volume is already down significantly. The goal is probably to signal an open-ended commitment to destroying their military and oil infrastructure until they agree to negotiate a settlement that ends the nuclear program. Barring that, there is the idea of the US destroying the main enrichment facility, or Israel doing it with a cargo plane after having fully degraded Iran's air defenses (which is probably risky and might not work), or, though unlikely, an air assault operation to destroy it.

    Iran was already in a precarious political situation due to major, violent unrest across the country aimed at the government. Iran is only about 60% Persian and there are separatist groups Mossad has been able to use in the past for additional pressure.

In the past year or so Iran has also had open battles with the Taliban over water rights and been bombed by Pakistan, so they sort of have issues on all fronts due to the type of foreign policy they have pursued. But obviously the successful decapitation strike on Hezbollah that forced them to make a separate peace, the collapse of the Assad regime, the loss of influence in Iraq, and now this have made them significantly more vulnerable, which explains the timing as well.
  • Where does logic come from? Some thoughts


    This also speaks to our curiosity. We always want to know what is over the horizon. We are natural explorers. It is in our nature to see the world more openly - to seek out new worlds and new civilizations - to boldly go where no man has gone before, because you never know what part of reality might be useful for something

    Or simply because "men by nature desire to know," or because they desire the glory of achieving the difficult.
  • Two ways to philosophise.


Also, if you want to make an appeal largely to "reasonableness," this would suggest something like a virtue epistemology (which would be my preference in many respects). But a virtue epistemology has to have virtues, and the virtues cannot be different in every instance. And virtues presumably have to be virtues for some reason. Normally, intellectual virtues are considered virtues because they keep false judgements out and help someone attain to truth.
  • Two ways to philosophise.


    That makes sense. I was thinking "binary" in terms of 2, because this seems to be the objection.

    I might add:

    5. If one claims standards are wholly unique in every instance then one cannot keep arbitrariness out.

That's a little trickier. But the antecedent of 5, that standards are wholly unique in every instance, is obviously false as a descriptive claim. To use the example of economics given earlier, it is not the case that economists use different epistemic standards for every question. They do not complete peer review by judging each submission by entirely different standards. And so too for philosophy of science and epistemology.

    This gets at one of the unaddressed issues, which is identifying pseudoscience.

    And the idea that standards are wholly different in each instance is at odds with the idea that authoritarianism is always inappropriate in epistemology or that only reasonable narratives need be considered.
  • Two ways to philosophise.


    The troublesome word is again "correctly."

    I had considered you to be laying out criteria for correctness there because you wrote:

    Some narratives are acceptable, true, or valid for one sort of reason; some are so for another sort; some for a third sort; etc.

    And then gave your overarching standards for those reasons. The context for this was that I pointed out that if someone declares that their epistemology is not "anything goes," but then says they can give absolutely no reasons for when something "doesn't go," they have offered an obviously unsatisfactory response.

    If you're denying these as standards then we're back to: "my epistemology is not 'anything goes,' but I can give no explanation of why some narratives 'don't go.'" Or: "my reasons for denying some narratives are sui generis in each instance." How does this keep arbitrariness out?



    I think what you're imagining is a kind of meta-conversation in which some issue about epistemological standards could be put to rest for all time, and one version deemed correct. I don't believe this happens. I believe philosophers (and scientists) come to points of agreement, are better able to frame their disagreements, and then move on.


    I'm not. This seems like yet another attempt to set up a false dichotomy between declaring oneself infallible and having any epistemic criteria at all. But there are many fallibilists and yet very few who subscribe to "there are no epistemic principles" or "epistemic criteria are sui generis in every instance." Indeed, I would guess virtually none, because both of those positions have obvious problems.

    The idea is that there are better or worse epistemic principles. That doesn't mean we necessarily know them or know them with certainty. The denial that there are better or worse epistemic principles would seem to allow for arbitrariness, for how would it keep it out?

    In a more concrete actual practice, we see the consensus-driven model in constant operation, and able to produce perfectly satisfactory results. I think in the human sciences generally there is a great toleration for unresolved questions and disagreements, and very little belief that some day, some way, all these ways of interpretation will be put aside in favor of a single correct version. As for hard science, I have to rely on the discussions by others, but something similar seems to be in play, though possibly there's more hope for an ultimate TOE -- don't know if that's still on the table these days.


    This is the same false dichotomy.

    I guess an issue here might be "correct/incorrect" as another over-simplified binary. Again, I have to pose the obvious question: If a view about science or epistemology can be shown to be correct, as you mean it, why has this not ended the discussion? How is it possible that the debate is still vigorous?


    See above. I never said it was a binary. I said that if one claims that one's epistemology is not "anything goes," then not all narratives can be equally correct. But if not all narratives are equally correct then in virtue of what is this judgement made? Nothing about that requires a binary, claims of infallibilism, etc., it simply requires the observation that if one can give no reasons for their standards then their standards are open to arbitrariness.

    Likewise, if one claims one's standards are always different in each instance, then presumably the meta-standard by which they judge standards appropriate in each instance is also different in each instance. And the meta-meta-standard will also be unique in each instance. And so on. If this is not arbitrariness, it certainly cannot keep arbitrariness out. For by what metric would any standard be deemed poor in any particular instance?
  • Where does logic come from? Some thoughts


    This might be a point where we’re crossing conceptual wires a bit—because I think there’s a distinction to be drawn between ontological and temporal priority.

    We probably did. An important distinction is between efficient causes ordered as a contingent, temporal, linear series and efficient causes ordered as a hierarchical causal series. The former is something like a chain of dominoes knocking each other down. The first domino knocked over is prior to the last in a contingent linear ordering. But we also have hierarchical causal series of efficient causes, like a book sitting on a table. The table needs to be there for the book to be sitting on it, but the table can be there without the book sitting on it. One is dependent on the other, but they aren't mutually dependent. The dependence is there at every moment, not in some linear sequence.

    This is a common source of misunderstanding in St. Thomas' Second Way, BTW. People understand it as "the universe must have a temporal beginning," but actually he thought reason and observation alone (at least what he had access to) couldn't decide if the cosmos was eternal. He merely thought that Aristotle had failed to demonstrate that it was definitively eternal. But even if the cosmos is eternal, we still need a first cause in a hierarchical causal series.

    I guess this would be relevant in a few ways. Substantial form doesn't exist outside substances or the intellect. There is the form "cat" 'in' cats themselves and 'in' the intellect of knowers. But the form has to be in order to be informing these things, in the same way a table must exist for a book to rest on it. Yet it seems possible for there to be cats but not creatures with intellects. The existence of the form vis-à-vis cats is not dependent on the existence of the form in finite intellects.

    Of course, we might say the reverse is true. However, in terms of a temporal linear ordering, it does seem that cats had to exist before people could sense them and abstract their form.

    This is obviously a framing in terms of Aristotelian metaphysics, but I think the concepts at play are isomorphic to those of many other systems.

    When you ask whether “undifferentiated givenness” is first in the order of being or in the order of experience, I wonder whether that’s still considering the question from a temporal perspective. The eternal is not temporally prior, because it’s outside of time—so it can be said to be ontologically prior, as the ground or condition of temporal existence. But treating it as temporally prior still risks a kind of reductionism.

    I may have misunderstood you. I didn't think "undifferentiated givenness" meant to refer to anything eternal, but rather the immediacy of sense certainty without any mediation. So I was thinking in the order of experience. In the order of created, changing (physical) being, my thought would be that for anything to be anything at all, it has to have some sort of actuality. So the temporal priority of an entirely undifferentiated being that is then actualized by consciousness strikes me as somewhat like the Platonic demiurge, giving form to nothingness. The example of "consciousness causes collapse" is sort of apt here, but not really, because that doesn't presuppose sheer potency prior to collapse, but rather a delimited sort of indeterminacy.

    Again, the question is the sense in which numbers are prior. Numbers do not exist at all on the phenomenal plane - you won't find them anywhere, except in the act of counting. So they are not temporally prior, even though there were obviously numbers of things that existed before anyone was around to count them.

    Wouldn't their existence in the intellect be on the phenomenal plane?

    But if numbers of things existed temporally prior, and the natural numbers come to be in the intellect by abstracting the form of magnitude and multitude, how is that not temporal priority?

    That's at least the idea behind the abstraction of form. Things exist in one mode in material beings and another in the intellect of finite beings. But then finite beings don't create the form, but rather abstract it. We could say there is a generation of the intellectual mode (a reception of form by potency) but not a creation (something new).

    So, what consciousness are they constituted in, if it is unnecessary to their existence that there be expressions of them or awareness of them? I'd be wary about venturing an answer to that question. Suffice to say they are real possibilities that can only be apprehended by a rational intelligence - not necessarily yours or mine (definitely not mine, as I'm bad at math).

    It's a tough question, but I would object to the idea that mathematical objects are "mind independent." If they have no intelligibility, no quiddity, no eidos, then they are nothing at all, but to possess these is to have intellectual content. I don't think anything is "mind independent" in the sense this is commonly meant, a sort of Kantianesque "noumena" of being devoid of intelligible content.

    So, I guess I would try to explain this the way David Bentley Hart does in All Things Are Full of Gods, that the notion of bare noumenal "material" existence is a mistake, an inversion of the proper order.

    But bad at it or not, maths deals in necessary truths. And it’s precisely this sense of necessity that makes the question “where does logic come from?” so important. We’re not just talking about how humans happen to reason, or how nature happens to behave, but about the conditions that make truth, structure, and intelligibility possible at all - how reason is imposed upon us.


    Yes, that makes sense to me.
  • Two ways to philosophise.


    First, the obvious:



    Not necessarily. We might not be denying a position, and not affirming it, but leaving it undecided.

    This is still saying some positions aren't true/correct. To say "all positions are true or undecided, and at least some are undecided" is still saying that not every position is true.



    You can no doubt see where I am going.

    Yes, but I don't get the relevance at all. What's the point?

    "Ha, gotcha! My epistemology doesn't say "anything goes," because it actually says that while no one can ever be wrong, it's also not true that everyone is always right."

    First, this is still "anything goes," as far as I can tell. If you cannot ever tell anyone else they are wrong, that is an epistemology that is too weak. It also relies on the idea that no narratives/statements are ever either correct or incorrect, i.e. that LEM never applies. But it hardly follows that because not everything is binary, absolutely nothing is.

    Consider: "Sure, I don't think you're wrong about the fact that putting mercury in kid's lunches is good for them and will improve their grades, but I think it is neither true nor false that putting mercury in kid's lunches is a bad idea, so we shouldn't do it." Saying something has a truth value gap, or is not truth apt, does not produce strong warrant.

    An epistemology that can dismiss no positions as wrong is too weak, and I think this is fairly obvious. "I don't think x is either true or false" doesn't produce strong warrant for action. Further, if you cannot reject any position, then consider that even if I had said "all statements are either true or false," you would not be able to say to me "that's incorrect," but rather only "it is neither true nor false that all statements are binary in this way." And yet to tell someone, "you're wrong about that statement, it is neither true nor false," is to tell them they are wrong, not that they are neither right nor wrong. You'd find it quite impossible to defend this solution while actually keeping to it.

    And note, you are equivocating on "undecided" when you say that "I don't know" is the same thing as "undecided." You clearly know this on some level, because you know what the term means in the formal context. For instance, your reply here likens "undecided" to merely any pronouncement of one's own lack of knowledge. But I have to imagine you don't really think that accepting LEM is declaring oneself infallible. Most philosophers, for most of history, accepted LEM, even radical skeptics. For instance, by your definition here, the Goldbach Conjecture could be deemed "undecided" because it isn't proven, and yet that would obviously be an equivocation on the term "undecided," switching between its formal and informal meaning.

    The case, for @Tom Storm's edification, that corresponds to the notion "undecided" in denying LEM would not be: "I don't know the answer to 'idealism, psychophysical parallelism, god…,'" but rather "these positions are neither true nor false." In some sense then, it isn't modest. It claims to know something about the truth value of the statement in question.
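
    Put schematically (my own shorthand, a sketch only, reading K as "it is known that" and T as "it is true that"):

    "I don't know whether P": $\neg K(P) \land \neg K(\neg P)$
    "P is neither true nor false": $\neg T(P) \land \neg T(\neg P)$

    The first is silent about the truth value of P; the second makes a positive claim about it.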

    Second, this entire conversation is a non sequitur, because my point was that @J's standards allowed for judging their own refutation to be correct, and clearly opened the door for affirming contradictory positions as "correct." I never said "each narrative is either correct or incorrect." Depending on how one uses "correct," descriptions/narratives can obviously be more or less correct. The Doctrine of Transcendentals allows that things can be, in an important sense, more or less true. I've already corrected you on this above. See:

    So a decision made for no reason at all isn't arbitrary?

    "It isn't 'anything goes.' Why doesn't something go you ask? I can offer no reason/principle/criteria to justify why some things don't go."

    It's an error to call that arbitrary?

    Now if you're going to disagree with that, by all means. I never made the point you're ascribing to me.

    Lastly, the OJ Simpson example is absurd. Do you agree? Can you come up with a non-absurd example? Statements about future contingents would at least be a stronger case. But if one wants to argue that LEM doesn't apply to future contingents, it does not thereby follow that LEM doesn't apply in cases where ignoring it leads to absurdity. Indeed, ignoring material logic entirely and excluding LEM from the whole of human inquiry, across all subjects, seems itself to be a universal principle of the sort you are denying exists.
  • Two ways to philosophise.


    Banno, you obviously know the answer to the question; you're using the correct term. It isn't claiming that the statement is both true and false (a contradiction), but it is a violation of LEM. However, in this particular instance, the violation of LEM is absurd.
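
    In symbols (standard notation, nothing exotic):

    Contradiction: $P \land \neg P$
    LEM: $P \lor \neg P$

    A gap view does not assert the first; it merely declines to assert an instance of the second.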

    I mean, what's the point here re epistemology? Because some systems exist that ignore LEM, should we consider the possibility that it is neither true nor false that OJ was the person who killed his wife? Or that it is neither true nor false that she was murdered?
  • Two ways to philosophise.


    Whatever. Seems to me that just repeats the same error.

    So a decision made for no reason at all isn't arbitrary?

    "It isn't 'anything goes.' Why don't some things go you ask? I can offer no reason/principle/criteria to justify why some things don't go."

    It's an error to call that arbitrary?

    If we allow a case in which it remains undecided if some sentence is true or false, then do we have a contradiction?

    What's an example of an "undecided" historical or scientific fact? If I think of a murder trial, for instance, this suggests to me that no one killed the victim found stabbed repeatedly, or that no one in particular killed them, which seems absurd.
  • Two ways to philosophise.


    What do you mean by undecided? Do you mean "we don't know," or do you mean "Bin Laden was neither the mastermind of 9/11 nor not the mastermind of 9/11"? I think the latter in this example is clearly farcical.

    Either OJ Simpson really killed his wife or he didn't. It's possible we can never know, or never know with certainty. Some historical facts are not accessible. That doesn't mean that "Nicole Simpson was stabbed and hacked up by no one in particular" is an option.
  • Two ways to philosophise.


    Stating that all statements are binary does not show that all statements are binary, nor assign a burden to those who deny that all statements are binary.

    This statement?

    Either all narratives are acceptable/true/valid, whatever you want to call it, or they aren't. If some aren't, in virtue of what are some to be rejected?

    But this is a misreading. I did not write "either each narrative is true/correct or it is false," or even "either each narrative is true/correct or it isn't." I am saying: "if not everything goes" (i.e. if not all statements are true/valid/correct), then you need some reason for why not all statements are true/valid/correct, else the choice is arbitrary.

    It doesn't say anything about the statements being binary. In saying "not anything goes," you have already admitted that some narratives "don't go." My only point here is that if you decide that some narratives "don't go," you need some reason for that decision, else it is arbitrary.

    The point is merely that there cannot be a blanket denial of any principles/criteria/reasons.

    So then we reach: "but the principles/criteria/reasons are different in every instance." My question then would be: "if they are different in every instance, in virtue of what are they good criteria/principles/reasons?" The denial of any overarching principles doesn't lead to arbitrariness in the obvious way that a total denial of all reasons/principles does, but I am not sure how it keeps arbitrariness out either. In virtue of what does one know the right reasons/principles for each instance, if they are always different?
  • Two ways to philosophise.


    All of them.

    I don't think I suggested anything remotely like this. Is this in reference to wisdom? The point I made there was very simple: a contentless, vacuous term cannot be used as a criterion that keeps anything out, because it applies equally to everything.

    You're the one who turned your misunderstanding into all sorts of different theses.

    And yet non-classical logics are coherent. Non-classical logics, such as paraconsistent logics, do allow for contradictions without collapse, and they are mathematically coherent and well-developed.

    Ok. It still seems problematic to me if various scientific or historical claims are allowed to directly contradict one another and yet be equally "correct narratives." Consider: "CO2 emissions do and do not increase global temperatures," or "Osama Bin Laden was and wasn't the mastermind of 9/11." Or for philosophy, "we can and cannot know the external world exists."

    Now, before you go off into claims of "infallibility," I am not saying that we always know the answer in such disputes, or that both positions might not be well supported. I am saying that claims like "Bin Laden was the leader of the 9/11 attacks" and "he was also not involved with them at all" should indicate that at least one cannot be true (obviously, some sort of resolution through finer distinctions or an identification of an equivocation notwithstanding).
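
    For what it's worth, the precise sense in which paraconsistent logics avoid "collapse" is that they reject the classical explosion rule (ex falso quodlibet), on which a contradiction entails anything whatsoever:

    $P, \neg P \vdash Q$ (valid classically, rejected paraconsistently)

    Rejecting that rule keeps a formal system from trivializing, but it does nothing to show that two contradictory historical or scientific narratives are both correct.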

Count Timothy von Icarus
