• A question concerning formal modal logic
    What systems have that predicate? Is it definable in typical modal systems? What is the definition?TonesInDeepFreeze

    You can define any predicate you like, necessary or otherwise. To make a predicate 'P' that is necessary for an individual a at a model, you just posit that the model you're working with is such that for all worlds w in the set of worlds W associated with the frame of the model, P(a) evaluates to true at w.

    The standard Kripkean treatment does not, of course, allow for a primitive predicate like this to be true for all individuals (or just some individual) in all models. But necessity, on a Kripkean semantics, is not a matter of logical truth that generalizes over models – it's a matter of truth at all worlds accessible from some particular world, and if we have an accessibility relation on which every world is accessible from every other, this is equivalent to truth at all worlds in that particular model. There is no impediment to supposing such a model.
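    The stipulation above can be sketched as a toy Kripke model. All names here (`worlds`, `access`, `extension_P`) are my own illustrative inventions, not anything from the thread: P(a) is simply posited true at every world, so with universal accessibility, □P(a) holds everywhere in this particular model.

```python
# Toy Kripke model (illustrative names only): a predicate P made
# necessary for individual 'a' by stipulating it true at all worlds.

from itertools import product

worlds = {'w1', 'w2', 'w3'}

# Universal accessibility: every world sees every world (an S5-style frame).
access = {(u, v) for u, v in product(worlds, repeat=2)}

# Valuation: the extension of P contains 'a' at every world of this model.
extension_P = {w: {'a'} for w in worlds}

def P(individual, world):
    return individual in extension_P[world]

def necessarily(fact, world):
    """True iff `fact` holds at every world accessible from `world`."""
    return all(fact(v) for u, v in access if u == world)

# P(a) comes out necessary at every world of this model...
assert all(necessarily(lambda v: P('a', v), w) for w in worlds)
# ...which, given universal accessibility, is just truth at all worlds.
assert all(P('a', w) for w in worlds)
```

    Note that nothing here generalizes over models: a different model could assign P a different extension, which is exactly the point made above.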
  • A question concerning formal modal logic
    This depends on how you construe existence. If you just have an existence predicate, E!, then sure – you just say that E!(a), let's say, where 'a' refers to that individual, values true relative to every world w in the set of worlds W of your frame.

    If you construe existence as a matter of existential quantification over identity, as ∃x[x=a], to mean that there is an individual identical to a (the individual you're interested in), then it depends on what your quantifiers range over. In fact, on a classical Kripkean treatment, existence is always necessary existence, since if there is some individual x identical to a in the domain of individuals, then there will be one at any world, since the domain of individuals and the domain of worlds are simply separate.

    On the other hand, you can make the domain relative to a world, such that at world w, there is an individual x identical to a, but at world w', there is none (because the domain associated with w includes a, while the domain of individuals associated with w' does not). Here, you are not forced to make existence necessary existence, but you can – you can just include a in the individual-domain of every world in your domain of worlds.
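    The contrast can be sketched with another toy model – again, the world and individual names are my own, purely for illustration – reading 'a exists at w' as ∃x[x = a] with the quantifier restricted to the domain associated with w.

```python
# Variable-domain sketch (illustrative names only): existence as
# existential quantification over identity, relative to a world's domain.

domains = {
    'w':  {'a', 'b'},   # a is in the domain of w ...
    'w2': {'b'},        # ... but not in the domain of w2
}

def exists(individual, world):
    # ∃x[x = individual], with the quantifier restricted to world's domain
    return any(x == individual for x in domains[world])

# Contingent existence: a exists at w but not at w2.
assert exists('a', 'w') and not exists('a', 'w2')

# Making a's existence necessary is just a modeling choice:
# include a in every world's domain.
for w in domains:
    domains[w].add('a')
assert all(exists('a', w) for w in domains)
```

    On a constant-domain treatment there is only one `domains`-style set shared by all worlds, which is why existence there collapses into necessary existence, as noted above.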

    A logic that banned the necessary existence of an individual would have to make some special provision for how existence is interpreted, and why you could never have a domain of worlds such that an individual exists at every world.
  • No epistemic criteria to determine a heap?
    It does if you accept any responsibility for the care of P1 as well as P2, and try to apply logic.bongo fury

    I've already answered this a couple times. I'm not sure what you're hung up on.
  • No epistemic criteria to determine a heap?
    Sure. Not because we aren't pretending (we are) but because the game is better described as tolerating dissent.bongo fury

    The issue is also that insofar as epistemicism purports to be a description of this game, it is a false one – vague language isn't somehow secretly precise, and players of the game are to some extent metasemantically aware of this. An untutored person pretty much invariably (in my experience) finds epistemicism absurd upon first hearing of it. Not only that, it takes quite some time to even get them to understand what epistemicism is, because the very notion is so far from what they take for granted about the way language works that it takes a while even to explain it.

    So if epistemicism neither captures people's metasemantic awareness of their own language, nor seems to describe anything 'objective' in the practice itself, what is its utility as a hypothesis? Are you defending it in any capacity, or just using it as a springboard to talk about the difficulties with vagueness? I could see the proposal to act as if it's true – a recommendation that we treat language as arbitrarily precise – as a coherent opinion, though still not a good one. What you seem to be saying now, however, is that epistemicism isn't really true in any sense – it just helps us highlight some features about vague language that are puzzling to us (though even here I disagree – I think vague language is vague, and so can cause concrete problems of indecision in everyday life, but that doesn't make it puzzling, as if we were fundamentally confused about what's going on).
  • No epistemic criteria to determine a heap?
    Ah, OK. I think I didn't pick up sufficiently on the 'pretend' part.

    Surely, though, pretended things aren't so? Is your position that we ought to pretend there is a single correct use of a term, and in the case of vague language, pretend to be epistemicists?

    But here, as we discuss this now, we presumably aren't pretending – so shouldn't we say epistemicism is false?

    Epistemicism as a 'noble lie' would be a funny position to take! Or maybe it's a mutual pretense we're all in on? But then, I have to admit I fail to see the value in acting like vague language determines precise boundaries. Sometimes it's useful to be more precise, sometimes not.
  • No epistemic criteria to determine a heap?
    So, you do reject P1 with respect to general usage, in English, of the word 'heap'. I accept that you accept P1 with respect to your own usage. Your personal threshold is perhaps much further along than one. But you appear happy to acknowledge that usage as a whole allows for literal application of the term to a single grain. A linguist or dictionary compiler may beg to differ. They would offer a single grain as an obvious example of incorrect usage, or opposite meaning.bongo fury

    I accept P1 because I wouldn't apply 'heap' to a single grain.

    You seem to think that because 'heap' has some property preventing it from being applied to a single grain, therefore P1 is true because people 'can't' apply it to a single grain.

    But you've got it backwards. It's because people don't use 'heap' for a single grain that P1 is true. We could turn around and decide to start applying it to a single grain, if we wanted to, and declare P1 false as a result. I just wouldn't want to, because changing around the application of words is confusing and sometimes pointless. And so I take P1 to be true.
  • No epistemic criteria to determine a heap?
    They can be bent, but not too far, and obviously how far is the puzzle.bongo fury

    The problem is this is not true. You seem to be hung up on the false idea that a magical barrier exists preventing people from using words in certain ways. It doesn't – of course people tend not to bend too far, but it's not like they can't, as some philosophically interesting matter. Of course they can (and they can even move the bishop non-diagonally – try it yourself...).
  • No epistemic criteria to determine a heap?
    Of course, I would not reject P1, because I think using 'heap' in such a non-standard way is pointless and confusing.
  • No epistemic criteria to determine a heap?
    I certainly do. Ridiculous as I find the 'hidden step', I think that ordinary usage deserves some kind of recognition of its ability to distinguish between correct and incorrect, in some way that doesn't fizzle out to 'relatively correct'. Usage can sometimes be a matter for negotiation, and adjudication, but sometimes not. We know that anything black is an obvious counter-example to white, and is therefore anything but minimally white, and similarly for off and on, bald and hairy, etc.

    Hence my readiness to restart, and invite you to consider an absolutist position on a single grain. E.g.,

    [1] Tell me, do you think that whether a single grain can be correctly called a heap in common English is a matter for negotiation or adjudication in context?

    I appreciate fully that you may well see no need at all to deny that proposition. (I'll have to bluster that you don't speak English, but never mind!) But if that's because you have embraced anything like the half bell curve as a picture of usage (or of fuzzy truth), then notice that you are, after all, ditching P1 and not P2.
    bongo fury

    All these things are a matter of adjudication. You could choose to use a word in a highly nonstandard way, and people could go along with it – but they often won't, and they'll be more unwilling to, the farther you move away from an established usage. But if you decide to use 'heap' to refer to a single grain too, then sure, go ahead, that's also a pattern of usage that could be established. It would be 'incorrect' in virtue of some prior pattern of established usage, but so what? Patterns of usage can be re-negotiated as well. This is a matter of how to apply the word, not an interesting inquiry either into the nature of language, or the nature of sand and piles of it.

    The epistemicist, in appealing to a strict notion of 'correct usage,' is invoking a kind of magical view of language. That is, in addition to facts about how speakers coordinate their thoughts and behaviors using words, epistemicists seem to think there is some extra fact about words, unknowable in principle, that determines what intrinsic property they have, in addition to or maybe even independent of, all these facts. But there is no reason to believe such a thing exists – again, it is like thinking words 'have' meanings the way elements 'have' atomic numbers. But this is a fundamental misunderstanding – to say a word has a meaning is no more and no less than to say the word has certain causal powers in virtue of a community of speakers coordinating to use it in a certain way. There are no other semantic properties hiding behind this, as if words had magnetic properties attracted to some physical objects and not others.

    In terms of the history of semantics, I think of what many of the analytic philosophers of language do as a kind of return to a magical or pre-modern view of language, whereby people tend to think that words have quasi-magical powers in their own right to attach to or 'get at' objects – hence the metaphors of magnetism in reference, and so on. But semanticists have known forever that this isn't so – words relate to things by having causal effects on interpreters, who then causally interact with those things (this is Ogden & Richards, from the 1920s, who take this insight to be the start of modern semantics). Analytic philosophers are sort of like the magicians who want to know something's true name, in other words – yes, we can call Johnny any number of things, but which word really refers to him? There is a very, very basic confusion happening here. Johnny is the referent of 'Johnny' because of how people are disposed to refer to him – the name 'Johnny' doesn't have other special properties that designate that man as its intrinsic proper referent, over and above all facts of usage!

    When someone says a certain usage is correct, they might either mean: (i) as a descriptive matter, that this is how people tend to use the term, as summed up by some statistical measure (based on prior usage or an inference about disposition to future usage, or whatever), or (ii) as a normative matter, that some use is to be singled out as how the word ought to be used. But neither of these is a descriptive fact about words having meaning, as if that were something else besides how people use a word. This, as I said above, is the return to the kind of magical, pre-modern view of meaning.
  • No epistemic criteria to determine a heap?
    Who is correct is a matter of arbitrary decision in this case, since it is a matter of arbitrary decision whether we choose to apply the word 'heap' or not, and so construe it as correctly applied or not (and something is a heap iff the word 'heap' is correctly applied to it, 'iff' being read as material equivalence).

    We might disagree over something's being a heap, even if we know very well what sort of thing it is – but this will not be a disagreement over anything involving, say, the nature of the pile of sand, but rather a disagreement over whether, given that we know what the pile of sand is like, the word 'heap' is rightly applicable to it. Who is 'right' or 'wrong' in these scenarios? It's a matter of adjudicating how to use the words, which may or may not be important.

    The epistemicist has the 'atomic number' model of metasemantics, which as I said, is in my view mistaken.
  • No epistemic criteria to determine a heap?
    Sorry, I don't get the joke you're making. If P2 is false, we're done, presumably? I looked through the rest of your post but couldn't make sense of how it was making a rebuttal (if that's what it was doing).
  • No epistemic criteria to determine a heap?
    Absolutely.bongo fury

    Then we're done, aren't we?
  • No epistemic criteria to determine a heap?
    As to the Sorites Paradox, it is Premise 2 that is false – one is often at liberty to say that the addition of a single grain creates a heap where there was none before.

    Now maybe this strikes you as puzzling because it seems that one would not be able to find a great criterion for which grain ought to cause the shift. But that's the point – sometimes we have to make a decision about how to apply the term (though often we will not, and will just shrug instead), and we're free to make it at arbitrary levels of precision if we so please. Of course, as we get away from the borderline cases, our arbitrary decision will become less and less defensible in the linguistic community, as we move farther and farther away from standard usage. But in the gray area, people will be tolerant with us, and allow us a fairly large range of such decisions, if there ever is the need to make them.

    Of course, we seldom have to decide a boundary of one grain at which a heap becomes not a heap. Where we do have to decide (let's say, for some reason, we're buying a quantity of sand, and a 'heap' costs so much), then we actually are free to agree on arbitrary levels of precision down to the grain in the use of the term, if we want to. Hence, Premise 2 fails.
  • No epistemic criteria to determine a heap?
    The demand that there be an exact criterion determining what is or is not a heap comes from a mistaken metasemantics – the assumption is that words have their criteria of application like an element might have an atomic number. We look at it closely enough, and we determine exactly what it is.

    But what determines whether a word is applicable in a certain case? There is no fact about this independent of its being used to apply to various cases. Therefore we do not first have the word with a well-defined applicability criterion, and then people using it correctly or incorrectly according to that criterion. Rather, we have people using the word for certain cases, and in virtue of this, we say that its use is correct or incorrect insofar as it conforms with those prior cases (subject to semantic drift over time).

    If it is the use of the term that gives it its applicability, then where its use has no perfect standard, its applicability will have none either. You've got it backwards – that we don't use 'heap' in an exact way is the datum, the 'fact of life,' not something that emerges as a problem from the (false) assumption that all terms have an exact application.
  • Currently Reading
    Robert Aquinas McNally – The Modoc War: A Story of Genocide at the Dawn of America's Gilded Age
    Peter Guardino – The Dead March: A History of the Mexican-American War
    William T. Vollmann – The Dying Grass: A Novel of the Nez Perce War
  • The Death of Analytic Philosophy
    I do agree that a philosophy that doesn't spend its time making shit up serves a largely negative function, so can't survive as an independent discipline. But I also think that it doesn't matter whether philosophy survives as an independent discipline, any more than it matters whether astrology does. It affects little, matters little. It's been grandfathered in, and would never survive today on its own merits.

    To the extent that Anglo-American philosophy continues to exist, it will do so because it apes other disciplines, or flails to be 'relevant' to them by means of commenting on them. But this too is uninteresting, in my view, since typically philosophers are worse equipped to comment on these things than those who actually work in them. They tend to be dilettantes, trained with a generic 'toolbox' of techniques of inquiry that don't really work. And so they remain a kind of peripheral annoyance, but one that for the most part keeps to itself.
  • The fact-hood of certain entities like "Santa" and "Pegasus"?
    Hard to see how you got that impression. Quine very deftly traces the problem to ancient puzzles of ordinary language.bongo fury

    Who is honestly puzzled by fictional entities? Who is confused about what they are? Is there anyone who is worried, for example, that they will run into Harry Potter on the subway? No; we all know what we mean in saying either that he exists or doesn't, and what a character in a book is. It's only philosophers that confuse themselves.
  • The fact-hood of certain entities like "Santa" and "Pegasus"?
    Right, no need for Quine to write On What There Is, then.bongo fury

    Agreed!
  • The fact-hood of certain entities like "Santa" and "Pegasus"?
    Since existential generalization is a rule of inference in an artificial language, whether it applies in this case is up to how the logician defines that language.

    You can have logics that allow it, or logics that don't. It really doesn't matter.

    In some sense there's no problem about the existence of fictional objects – we all know perfectly well what we mean by saying they do or don't exist, and no one is confused. The problems only come in when we try to formalize languages talking about these things and try to keep the rules of inference straight among them.

    There are two goals creating such a language might have – as an engineering project, to make sure everything works in the way we want it to, or as an empirical project, to formalize something that approximates 'natural' speech about fictional objects.

    As to the former, you can do whatever you want. As to the latter, I tend to think the issue was definitively settled by the Lewisian analysis from the 70s that made use of Kripkean modal logics, and that there is no interesting issue here. People continue to write about it, but that's the nature of philosophy – when your salary is paid by writing about something, you'll write about it.
  • The Death of Analytic Philosophy
    Roughly, it begins with things philosophers say as its data, rather than questions presented by the wider tradition. Its focus is on creating technologies to explicate what is said, whether formal calculi like mathematical logic, or metasemantic heuristics, like ordinary language analysis or the positivist criteria. Its first observation is that someone has claimed something, and its typical concern is with what one could possibly be doing by having said that.

    There's more to it than that, and the techniques bear a family resemblance, but that's the gist of it.
  • The Death of Analytic Philosophy
    I've read quite a bit of analytic philosophy, as well as some about its history.

    My own view of the matter is that 'analytic philosophy' ended around 1979 or so, with its last major work being Rorty's Philosophy and the Mirror of Nature. It ended not because it was criticized or replaced – and the latter work is well within the tradition, just at its tail-end, rather than a repudiation of it – but rather because a new generation of philosophers simply replaced the old. There were some people, like say Dummett or Evans, that sort of continued the tradition after that point, but they're remnants lost in the general swarm of change that happened after that.

    The philosophy that has taken place in the Anglo world after 1979 doesn't bear much resemblance to what came before it. I don't know what to call it, but the continuation of the name is fairly superficial. I would say it's a kind of Anglo neo-scholasticism, that represents the intellectual concerns and political interests of the dominant Anglosphere. It doesn't have any interesting central project, or even really a central aesthetic beyond over-professionalization, but all of it reflects the political and social views of its practitioners.

    The article makes the common historicist mistake of assuming that something came into existence when people self-consciously began referring to it as a genre or collecting the works of that genre. But that's nonsense; by that criterion, Tolkien is not fantasy. The real story of analytic philosophy's birth is the fairly boring mainstream one – though you could push its impulses back to, say, George Boole if you like, and find ancient precursors in some Greek stuff.

    I've actually spent some time just looking through old journals, month by month – the collections of analytic papers the author alludes to reflect a pre-existing sociological reality, rather than creating it.
  • Pragmatism as the intensional effects on actions.
    It looks like a verbal question. You could treat meaning that way – but I suspect it wouldn't capture many of the things we mean with the ordinary notion, so why bother?
  • Which books have had the most profound impact on you?
    The World as Will and Presentation – Arthur Schopenhauer
    Outlines of Pyrrhonism – Sextus Empiricus
    Ficciones – Jorge Luis Borges
    The Incredible Shrinking Son of Man – Robert Price
  • Currently Reading
    Benjamin Madley – An American Genocide: The United States and the California Indian Catastrophe, 1846-1873
    Brendan Lindsay – Murder State: California's Native American Genocide, 1846-1873
    Exterminate Them!: Written Accounts of Murder, Rape and Enslavement of Native Americans during the California Gold Rush
    Randall Milliken – A Time of Little Choice: The Disintegration of Tribal Culture in the San Francisco Bay Area, 1769-1810
    Damon Akins – We Are the Land: A History of Native California
  • Israel killing civilians in Gaza and the West Bank
    Yeah. I've been reading lately about the genocides in California. The Californios and Oregonians were also 'just defending themselves,' and so on. The state was emptied of the vast majority of its native inhabitants in just a few decades, with the U.S. military playing a large role. It becomes harder to not see these things if you just have examples of other instances of genocide in history to reference.
  • Israel killing civilians in Gaza and the West Bank
    I think one of the reasons why this conflict continues is the belief that Israel has a special right, or claim, to Palestine (by which I mean the geographical area that currently covers the State of Israel, the West Bank and the Gaza Strip). I'm uncertain whether you share that belief. Statements like those you made I've quoted above suggest you do. The references you make in other posts to a "land conflict" and your criticisms of "ownership" of property suggest you do not.Ciceronianus the White

    Yeah, my guess is a lot of people with a blind spot for Israel have some sort of Abrahamic belief. The current understanding, as far as I know, is that the Hebrews just were Canaanites, and it's questionable whether the united monarchy and first temple are historical. But still, it's worth mentioning – God doesn't give anyone land! It doesn't work that way! If that fact were accepted, much of the talk would be demystified.
  • Currently Reading
    Jesus from Outer Space – Richard Carrier
    The Amazing Colossal Apostle – Robert Price
    The Great Angel: A Study of Israel's Second God – Margaret Barker
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    To talk about "the nature of the mind, the world, language, and so on" would be to talk about the different ways in which we can think about those things, or what are the most plausible or most productive ways to think about those things.Janus

    So to talk about X is not to talk about X, but to talk about how we think about X?

    Even if this were so, philosophers aren't any good at the latter either.
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    Not me, it's the person using the defense. I was just pretending to be retarded = I never meant to find out anything anyway!

    So philosophy 'teaches how to think about things in different ways.' Someone ought to tell philosophers that's what their subject is about – it seems they haven't gotten the memo! They talk about, for example, the nature of mind, the world, language, and so on.

    But of course, this isn't a serious position – people only (predictably) roll out the old 'philosophy's not supposed to do what it claims to do or spends its time doing' when challenged directly.
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    This is the old 'I was pretending to be retarded!' defense. It's news to me that philosophy's not supposed to actually teach anyone anything! But of course, that's just a defensive position, pulled out when cornered. Motte and bailey.
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    The Kantian is no better, in thinking that the nature of the mind, or whatever it might be, can be unlocked in the same way. The object of inquiry is different, but the method is equally ludicrous (and like the pre-Kantian, the Kantian never yields any results, proves anything, etc.).
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    I think philosophy is weird in a way fishing isn't, in that fishing doesn't have a professed aim it manifestly fails at, and has for thousands of years. This is maybe the most salient feature of philosophy, and periodically gets noticed and lamented even by philosophers themselves (who have professional and cognitive incentives not to notice).

    It's also clear why one would think that fishing catches you fish. It's not clear why one would think that the methods of philosophy can unlock general features of the universe – on reflection the idea seems somewhat insane. That's why it's interesting to think about why people might have been led to believe in the methods.
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    I wouldn't think it was that weird, if he just did it once in a while, and maybe to try to get someone to stop fishing. I don't really discuss philosophy anymore except in threads like this about this very topic (and even then, I think I haven't commented here in like half a year), and I don't really read it anymore or talk about it anywhere else.
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    The truth is that, if you're a fisherman, all that really means is that you're good at fishingcsalisbury

    Of course. But I think being good at fishing is a real and useful skill, whereas being 'good at philosophy' doesn't really entail being good at anything, unless you're on the job market in philosophy.

    But is there an OLP of bird augury?csalisbury

    Sort of. History always adduces random skeptics of prevailing doctrines. The point is that they're ad hoc, and have no special gift – things just 'click' for them. I actually do think philosophy is losing its popular prestige, and while philosophers are smug about this in a weird way, I think part of them knows that they don't know how to justify their existence, and the people who make 'crude' criticisms of it can't be kept at bay forever, because they are, at their kernel, correct.

    Ok, maybe. But really think about that. Imagine someone told you back in the day that the stuff you were getting into was bogus. Imagine you didn't work through this stuff, but were nudged away from it. Good anthropologically, maybe. But would you feel as confident saying it's hokum?csalisbury

    It depends on the social context. One reason I don't have to be nudged away from, say, flat-earth theory, is because I grew up in a context in which the reasons it was inadequate were obvious enough that trying to adopt it would be a huge affront to my ability to make it through the day (I would need to make sense of how my plane trips worked). You could imagine a world in which we just know enough about the way our own language and cognitive faculties work, and this was such an ambient part of an ordinary person's knowledge, that the idea of adopting philosophy would look as silly as adopting flat-earthism or bird augury.
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    You can talk to a fisherman about life shit and they will have a lot to say, right? (& yeah, this is classic liberal wisdom-of-the-working-man pap that has been around since at least Wordsworth. Nevertheless it's true, I spend a lot of time talking to fishermen) A lot of what they're saying is going to, occasionally, take a philosophical flavor; the tuned-in philosophy brain will take this stuff into a mental vestibule, without letting it into the main room. You understand what they're saying, but you see the mistakes they're making.csalisbury

    I'm not sure people who are educated in philosophy are doing better than fishermen, such that they have any special insight into the 'mistakes' they're making. There are a couple very basic things of informal logic, sure. But other than that, I genuinely am not sure that philosophy really grants you a skillset – sure, it allows you internally to see what someone is doing wrong 'from the point of view of philosophy,' so you can be a snickering grad student on Twitter laughing when someone claims not to give a shit about the is-ought gap (but Hume is so important! You've never read him! etc.) or says no one should give a shit about Quine or whatever (but you don't understand!), but I honestly think those grad students don't really know anything. They are privy to a certain bunch of books and rituals for talking about them, but do they know anything more than the fishermen? Are they able to think better, avoid mistakes they make? I think not. And further, I think philosophy actually makes you capable of making mistakes the likes of which you'd never dream of if you didn't get into it – I really do think that to a large extent it makes you think worse, because it introduces you to malformed thoughts and gives them prestige. Reading Heidegger literally makes you dumber (I've witnessed it). Just like, say, gorging yourself on New Age books (and taking them seriously) makes you dumber.

    So what's happening? Here you can take the historical or anthropological perspective and kind of break down what's going on. They are doing an anthropologically known activity that you are not, or are no longer, doing. Ok. But fold it back on itself. A martian anthropologist, or whatever. What's happening when you're recognizing, from your post-philosophy anthropological perspective, what they're doing? You listen and nod, but for a canny observer, who knows how you act in other situations, it's clear you're not expressing real agreement. For the martian anthropologist, this looks a lot like someone who, idk, is familiar with olympic-level athletics, tolerantly observing a sub-olympic performance. Advanced biometrics and the deal's sealed - this is someone simply tolerating a performance they know is lacking.csalisbury

    I think if you can see through it you do have a sort of skill, but I don't think it's like being a great athlete, or being smart, or something. I think it's a kind of fluke. The comments of the late Wittgenstein were essentially due to a troubled mind, and not indicative of the wider stream of philosophy (this is why today he is more of a saint than a 'researcher'), and a few people picked up on them, along with the English common sense, and came to some realizations. It really happened by accident – the insights weren't assimilated by the broader public or academia, and they won't be in the future. Same for people who for whatever reason happen on this stuff later and are cognitively primed to see it. I really think it's happenstance, much like the skeptic who achieves ataraxia by accident according to Sextus.

    So basically it's like being the weirdo in Rome who thinks that reading the bird guts is all a bunch of hokum – there are people who knew it was nonsense, but they could never gain much social currency, and the reason they knew was probably accidental (smart people believed it, educated people believed it, etc.). Now, it may be that in the future, it becomes obvious to everyone that philosophy is all a bunch of hokum (which it is), in the same way as it's obvious that bird augury is a bunch of hokum. But that would come as the result of changes made elsewhere – better competing paradigms for explaining and manipulating the things philosophy is 'supposed' to answer for. Maybe that will happen, maybe not. But there is a reason we are susceptible to philosophy, just as there's a reason we're susceptible to augury.
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    I honestly haven't read as much of either as I'd like to have. I have been trying to make an effort, though, especially with history as regards my home state (California), and it has been a fairly eye-opening, disturbing experience (the history of California is a bizarre, violent, tumultuous, and sad one). It gives me a bit of vertigo, to learn about real things – but I think once you get the taste for it, the fantasies just don't satisfy anymore. The explanations for why the world is the way it is often have a clear genealogy, and history undermines philosophy to the extent that the former often explains the latter, but rarely if ever vice-versa (philosophy is unseated as being a primary, or deep, form of inquiry).

    I'm coming at this from a different angle, but I don't want to push it either. As a student of anthropology and sociology, you're familiar with the dynamic of landed vs aspirational classes (the way in which the landed color the aspirational). I'm talking about something like that.csalisbury

    Hmm, I'm not sure what you're getting at. Are you saying that philosophy comes from the 'landed' esoteric tradition, and it's not possible to shake it off?
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    I have no issue with just discussing things, but philosophy doesn't do it self-consciously, so it always ends up in its old traps. A more clear-eyed discussion might be possible, but I think it would lack the core features of philosophy.

    If you're really interested in knowing how these things go, my advice would be: read history, read anthropology. Insofar as you read philosophy, read it historically, the way you read about reading entrails. Don't actually do it! [And knowing more about history and anthropology makes philosophy less appealing cognitively, I think inescapably – it persists in ignorance].
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    Witt will talk a lot about philosophy not being able to work, get traction, be anything but a house of cards, a fly in a bottle. Heidegger and Emerson say that the more we grasp (for certainty) the more slips through our fingers; our acceptance of only narrow criteria for knowledge blinds us to our more varied lives.Antony Nickles

    My view of philosophy is a bit more prosaic. It's just a bad method of inquiry, based on misconceptions that we have no reason to bind ourselves to anymore. It's like entrail-reading to try to see the future, say. We just don't really have a reason to do it anymore.
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    I think the chief achievements of OLP are at a meta-level: it was the site of the invention not only of metaphilosophy (including the journal of that name, which is still going to this day and quite good), but also of metasemantics, that is, the search for the conditions under which expressions become meaningful, and what it is for something to be meaningful. Granted, this was under the guise of providing a very specific metasemantics, adopting the Wittgensteinian maxim distorted through Moore, but this was the first time in the specific tradition they were working in that it was done.

    You can see precursors to it in the early analytic concern with meaning, especially the positivist conditions on intelligibility, but the positivists never asked the question in such an explicit way – not which sorts of things were meaningful, but what it even meant for something to be meaningful, and how this might be made intelligible in terms of actual linguistic practices. This is a very powerful move, and one that I take to be 'naturalistic' and 'anthropological,' as opposed to the kind of (what I take to be) misguided neo-Kantian attempt to look for the origins of meaning in transcendental conditions that, say, Habermas fell into. The OLPers had a view of the foundations of meaning on which the foundational conditions were not coherently deniable from within, since you made use of those very conditions ('ordinary language is correct language'), but which were themselves multiform and contingent (something like the shifting riverbed).