• Two ways to philosophise.
    many roads to the same locationCount Timothy von Icarus

    I like the roads. That's nice. But of course the real trouble is that we must choose not knowing where each road leads. They all lead somewhere, but is it where we want to go?

    And the two metaphors combine naturally: how do you know if some place is a place you'd like to go until you've been there? Do you decide based on what other people have said about it or what?
  • Two ways to philosophise.
    I haven't seen any way the normative question can be foreclosed on. And indeed, if it was foreclosed on entirely, and we said there were absolutely no better or worse epistemic methods, that seems to me to be courting a sort of nihilism. But neither does the existence of the normative question require "contextlessness" to address.Count Timothy von Icarus

    This is exactly why I moved to anchor the normative question to relations among or transitions between given epistemes (worldviews, frameworks, ideologies, whatever).

    People move from St Louis to Kansas City. But if you live in St Louis, how much do you really know about Kansas City? How do you decide? Can you see Kansas City as a native does before you move? Do you need to? It's well known that an outsider might see what's good about a place that the locals take for granted, but an outsider might not see at all what the locals love about a place. Is there available to everyone, no matter where they live, a reliable method of judging the value of a place? Is where you live now relevant at all?

    My plan for making the normative question more tractable was, instead of asking whether St Louis is better than Kansas City (or, in analogy to the science issue, whether they are the same kind of city), to ask, if I live in St Louis, should I move or stick? And the same if I live in Kansas City.
  • Two ways to philosophise.
    I am wary of the word "thingies."Leontiskos

    It was intended as an abstraction; if it doesn't hold up, I wouldn't mind or be surprised.

    Now note that you have to omit "and only" from (a) if (a) is not to collapse into (b).Leontiskos

    I don't think so.

    Your preliminary answer to Q3 was, "Yes-ish ― this one is in some ways too easy and too hard." Now is it too easy when we ask what is common to the sciences, and too hard when we ask what is restricted to the sciences? Or is there a different reason why it is "too easy and too hard"?Leontiskos

    My understanding was that if you're intent on policing the boundary between science and something else (art, sport, pseudoscience), you want to reliably pick out all and only sciences; you want necessary and sufficient conditions for some activity counting as science. I don't think you can have that. The "necessary" part is too easy -- "done by sentient beings" for a start. But that doesn't help much in narrowing the field of candidates. The "sufficient" part is too hard because of the diversity of methods and practices. Not all sciences perform experiments, for instance, or they have to define "experiment" quite differently. (The universe is a population of 1, so cosmology has a problem right out of the gate.)

    the same metaphor applied differentlyLeontiskos

    It was one of Wittgenstein's metaphors for how family resemblance concepts work.

    I.e. "common thread" vs. "binding thread." Or, "Is there some thread common to all the sciences?," vs. "are there threads that run through the sciences and through nothing else?" Note that the first question is neither about the necessary or sufficient conditions of science. It is simply about whether there are things that all the sciences share.Leontiskos

    And of course there are.

    The idea that this criterion must therefore have to do with "necessity" is bound up with (Kripke's, among others) modal essentialism, which I don't find helpful.Leontiskos

    It's just another way of talking about "all and only", just quantifiers. It doesn't implicate modal logic at all; on the contrary, the modal box and diamond thingies are just quantifiers understood to range over possible worlds. That's literally all they are, restricted quantifiers. So you've got this backwards.
  • Two ways to philosophise.
    why we would want such a liberation?Moliere

    I tried to suggest two reasons: one identifies your ideology (etc) with dogma and delusion, which prevent you (as @Wayfarer notes) from seeing things as they are; the other is transitional, and based on the intuition that to put on a new pair of glasses you must be able to take off the old ones. In the latter view, you might, in that moment where you have no glasses on, not see perfectly but rather not see at all (Kant's "intuitions without concepts").

    Your first paragraph is close to my view, that reason serves the social function of comparing different views so that we can triangulate our perspective on the world using the perspective of others. We are naturally adept at two things: rationalizing our own views and finding fault with the views of others. You can leverage that. And if you institutionalize and formalize the process, you get science. Roughly.
  • Two ways to philosophise.
    science doesn't have a monopoly on any of the strands of the rope that binds the sciences togetherSrap Tasmaner

    Kicking myself for not noticing you had already used the same metaphor:

    perhaps we're asking if there is some common thread between the two paradigms in which the shift is effectedLeontiskos

    And the answer is almost certainly yes, but what's common is only part of what makes both science, or both the same science, or whatever, so it's not the whole explanation. Anyway, that's my hunch.
  • Two ways to philosophise.
    We can also think about this in terms of commensurability and communicability.Leontiskos

    As if @Banno won't already be exercised enough by my use of "conceptual scheme".

    I have very mixed feelings about the issue of "commensurability" but yeah, I would like everything you mentioned to be on the table. I think it is perfectly reasonable to ask whether any of us can truly understand the ancient Greeks, say. I think it's perfectly reasonable to ask a question like that even if I were later convinced that it's in some way a defective question.

    An anecdote

    I once saw a small flock of birds attempt to perch in a very small yellow-leaved tree. It was too small for all of them to light so they sort of swarmed around it, some finding a spot then taking to the air again moments later. They gave up after maybe five or ten seconds and set off to find a better spot, and left behind a nearly bare tree, the beating of so many wings and jostling about of all these little birds had caused nearly every leaf to fall. I felt, just for a moment, as if I had seen the tree ravished by Zeus, who had taken the form of a flock of birds.
  • Two ways to philosophise.
    So now we are asking, "Are there [paradigm/framework/worldview/evidence regime/language game/scheme]-independent standards?"

    Is that the question you want to ask?
    Leontiskos

    Yes, that's the idea ― and I'm glad it's clear enough despite me mixing up the numbers. (Anyone who found the post deeply confusing should reload to see my edits.)

    We landed at some point on questions like this: Are all narratives acceptable? I think it's clear no one wants to say that, but they mean different things when they answer. I understand the impulse of the question; when I was young and discovered Science, or when I was somewhat older and discovered Logic, I thought they were tools especially useful for ruling things out. But I'm older now, and I can't help but read that question and ask, acceptable to whom? in what context? for what purpose? And I understand the question as intending to be taken as "acceptable full-stop," or, if need be, "acceptable to Reason." And I can't help but wonder if anyone is ever in a position to stand nowhere and choose which town to go to ...

    Hence my plan of grounding the question instead on the relations among thingies: how do you, given that you're currently in St. Louis, decide whether you might like Kansas City more? Whether Kansas City might be better (in some sense you could give substance to)?

    I'm not immune to the claims of reason as the great tool of liberation from dogma and delusion ― and have disconcertingly frequent occasions lately to wish fervently for its wider embrace. Everyone knows (well, almost everyone, around here at least) that reason has a shockingly poor track record ― despite its PR ― as a tool for freeing people from dogma and delusion; but perhaps that only really applies to modern man in his natural state, not to reflective man trained in the use of reason (i.e., us). Sadly, I for one would expect a lot more consensus to have emerged in philosophy if that were so.

    In other words, I want (1) to be an open question. Maybe my youthful faith in reason was warranted. Maybe not.

    I don't want to, but in the interests of comity I will also answer your questions ― with the proviso that I'm not altogether happy about my answers.

      Q1. No.
      Old2. Obviously.
      Q2. Evidently, and probably not.
      Q3. Yes-ish ― this one is in some ways too easy and too hard.
      Q4. No, and I intend this to be the same as Q1 ― a sort of "persistent context".
      Q5. As with Q2, evidently.
      Q6. No, and "false prison" is just rhetoric (drawn from a nice book about LW).
      Q7. Same as Q3.

    Here's my problem with The Criterion of Scientificity: what we're talking about is behavior, and largely social (rather than cognitive) behavior. What makes what you're doing science is, primarily, how careful you are about your work and your willingness to submit it to the review and criticism of others, but there are a number of other important points (the construction of an explanatory framework, for instance), and I think (a) we are really talking about a classic "family resemblance" here, where there are a great many criteria in play, an evolving set, and you won't find all of them or a consistent subset that identifies all and only science, and (b) science doesn't have a monopoly on any of the strands of the rope that binds the sciences together ― which is why identifying a few things common to all scientific practice (as I confidently do above) is not quite enough to identify only science (necessary but not sufficient). Every science may have this collaborative aspect I'm so insistent on, but so do other things; you need that plus a healthy subset of the other traits of science, which themselves are traits not exclusive to science. (Do painters not engage in careful observation? Do painters, on occasion, not observe and paint the exact same object under varying conditions? Etc.)

    It is clear that people sometimes leave St. Louis and light out for Kansas City. It is possible, and the question is, first, what enables that move, and, second, how does anyone judge whether it was a good move, or the right move? (Anyone might be that person moving, someone who stayed behind, someone who already lived in Kansas City, or someone who lives in Chicago.)
  • Two ways to philosophise.
    I've been dithering about whether to get back into this. I've been looking for a way to do so without simply playing partisan to one side.

    There are two questions:

    1. Are there context-independent standards?
    2. Are there context-dependent standards?
    Leontiskos

    I suppose we all agree the answer to (2) is "yes", though we may choose to interpret the question differently, hedge in various ways, and so on.

    The conflict here is certainly about (1).

    I would like to see this approached as an open question, but I'd like to frame it in a particular way, as a question about (2), upon which we all agree.

    Now, I've never read Kuhn, though I've been familiar with the gist of the original argument for years. We all know that the issue he addressed was the nature of paradigms in scientific research, and the replacement of one paradigm by another, which, he claimed, was never a matter of new observations invalidating one paradigm and ushering in another that was more adequate.

    That's close enough to what I have in mind, only I'd throw in every sort of framework, worldview, evidence regime (or whatever it's called, @Joshs has mentioned this), and so on. If you like, you could even throw in language-games.

    I'm not wedded to any particular view here, but I think it's simply a fact ― interestingly, a fact about our culture ― that since the rise of cultural anthropology, in particular, we are all of us now more knowledgeable about the existence of views quite different from our own, and have grown more sensitive to those differences, which shows up, for instance, in the way we talk about history now (the past as another country). A certain sort of relativism comes naturally to Western Educated Industrialized Rich Democratic people.

    We are also by now smart enough to know that the sort of walled gardens imagined by early structuralists are a myth, and that worldviews (et very much cetera) are not static, either.

    So here's how I would want to address question (1): is there some mechanism available for prying yourself out of a given scheme/worldview/framework, and is that mechanism the use of reason? We might see this as a step required for the change or evolution of a worldview (though not the only way), or as a mechanism for shifting from one paradigm to another, Kuhn be damned.

    So there are two ways it could be anchored to issue (2): either (a) as what connects one thingy (worldview, framework, conceptual scheme) to another, or changes a thingy noticeably; or (b) as something that enables you to free yourself entirely from the false prison of all thingies.

    I want to add that it seems clear to me that the project of the Enlightenment hoped that reason could pull off (b), and much follows in its train (reason is the birthright of all, no one need ever again be beholden to another in areas of knowledge, and so on).

    (With the discussion of pseudoscience, I found myself thinking about alchemy, and the place it is given nowadays as a crucial forerunner of chemistry; while its theory may leave something to be desired, its practice was not without merit. So how does chemistry emerge from alchemy? Was it the application of reason?)

    So is it possible to set aside all worldviews, frameworks, and schemes, by the use of reason? (To achieve, in that much-reviled phrase, a "view from nowhere".) Is reason the crucial means by which one jettisons the current framework for a new one? Or is there something other than reason that can allow such transition or liberation?
  • Two ways to philosophise.
    this is the middle-ground position that I'd recommendJ

    I'm still catching up on the thread, but fwiw I want to express my appreciation of this series of posts of yours, and throw my support behind your views, to no one's surprise I expect.

    If I wanted to formalize it a bit, I might say that we're not advocating the abandonment of criteria tout court; useful, meaningful criteria (of value, of truth, et bloody cetera) are both local and modifiable. Local here meaning capturing as much of the context of their application as needed. (A question like "Is this a good car?" has no answer or too many without context.) Modifiable meaning that if your criteria can't evolve or aren't open to challenge or debate, you're doing it wrong.

    And I think the counter, the demand for universality, permanence, certainty -- which will attack even what I'm saying here, "Are criteria always and everywhere like this? Then you're contradicting yourself!" -- should just be ignored as juvenile. This is not how serious people think. It's like lecturing Jerome Powell after taking Econ 101.

    Anyway, some nice posts @J.
  • Two ways to philosophise.
    It does often seem like there are people here who are trying to understand what others think, and others who want everyone to think like them.Tom Storm

    One of those camps is dramatically larger than the other.

    I remember a little cartoon, taped to a terminal on the checkout counter at the college library. A guy, resting his head in his hand and gazing at a computer terminal, and he's saying, "Gee, I wish you could talk. I'd love to know what you're thinking." And there's a thought bubble for the computer, which is thinking, "I wish you could think. I'd love to know what you're saying."
  • Some questions about Naming and Necessity


    I will try to get to the big can of worms you opened later tonight.
  • Some questions about Naming and Necessity
    Sadly, I have lost my note of where I got this story.Ludwig V

    Saint Anselm? I'll have to google now.

    Ambrose!
  • Some questions about Naming and Necessity
    Why couldn't it be true that we need reference equally to talk to ourselves? I'm not even sure that your version would be true as a genetic account -- who knows which came first, private naming or public discourse, or whether they were simultaneous?J

    It should be clear from other posts that I agree we do not know, and may not be able to know.

    But I am still a partisan of the communication first view, or, rather, shared intentionality and cognition first. A lot of that I get from Tomasello. I was playing with my granddaughter last year after watching one of his talks and it's shocking how obvious this is once you look for it: I roll the ball toward her and she glances up at me then back at the ball until she traps it in her pudgy little hands and immediately her face pops up to look at me. (Did I do it right? Is this how we do it?) Then she focuses on the ball so she can roll it toward me and as soon as she lets go, her face pops up again to see, again, if she's doing it right. It's constant. We start as early as possible learning to see the world through the eyes of our caretakers. I think talking builds on and elaborates this fundamental orientation of ours toward communal cognition.
  • Some questions about Naming and Necessity
    We really don't know.frank

    I agree.

    I hope no one will take the forcefulness with which I'm expressing my view to indicate dogmatism. I could be entirely wrong.

    Honestly I think I'm inclined to push this sort of inside-out approach just because so much of our tradition presumes the opposite. I'm curious to see if other approaches might be enlightening.
  • Some questions about Naming and Necessity
    That's not what's private about private reference -- rather, I'm arguing that it's the independence from "triangulation" or the need to have a listener comprehend the speaker's reference.J

    And I'm suggesting that this "independence" is to some degree illusory, in two senses: the sorts of things you think are the sorts of things you could express, whether you do or not; and secondly, they are that way because you learned how to think from other people.

    Roughly, I want to convince you to feel, behind every thought you have and every word you utter, millions of years of evolution and hundreds of thousands of years (at least) of culture. The thoughts and words of countless ancestors echo through your thoughts and words. Every time you choose as the starting point for analysis "What am I doing all by myself?" that's a mistake. It's the tail wagging the dog.
  • Some questions about Naming and Necessity


    What I'm saying is that we only have something we call "reference", the thing that we do with referring expressions like names and descriptions, so that we can talk about things with other people. More than that, our individual cognitive capacities are shaped by our interactions with other people, so the sorts of things we want to talk about are already the objects or potential objects of shared cognition.

    And I think our referring practices are shaped by the goal of achieving shared cognition. In conversation, both speaker and audience contribute: the speaker says what they believe will be enough to direct the audience's attention, expecting the audience to draw on whatever they can to "fill in the blanks" (context, shared history, reason).

    Why does any of this matter? Because words are a "just enough" technology that evolved for cooperative use; a word, even a name, is not something that carries its full meaning like a payload. Words are more like hints and nudges and suggestions. They are incomplete by nature.

    And so it is with using them to refer. We should expect that to be a partial, incomplete business.

    I think it's tempting here to think of this on the analogy of regular human finitude: in our minds we pick out objects to talk about and we do so perfectly, completely, but words are imperfect and ambiguous and are kind of a lousy tool for communicating our pure intentionality.

    I doubt that story, but about all I have in the way of argument is that our cognitive habits and capacities are shaped by just this sort of good enough exchange. My suspicion is that we largely think this way as well. And this makes a little more sense if you think of your cognition as overwhelmingly shared, not as the work of an isolated mind that occasionally ventures out to express itself.
  • Some questions about Naming and Necessity
    Surely Robinson Crusoe did some private referring!J

    I know you're kidding, but that's clearly the wrong test case. He was taught to refer to things using first oral and then written language. Even gesturing at things is learned behavior.
  • Some questions about Naming and Necessity
    whatever it is I'm doing, privately, is not an example of referring.J

    I just don't think that follows from anything.

    Every time someone argues that blah is born out of social practices which continue to support and inform it, someone will say, "So if I privately blah, in my mind, you're saying it's not really blah?!"

    No, of course not. It's why I tried to make clear in that post that both views of language at least attempt to end up with both social and private uses.

    Here, consider reading. Famously, reading used to only be done aloud. To this day, children are overwhelmingly taught to read aloud: your teacher tells you, out loud, what sounds the letters make; the student demonstrates their ability by making those sounds out loud. It is how this knowledge is transferred to the next generation. It makes clear the relation between our use of oral and written language.

    Would anyone then conclude that reading silently is not "really" reading? No.
  • Some questions about Naming and Necessity


    Yeah that's quite interesting, and I think both (yours and mine) represent types of triangulation.

    A further curiosity is that parasitic reference has to be self-consciously contrastive, so it's the sort of thing a parent can engage in; on the other hand, children are said to be learning when they manage this sort of "playing along," "calling things what you call them," but they lack the distinction between the two ways of doing this.
  • Some questions about Naming and Necessity


    I'll dig it out. I think I know what box it's in.
  • Some questions about Naming and Necessity


    I think broadly you'd expect, and can find exemplars of, two ways to go on this, as usual:

    (T) Language is, first, a system for organizing your thoughts; secondarily we developed ways of verbalizing our linguistically structured thoughts to each other, for obvious reasons.

    (C) Language is, first, a system of communication, an elaboration of the sort of signaling systems many other species employ; secondarily we developed the ability to "internalize" an interlocutor (perhaps imaginary) and to use language to organize our thoughts.

    A whole lot flows from this fundamental difference of approach. I'm not sure there's a reasonable means for choosing between them, but I tend to think what evidence there is favors (C).
  • Some questions about Naming and Necessity
    Can you recall a reference for this?Banno

    It's on the first page:

    Say something that requires a missing presupposition, and straightway that presupposition springs into existence, making what you said acceptable after all. (Or at least, that is what happens if your conversational partners tacitly acquiesce -- if no one says "But France has three kings!" or "Whadda ya mean, 'even George'?")

    Complete text available at the David Lewis papers.
  • Some questions about Naming and Necessity


    There's actually a funny issue with non-response I've been thinking about, since 's entreaty that I stick around. It's one of the things Lewis talks about in Scorekeeping, if I'm remembering correctly.

    Suppose you ask me who that guy is holding the glass of champagne, and I realize you mean Jim, but I happen to know Jim is holding a glass of sparkling cider. I could silently correct you and just answer "That's Jim," but in doing so I will have implicitly endorsed your claim that Jim is drinking champagne.

    We are again in the territory of farce.
  • Some questions about Naming and Necessity
    You got the reference to Quine, but Srap didn't.frank

    Does this sentence strike anyone but frank as plausible?

    Sometimes @frank I just don't see the point in responding. I'm sure you understand.
  • Some questions about Naming and Necessity


    I'll add one little note, relevant to the issues raised in the OP about essential properties.

    In the collected papers of Ruth Barcan Marcus, there is a transcript of a discussion between Marcus, Quine, and Kripke, who was (iirc) at the time maybe not yet 20, and I forget who else. Anyway, I remember a specific exchange where Quine said that Kripke's approach would require bringing back the distinction between essential and accidental properties, and Kripke agreed, but didn't consider that the fatal flaw Quine did.

    I think there was some bad blood later, Marcus or people on her behalf claiming that the causal theory of names was stolen from her.

    Anyway it's interesting to see Quine's star student (and then later Lewis) already plunging into waters he was deeply apprehensive about.
  • Some questions about Naming and Necessity


    Right right. It's been years since I read this. I've got nothing to contribute on "what Kripke would say" so I'll mosey along.
  • Some questions about Naming and Necessity
    whether the statement-type designation -- "He is the person about whom I say . . ." -- is rigid.J

    It just seems obviously not to be.

    1. It has an indexical in it. I think that rules it out from the jump.

    2. As phrased, it names a class of actual performance, without even a ceteris paribus clause. The obvious way to strengthen it is to shift to talk of dispositions. But c.p. clauses and dispositions have known issues.

    What you seem to want is really an in-between category of "rigid-for-you".
  • Some questions about Naming and Necessity
    I think Srap Tasmaner is basically saying he doesn't think at all when he's not engaging another person. I think he's saying he's not even conscious of the world around him until he discusses it, at which point a sort of negotiated narrative comes into being.frank

    In this case, even to the degree that I am engaging with another person, I am speechless.
  • Some questions about Naming and Necessity


    I'm sure it's my fault. Of course it should be possible to provide an account of what makes names names, what makes them special, what their role in language is, what makes them different, and this is the sort of thing Kripke is up to. Sure.

    But we were also talking about reference as such, and it's clear to me that an account of names in terms of baptism, or words in terms of stipulation, can't also serve as an account of reference but presumes it. If you want to teach someone "blork" means that thing, you have to already be able to successfully refer to that thing. (I think Wittgenstein raises similar objections to theories of demonstrative teaching, as if pointing "just worked".)

    So talk about stipulation and teaching all you like, but it doesn't get you to that level of originary reference you're chasing, the intentionality you cannot be mistaken about. It relies on that; it doesn't explain it or even describe it.
  • Some questions about Naming and Necessity
    If I teach others that my shriek refers to Mr. Champagne, in what way could this reference fail for others, or be mistaken on my part?J

    This question is a non-starter. You're presuming the entire system of conceptualization and language usage is at your disposal, and then all you're doing is in effect introducing a word by stipulation. It is interesting that we can do this, but it doesn't get anywhere near addressing the questions you're interested in.

    We're actually covering similar territory to the memory discussion. My position is that rather than the pure phenomenal experience we overlay with narrative, which we can then strip away, all we've got is narrative. The process you imagine of "stripping away" is real, but creative: it's making a new thought out of the thoughts that came to you not just enmeshed in context, but constituted by our systems of understanding and communicating. I don't think you really have the option to just set those aside and recover some original underlying experience ― you never had access to any such experience.

    The idea, as I conceive it, is similar to Sellars's argument in "Empiricism and the Philosophy of Mind": he allows that there must be some sort of raw inputs to our thinking processes, but denies that they have any cognitive status whatsoever. In particular, they cannot serve the Janus-faced role thrust upon them, linking on the one side to purely causal processes of sensation, and on the other side to our conceptual apparatus of knowledge and reason. Nothing can fill that role.

    As it is vain to seek the primordial unconceptualized experience, it is vain to seek the originary act of referring within a mind that knows no other minds.
  • Some questions about Naming and Necessity


    Well, I'm not even sure what we're talking about now, but it looks like you are trying to create one of Wittgenstein's private languages. You want to have in hand an association between an object and something, a name, a referring expression, or a bit of behavior, and for that association to be something you can't be wrong about.

    I think there are a couple layers to this. One is the apparent incorrigibility of attention: when I think of something, perceived or recalled or imagined, even if I am making important mistakes about the properties of that thing, even if I misidentify it, I cannot be wrong about it being the object of my thought (or intention). In my pitching example, the guy is remembering something someone did, even if it wasn't who he thinks it was or in the circumstances he thinks it was. There is, we want to say, a pure, original, and unimpeachable phenomenal experience underlying the stories we tell about it, even if those stories are all wrong. Even if it turns out the thing you're thinking about, that you think you remember, never happened, it's still what you are thinking about.

    It's a compelling vision, but I suspect it is fundamentally mistaken.

    When we come to language, the act of referring seems somehow to share in the unimpeachability of attention. The additional problem here is that "refer" is one of Ryle's "success words", so when we attempt to describe reference we describe successful reference. The downsides here are that (a) what is genuinely interesting, impressive, or mysterious is the element of "success" rather than something specific to referring; (b) our vocabulary blocks a proper comparison of successful and unsuccessful attempts at reference; (c) by being defined as successful, reference seems to take on the color of incorrigibility we associate (I think mistakenly) with attention.
  • Some questions about Naming and Necessity


    Referring is something done by fiat.frank
  • Some questions about Naming and Necessity


    I'm a little unconvinced by the "about whom I say..." locution, precisely because we're lacking a guarantee that the sentence the speaker utters means what he thinks it means (or "what he intends it to mean" or "what he means by it").

    I know the tendency of this analysis is to brush off mistakes, but suppose you point out to the speaker -- for an easy example, imagine the speaker isn't quite fluent in the language he's using -- that the words used mean the person is a prostitute: you might end up with the speaker insisting that they wouldn't say that! You'll probably want to cover by changing your description to something like "about whom I mistakenly said ..." but that's no help. What, so you *thought* the person was a prostitute and now realize they aren't?! Doubt the speaker will agree to that. Keep trying. (See "A Plea for Excuses".)

    And in the meantime, the speaker has still failed to refer, because once words are in the mix, you're stuck with them; either you trust them to faithfully carry your meaning, as your ambassadors, so to speak, or you allow that there must be negotiation between you and the audience.

    "You know what I mean?" isn't always a rhetorical question, even when intended to be.

    It's as if what we need to say is that when you attempt to refer, you "hope" the words you utter will do the trick -- you could also hope you're using the "right" words, but I think that's secondary. Now what is the audience to do with your hope? How does that help them know what you mean?

    ((There's a reason sitcoms are full of this sort of stuff.))
  • Some questions about Naming and Necessity
    Referring is something done by fiat.frank

    Tell us what you mean by that, and why you think so.
  • Some questions about Naming and Necessity
    maybe a better way to understand this is "The man over there who I think has a glass of champagne in his hand." That way, the description is not wrongJ

    Unless it is. This is such a great example because the reference of the word "champagne" is regularly disputed. Are you using the word "champagne" "correctly"? Are you sure? Is there definitely a correct way?
  • Some questions about Naming and Necessity
    Reference is set by the speaker.frank

    I don't think it's that simple.

    In cases where the speaker is mistaken, memory being what it is, it is possible for them to learn what they are trying to refer to.

    (Example:
    "When Maddux was pitching the last game of the World Series --"
    "Maddux didn't pitch the last game; Glavine did."
    "Okay then Glavine. No, wait, I know I was thinking of Maddux, so maybe it wasn't the last game I was thinking of ..."
    And this can go on. It might turn out the speaker was remembering yet another pitcher he had mixed up with Maddux. It might or might not have been a World Series game.)

    The other problem is that even if we say the reference is whatever the speaker intended, besides the problems already suggested above, intention not always being perfectly determined, we have the additional problem that words don't just mean whatever you want them to. The speaker has no choice but to engage in the grubby business of negotiating with the audience to achieve successful reference.

    Grice noted the complexity of our intentions when we speak to each other, even in the absence of confounding factors: not only do I intend you to understand that I mean X by saying Y, I also intend you to recognize that I so intend, and I also intend you to recognize that I intend you to recognize that I intend you to understand I mean X by saying Y, and on and on.
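    Just to show how mechanically that regress builds, here is a toy sketch of my own (not Grice's formalism; the X and Y are placeholders) that wraps each layer of intention around the last:

```python
# A toy rendering of the Grice-style regress of intentions: each
# layer wraps the previous one in "you recognize that I intend ...".
# Purely illustrative; "I mean X by saying Y" is a placeholder core.

def gricean_intention(depth, core="I mean X by saying Y"):
    """Build the nested intention statement to the given depth."""
    statement = f"that you understand that {core}"
    for _ in range(depth):
        statement = f"that you recognize that I intend {statement}"
    return "I intend " + statement

print(gricean_intention(0))  # the base case: plain speaker meaning
print(gricean_intention(2))  # two layers of mutual recognition deep
```

The point of the sketch is only that each added layer is the same operation applied again, which is why the regress never bottoms out on its own.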

    So, no, I can't agree that it's just a matter of the speaker "setting" the reference, as if the audience were superfluous.
  • Some questions about Naming and Necessity


    I would only add that "the one holding a glass of champagne" is said for the audience's benefit. I can look at someone and silently think "asshole" and I know who I mean, no hoops jumped through. If I gesture, for you, at someone and say "asshole", you might need clarification about which of the people over there I disapprove of. Hence "the one holding ..." or even "the one holding - what is that? Is that champagne?"

    Point being it's not exactly a matter (de dicto) of what the speaker thinks, but really of what the speaker thinks the audience will think. "The guy holding what you would probably think is champagne, but I saw the bottles and ..."

    I can't remember if Kripke gets there, and I'm not looking at the book, sorry. But reference is a matter of triangulation, not just of the speaker alone or of what she speaks of.
  • The Phenomenological Origins of Materialism
    the job of the human sciences is not to explain but to interpret and understand.J

    Which is fine, I've just been avoiding committing to some major difference between the natural sciences and the human or social sciences, because I've been trying to clarify ― or insist upon or defend or something ― that there is some genuine continuity, that the political scientist is as much a scientist as the physicist. I'd like that point to come out similar to saying that a biologist is just as much a scientist as a physicist, which most people will agree to without a moment's thought, but I think it's obvious there are ways in which biology had a much harder time making progress than physics. We got the theory of evolution before genetics. We had the number of human chromosomes wrong ― even once we had a number ― until 1956.

    To your point, part of my point earlier was not to assume that what makes physics science is everything about physics; some of that may apply only to physics, or only to the natural sciences. So I'd be open to saying even the expected results differ, that we want explanations from the natural sciences but interpretations from the human sciences. That may be. Where I've been hoping to link them is in the process enacted to produce whatever kind of knowledge they produce, all that business about careful procedures and communal self-correction. It wouldn't bother me if there were sciences about different things that produced different sorts of results, so long as they were producing those results using a process that would be recognizably science to a scientist in any field. That's awfully idealized, I know, but think about what a sociologist could tell a chemist about the care with which he collected his data and the statistical analysis he performed on it; the chemist would recognize a brother scientist at work, even allowing for the great differences in their fields.
  • The Phenomenological Origins of Materialism
    I'd want to say that those tiny moments of musicality shouldn't be notated, even if they could be.J

    Agreed. I suppose I shouldn't have put it this way because I was thinking of the musicologist not the musician, someone who is analysing a performance rather than creating one.

    I'm of two minds about this talk of "having enough data" I keep using, here in talking about music and above in talking about the social sciences.

    There's a great forgotten book called The Road to Xanadu by John Livingston Lowes (iirc) in which he traces every image, very nearly every phrase and every word, in two poems of Coleridge ("Kubla Khan" and "The Ancient Mariner") to sources in Coleridge's library. It's not an "explanation" of the poems; I believe the point Lowes made (and I may misremember) was that in a way knowing all this only deepens the mystery of Coleridge's creativity in taking all this material and making these things from it. It's not like you could train an LLM on Coleridge's library and then say, "Write me an astonishing poem," and out one would pop.

    (Coleridge being a particularly ripe case, as Eliot described him, a man visited by the muse for a while, and when she left, he was a haunted man. Coleridge himself didn't understand what had happened.)

    So, part of me does want to say that there can never be enough data to explain, much less predict, human action, and certainly not unlikely human action like creativity. The "human sciences" would then be marked either by arrogance or folly, as you like. I could be old school, I'm old enough.

    But I'm not convinced. That attitude strikes me unavoidably as a rearguard action, defending human nobility against a godless and disenchanting science, that sort of thing.

    Instead, I think it's simply a fact that the data needed, and the theory needed, are evidently beyond us, and so we must make do and aim a bit lower in our expectations, or at least be more circumspect in our claims. When it comes to scientifically informed debates over social policy, for instance, we sometimes know enough to do better, but still less than we think we do and so some caution is advisable.

    We know a lot of what was swirling around in Coleridge's head, but not all of it, and we know something about how his brain worked, because it worked like ours, but the specific historical process that took those inputs and yielded those outputs is unrecoverable.

    So it is with any musical performance. I'm inclined to say that one of the reasons the musician played this note this way is because of that time she wiped out on her bike when she was 8. That might be a big enough factor to make it into her biography ― if, say, she broke a finger that healed in a way relevant to her playing. It might be a kind of emotional turning point for her, if it nudged her attitude toward risk a certain way. It might be an infinitesimal factor, no more or less relevant than the peanut-butter sandwich she ate that day, but all of which went into making her the person who produced that performance.

    We're talking really about what God knows about her. When God hears that performance, does he smile slightly and connect it to that skid on her bike? God has all the data, so how does he understand the world and the people in it?

    It looks like that's the standard for science I have been indirectly endorsing, or if not "standard" then "ideal". Which is a little odd, certainly, but maybe that's fine. In practice, science is entirely a matter of making do, and being very clever about what you can learn and how despite not being gods. I guess.

    Therefore Bayes.
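    By which I mean something like the following toy sketch of Bayesian updating: start from a prior, fold in what limited evidence you have, and end with a revised degree of belief rather than certainty. The numbers are invented purely for illustration.

```python
# Bayes' rule as a single update step for a binary hypothesis H.
# All the probabilities below are made up for the sake of example.

def update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior P(H | E) via Bayes' rule."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# A weak prior, updated on three pieces of mildly diagnostic evidence.
belief = 0.2
for _ in range(3):
    belief = update(belief, likelihood_if_true=0.7, likelihood_if_false=0.4)
print(round(belief, 3))  # belief strengthens, but stays well short of 1
```

That is the "making do" in miniature: no god's-eye totality of data, just a disciplined way of saying how much the evidence you actually have should move you.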
  • The Phenomenological Origins of Materialism


    Yeah I think there's a trick to that story, that it does mean it's too hard to sight-read.

    But then I also think about the difficulty of notating jazz correctly. And I think about Jimi Hendrix, who seems to add some tiny bend or flutter to almost every damn note -- how do you notate all those micro-decisions? And so it is with any great musician, there are all those millisecond decisions that go into the performance, all those tiny variations that distinguish a good performance from a great one.

    Now, should we say there is no hope of a scientific approach to great musicianship? I actually don't think so. I think the point is that vastly more data is needed than you might at first think, certainly more than you would think if you looked even at a complex score, which is a great simplification of what a musician actually does.

    Any of that make sense to you?