Comments

  • Does Phenomenology Consist Merely in Introspection? Dennett and Zahavi on Phenomenology.
    I think this raises a question as to what the analytic tradition consists in if not some kind of introspection and synthetic a priori analysis, that is some kind of phenomenology. I mean it doesn't seem to be doing empirical science.Janus

    Yes, I think the idea is quite simply that if it’s introspection then it’s not science, and there’s an optional detour through philosophy. (If introspection, then philosophy, and if that then not science.)

    There is still a part of the world in which introspection is considered a good thing, a praiseworthy thing, perhaps even a thing one ought to do or one must do to lead a rich and fulfilling life. (We get threads about it here.) Still, it’s not science, which makes it — from a point-of-view I’ll cheerfully admit is made of straw — something like a ‘hobby’, all well and good but not something ‘serious’.

    I don’t think this puts only the “analytic tradition” in question. What was Aristotle up to? Or Kant?

    Anyway, what you (not you, @Janus) call ‘introspection’ I might just call ‘thinking’ and some people might call ‘reason’. Or ‘reflection’.

    I almost feel tempted to let science win whatever argument it wants to have with philosophy. If science wants to claim it’s the only sound or reliable way of producing knowledge systematically — sure, you can have that; philosophy can produce something else, understanding maybe.

    (In this context, I always thought it odd that the new biologically sophisticated version of psychology ends up being called “cognitive science”, as if knowledge is the only mental phenomenon that matters.)
  • Does Phenomenology Consist Merely in Introspection? Dennett and Zahavi on Phenomenology.
    mere introspectionJanus

    I keep wondering what the force of this 'accusation' is supposed to be.

    Why does it sound so much like saying phenomenology is "merely philosophy"?
  • How Useful is the Concept of 'Qualia'?
    Could be made a better inference with a theory of content determinationfdrake

    I’ve been wondering where “aboutness” is in these causal chains, and how @Isaac proposes to infer 9 is about 8 from 9 was caused by 8. Since the model is not supposed to be descriptive but predictive, you might think intentionality could be captured by saying that 9 points to 1 — which is around the corner from your point about priors. That makes it at least as tempting to say that 9 is about 1 as it is to say 9 is about 8 (or 7 or 6 or ...). Why look back in the chain for meaning, instead of forward? It might also make sense to think of the logging at 9 being something like “Cycle finishing, coming back to 1 now,” or “Now connecting 8 to 1,” or “Got from 1 to 8, headed back to 1.” Don’t we expect to find not a bare, descriptive report logged to consciousness but one with some kind of directedness embedded in it?
  • How Useful is the Concept of 'Qualia'?
    I think human will necessitates little leaps of faith all the timeKenosha Kid

    Well, the phrase has two elements: the leap, and the faith. Leaping is not like walking, a steady, methodical progress from one place to another. Leaping is taking the distance covered by many steps at once; we cannot make our steps bigger, so to pull off such a feat we must actually leave the reassuring solidity of the ground and take to the air, at least for a moment.

    It is a choice sometimes forced upon us. When our walk brings us to a ditch or a small stream, there are three ways forward: there is an imaginary way straight on from this side to the other, as it would be if the ditch were not there; there is a way under, down into the ditch or stream and then up the other bank; and there is a similar way over, through the air, above where we would walk if only we could.

    If we generalize this situation, the ways under and over are not always available. We can imagine thinking as traversing an obstacle course. (In everyday life, there is often a timer ticking, but not in philosophy and only for external reasons in science.) An obstacle blocks the methodical progress forward; it may offer a simple way over (a small ditch to be leaped over), a methodical way over (like a climbing wall) or under (you may have to crawl under something), and so on. Some obstacles may offer a choice — slog through the water or swing across on a rope. Some may offer a false choice — attempt to slog through a deep mud hole, which you will not be able to do, or jump over.

    This is one way of situating leaping: it is a solution to some obstacles but not others, when it was not our preferred way forward. If, on the other hand, you already preferred leaping, because it is faster, and chain leaps together one after another, we don’t call that ‘leaping’ but ‘running’.

    But are we sure it’s leaping-over-an-obstacle we’re interested in? There’s another idiom that seems similar, which is ‘jumping to a conclusion’.*

    * (Kahneman’s line about System 1.)

    If you compare them, it’s clear that we disapprove of jumping to conclusions because it is not a response to an obstacle; you had the option of continuing to make steady, methodical progress but, out of impatience, gave up traveling methodically, selected a destination and simply teleported there.

    No one feels any compunction about leaping when it is called for. But when we are thinking, how do we know when our leaps are a solution to a genuine obstacle — the intuitive leaps of a Copernicus or an Einstein — and when we have simply become impatient and jumped to a conclusion?

    To know, then, whether we should leap, we need to know whether we face a genuine obstacle. That leaping “works”, that it moves us quickly from one place to another, is not in question, but if we did not have to leap, where we land may not be where we wanted to go. But how do we know where we want to land? Because this is in the nature of an obstacle: an obstacle is something you are on one side of or the other. You do not need to see the whole course; you do not need to know what destination you are headed for; you only need to know that it is on the other side of each obstacle you face.

    And here is at least one place where we might see a role for the second part of the phrase, for faith. How do you know the destination is on the other side? Perhaps you don’t; perhaps you only have faith that it is. And I think this is just how people tend to use the phrase “leap of faith” (whether it has anything to do with Kierkegaard or not). That it is precisely a leap to an unknown place. It will be some place, but whether it is the place we hope for is unknown — as, in the simple case, you might hope each obstacle on the course is the last.

    (That some destination worth reaching is on the other side of a series of obstacles, of problems to be solved, has become an article of faith in philosophy. Even Wittgenstein, who makes noises about there being no genuine philosophical problems, implies that he has such a faith in PI 107, the “rough ground” speech.)

    But we’re getting ahead of ourselves. To put all this rambling back into context, before abandoning it again, the question is whether there is a genuine obstacle to taking our everyday experience at face value. There is a long history of philosophical objections to such naivety, and a considerable body of recent scientific objection. But related though they may be, there are two different issues here: one about the facts on the ground, that is, about how we get along in the world; and one about how we are to theorize how we get along in the world. If you object that we have no ‘direct access’ to things — whatever that means — that is a claim of theory, but it is a claim about how we get along, and implies that there is an obstacle between ‘us’ and ‘the world’. Whether you, or your mind, or your brain, know anything about this obstacle, it’s there to be responded to somehow. On the other hand, if you find the claim that there is such an obstacle compelling, that becomes a different sort of obstacle — how can I take my experience at face value, given everything I know about how, say, perception works? You can then say that the theoretical objection is no obstacle at all, but the end of the line; if you leap over it, you'll leap to nowhere. But at the same time, you can acknowledge that this is not the same obstacle that you (or your mind or your brain) face all the time, and *that* one *must* be leapt over. I take it this is close to your position; maybe there's nothing answering to "perception" or "knowledge" as traditionally understood, but we must behave as if there is to get along in the world, and we'll call that a kind of "pragmatism".

    What we need next is a better understanding of what an obstacle is. I wouldn't jump right to "how we recognize something is an obstacle", how we have a certain sort of knowledge, though that's in some sense what we want, so we can say whether there's an obstacle between us and the world that needs to be leapt over. Hopefully we can get to something like that later.
  • How Useful is the Concept of 'Qualia'?
    The call to commit to the objective reality of what you see is a mere leap of faith.Kenosha Kid

    I won’t disagree, but only ask: why should this be so? But that’s too much, too fast. What does it mean to take a leap of faith? Do you know what it means? How? Again, too much. We feel this compulsion to take such a leap, or feel we have already taken it and want to understand what we have done, or we feel that we should above all avoid taking any such leap and are worried that we may already have done so, without noticing. This is all worth thinking about, and I haven’t even gotten to the word “faith” yet, and there’s surely something to be said about that.

    2. All human errors stem from impatience, a premature breaking off of a methodical approach, an ostensible pinning down of an ostensible object. — Kafka, Zürau Aphorisms

    The true commitment to reality involves actively eliminating possible causes of our observations, not just only considering one (falsificationism).Kenosha Kid

    So you have a method in mind that will protect you from an impatient leap of faith. How did you arrive at this method, the method of elimination? If you’re going to talk of causes — of possible causes — of our observations, haven’t you already committed to quite a lot?

    I don’t think we’re in a position yet to say what method can solve this problem — that before us is the possibility of a leap of faith and we are resistant, perhaps with good reason, to taking it. I don’t know how to solve such a problem. I don’t even understand why this is the problem we face, but it absolutely is. Before announcing how it is to be solved, I would spend some time trying to understand what sort of predicament this is, why it makes us uneasy, and see if we can learn, from the situation we are in, if it is possible to get out of it, and if it is, how.
  • How Useful is the Concept of 'Qualia'?
    If there are other sources of our experiences (and there are, not just dreams and hallucinations, but biases, errors, and features of processing), and those sources aren't separable after the fact (and they're not), then there's always an unknown about whether we're seeing an object, some feature of processing data about it, or something else entirely.Kenosha Kid

    This is a helpful summary.

    All we know is the river. We have reason to believe the river itself has tributaries, smaller streams that feed into the river. There are stories, but no one can reach the place where you could see these tributaries first flowing on their own and then mixing themselves with the river. We believe that when we scoop up a handful of water, the waters of many tributaries drip from our hands, but we cannot name the source of even one drop, so it is all, for us, only part of the river.

    Well, I’m of two minds about this.

    On the one hand, the sorts of things you refer to, you can refer to them because they have known effects and conditions. Mirages are interesting but they don’t cause you to underestimate the population of African countries; priming bias is interesting but does not cause the Georgia blacktop to shimmer in August. It’s not all water, but a stew we’re dealing with: some ingredients, like the seasoning, are so thoroughly mixed and have so affected others that they cannot be separated, but you can still spot a bit of carrot and identify it as stew-flavored but still recognizably carrot. If we could not point out optical illusions, biases, and the like, and distinguish them from normal perception and inference, your argument couldn’t even get off the ground.

    On the other hand, I am convinced by arguments from many quarters that we begin our questioning in the river of experience; we cannot step out of the river and observe it as it flows by, study what goes into it and where it comes from. We can identify some things as they go by, and we can make a science of that, but it is not the science of what flows by that tells you you’re in a river, and it is not that science that could tell you what the nature of that river is.

    I was going to say something else: the casting of everything as uncertain has a sort of methodological modesty about it — like finding a room a mess and cutting off arguments about who left that plate on the table, and who was supposed to have put the LEGOs away, and saying: it’s a mess now, however it got that way; everyone contributed (we assume, but perhaps falsely) so it’s simplest for us to ignore all that and clean it up together. That’s a pragmatic decision, and it will work — maybe! — but it’s an assignment of responsibility rather than a determination of it, and it’s a mistake to think that because we can assign responsibility that’s all there is to it, and especially to think that when we effectively don’t assign responsibility — by assigning it to everyone — that no one was in fact responsible.
  • The Essence Of Wittgenstein
    We can't say that any particular game is the home, so it's rather meaningless to say that every game in which it appears is a "home" for it.Metaphysician Undercover

    I don’t think so. I think there are strong objections to the single home theory, but they don’t touch the idea of a word being at home in a language-game, having a role or a function. It’s easier to see in the negative: if you’re working on a bit of carpentry and you have the wood, hammer, nails, screws, drill, ruler, sandpaper, and so on, then the soldering iron doesn’t belong here.

    With words, it’s a little harder to be that simplistic because there’s a chance almost any word might find some use in a given language-game, but there are telltale signs that it doesn’t already have a use — one being that it is only allowed in as metaphor. Still, you can say that when discussing politics you’ll have ‘rights’, ‘elections’, ‘freedom’, ‘policy’, ‘legitimacy’, all sorts of words, but probably not ‘chlorophyll’ or ‘aubade’. And within a particular sort of discussion, say, a nitty-gritty-detail policy discussion, people may see a phrase like ‘the public good’ as so vague in this context as to be useless and thus unwelcome.

    The homonym business — eh, it’s almost semantics. The one argument against it would be that in introducing a word into a language-game it does not already have a role in, you’re relying to some degree on people’s understanding of how the word is used elsewhere — either for the metaphor, or by making a case that there’s a strong analogy between the known use and the new one. It would be hard to pitch a known word as an empty vessel you can add a new meaning to at will. (A somewhat outlandish metaphor can do the trick. Timothy Williamson got mainstream philosophers to talk about “luminosity”.)

    One point from the other direction doesn’t seem to be brought up much: must a word have a single use in a language-game? Why couldn’t a word have multiple uses in the same language-game?
  • How Useful is the Concept of 'Qualia'?
    we see different things, when we look at the same thing.Tom Storm

    I think that's exactly the right thing to say. Full endorsement from me.
  • How Useful is the Concept of 'Qualia'?
    We see different things.Tom Storm

    Then what would you mean when you said "we see the same objects (differently)"?
  • How Useful is the Concept of 'Qualia'?
    Do you also think the external world and all the objects in it are an hypothesis instantiated in your individual brain?
  • How Useful is the Concept of 'Qualia'?
    different people see the same objects in different ways.Kenosha Kid

    And I don't see how you can say that with a straight face.
  • How Useful is the Concept of 'Qualia'?
    Our perceptions aren't functions of objects from which we can prove the existence of those objects.Kenosha Kid

    That's a really interesting thing to say.

    So does that mean that if there were a 1-to-1 mapping, we could *prove* not just what something is, but that it is? (Not just a function, but a function whose inverse is also a function.)

    If the fake scotch tastes to me, at the moment, like real scotch -- we have to fudge a lot here, previous tastings of credible scotch? an average of those? what? -- then my tasting fails to discriminate the real from the fake, and thus fails to provide conclusive evidence that my drink is scotch. But that's all 'what'. How do we get to 'that' this way? Can I similarly not discriminate between drinking purported scotch and not drinking at all? I don't think dreams and hallucinations get you there; there have to be some genuine experiences for those to be possible.

    Anyhow, is this how we would get to 'that the scotch exists (and is really scotch)'? If there were a 1-to-1 mapping, it would leave no room for mislabeling my experience? And thus no room to think I saw or drank or tasted something I did not?
  • How Useful is the Concept of 'Qualia'?
    That doesn't mean that one whiskey tastes the same as another, that your first whiskey tastes the same as your second, that whiskey tastes the same to you now as it did when you started drinking it aged 11, or that it tastes the same irrespective of whether you brushed your teeth.Kenosha Kid

    Yes, these are all possibilities, and they are the sort of thing you're interested in, as I understand it, because you're interested in how that works. And it is interesting. But it's also interesting that bathtub gin + iodine + hair tonic tastes a bit like scotch, and we'll talk about this concoction, itself, tasting like scotch. When we say, "It does taste a bit like scotch," we take ourselves to be talking about that thing, and we're not simply and obviously wrong to do so.

    I'm just trying, a little, to hold you back from, in a stroky-beard moment of your own, correcting people -- "Actually, you mean that to you, at this moment, it tastes a bit like scotch." Whether you're inclined to say that sort of thing isn't altogether clear to me.
  • The Essence Of Wittgenstein
    One might search forever, trying to confirm the word's "home" never really being sure which game is the word's "home", therefore never really being sure of the word's meaning.Metaphysician Undercover

    I’m sympathetic to your thinking in this post, but this is backwards. That is, you’re talking here about reflecting on the meaning of a word, analysing it, theorizing it, rather than using it. When it comes to use, either a word will do for your purpose or it won’t — or it can be made to work the way you want or it can’t. Think first of cases of trying to use a word for some purpose rather than of scrutinizing the word; the point of a tool is to use it when it will get the job done, not to contemplate it.

    Perhaps the idea that there is one "home" game is just wrong, and the word has a home in each different game which it is used. Then shouldn't we say that these are distinct words, like homonyms, each with its own home in its own game? On what principle then do we say that it is "the same word" used in different games? Oughtn't we say that a word is homeless, and is free to go and find a place wherever one wants it to be?Metaphysician Undercover

    I think at the end here your view has something in common with @Joshs’s: he talks about each use of a word as something like inventing a new use for that word on the fly, extending or redefining its meaning with each new ‘application’.

    But doesn’t the ‘words are homeless’ line of argument contradict the ‘homonym’ argument? It is the same hammer you use to drive this nail and that, to remove the pin from a hinge, to knock a dent out of your wheelbarrow. So what do we want to say? That it’s a poor tool, or maybe no tool at all, that has only a single use-case? Or that all of these uses are in some (analyzable, theorizable) sense ‘the same’ — maybe, striking an object so as to cause it to move? (But of course you can do more than that with a hammer.)

    I think we do better to take in more rather than less of what’s going on, so that we can see the hammer being a part of — being ‘at home’ in — each ensemble of tools and practices where it is useful (cabinetmaking, house framing, tractor maintenance, surveying, etc.), but not part of others where it is not. I’d lean toward multiple homes, with both hammers and words. Someone used to using a hammer in only one way for one sort of job might be surprised to find other people think of it quite differently, and the same thing happens with words sometimes. (Someone might use a chisel as a doorstop for years without the slightest idea what it’s ‘really’ for.)
  • How Useful is the Concept of 'Qualia'?
    Cakes do have properties that reliably produce specific taste experiences when eaten by the sorts of creatures they were made for.
    — Srap Tasmaner

    Except they don't. It is precisely because there isn't a 1-to-1 map between the chemical constitution of a glass of wine or piece of cake and how it tastes that it's interesting.
    Kenosha Kid

    I’m not sure what you’re saying here, but my claim is that — plausible or not, convincing or not — the following is not simply incomprehensible:

  • How Useful is the Concept of 'Qualia'?


    As I understand it, the point of talking of qualia is in part to make room for our (affective) reactions to our perceptions. If there is something it is like to see this particular instance of red, then I don’t just have the experience of seeing the thing which is this particular red color, I also experience myself experiencing it, am aware of having the underlying experience, and thus can thematize my experience reflectively or respond to it affectively. So there must be an intermediary, a quale, which is not just an artifact of the process of my experiencing things — not just some ephemeral, intermediate step — but an object that I can be conscious of, the end product of at least some phase of the process of experiencing things.

    The need for such a thing arises because the way people ordinarily talk about things seems, in some circles, sometimes, to be plain wrong: we are inclined to say things like, “Oh my god! This cake tastes amazing! Here, have a bite!” In everyday conversation, we attribute to objects properties that, we have it on good authority, they do not have ‘on their own’ — coloration, taste, scent, all the exciting stuff in life. We know that taste ‘occurs’ only in the interaction of the cake and someone eating it, so if it is possible to experience a taste at all, this interaction must yield a product that can itself be experienced by the taster, a taste quale.

    But we were right the first time. Cakes do have properties that reliably produce specific taste experiences when eaten by the sorts of creatures they were made for. It’s the whole point of making a cake in a particular way, the whole point of treating baking as art or science, as you prefer, of working at it and taking it seriously. (That people have variations in how things taste to them makes no difference at all.) When people marvel at the colors of a sunset, it’s the sunset that is the source of their remarkable visual experience, even if that particular experience is only likely available to creatures who see like us. In short, it is remarkable things that cause remarkable sensory experiences (and pedestrian things that cause pedestrian experiences) and there’s something perverse about ignoring that, and elevating the importance of where (in our brains) and how (via our senses) we become aware of the unique things we find in the world, whether extraordinary or pedestrian.

    And people know perfectly well that, having had a range of experiences, sometimes their affective response is ‘colored’ by other factors: “Is it me, or is this coffee amazing?” “You’re in love, asshole, everything’s amazing, and the rest of us are tired of hearing it.” But there has to be groundwork laid for such displacement, experiences of things, just as there has to be for dreams and hallucinations.
  • How Useful is the Concept of 'Qualia'?
    It’s just really not clear to me how neuroscience has changed the philosophical landscape here.

    For instance, the SEP quote @fdrake posted — we already knew that was wrong, at least since Sellars, long before the advent of modern neuroscience. And Sellars is to some degree filling out Quine’s argument in “Two Dogmas”. All of this is either the shadow of Kant cast over analytic philosophy or re-invention of Kant. People just didn’t want to believe that Empiricism had died, so it had to be killed over and over and over again. (Point number one: this attachment to the idea of empiricism is worth thinking about.) If the neuroscientists tell us that we have no conscious access to any such ‘data’ and that by the time there’s something we can be aware of, it’s been scrubbed, munged, filtered, processed and modeled — yeah, we knew that already.

    We are told that we have a mental model of — unclear. Not of the world exactly, or things in it, because we are told that everything “out there” is hypothetical. If that means we have only degrees of belief — not knowledge — so be it. But Hume already figured out that reasoning concerning matters of fact is only probable, and he had also recognized that this meant he was flirting with (if not marrying) scepticism. Sorry, but I’m still not seeing anything new here.

    What is new is the word ‘model’. I haven’t read the literature, but around here it seems to be considered self-evident what a model is. (Excepting @apokrisis, who believes he has to account for how it is possible for the universe to have such critters in it as ‘models’.) Within the practice of science, in my limited understanding, ‘model’ might as well be short for ‘mathematical model’. (And that’s true even if you’re not doing statistics.)

    If that’s the paradigm upon which the psychological term “model” is based, it has a curious side-effect: the traditional candidate (among benighted philosophers of the past) for direct, unmediated perception is mathematical objects. We do not perceive them with our senses but know them directly. Insofar as the models in our brains are modeled on the models of scientific theory, they ought to be — it turns out — made of stuff that is not hypothetical and that (thank goodness!) we need no sensory apparatus at all to understand. But I can’t imagine any of the neuroscience enthusiasts around here talking about mathematical objects this way... (Ha! cross-posting with @Mww!)
  • How Useful is the Concept of 'Qualia'?


    It seems the distinction wanted is not between 'direct' and 'indirect', but between 'inferential' and 'non-inferential' -- only those are terms more appropriate to knowledge than perception. And that suggests we're still circling around the problematic nature of empiricism, as a theory of knowledge.
  • How Useful is the Concept of 'Qualia'?
    we don't have direct perception of objectsKenosha Kid

    What would that be that we don’t have? I’m seriously asking: what do you have in mind when you say there is a type of perception, direct perception, that human beings happen not to — well, “have” seems an odd way to put it — so let’s make it: what would it be to perceive “directly” rather than “indirectly”?
  • Gettier Problem.
    There's nothing more to John being a bachelor than my felicitously using the term 'bachelor'.Isaac

    Felicity here seems to be a matter of the spell you have cast, by speaking the word ‘bachelor’, coming off.
  • Is magick real? If so, should there be laws governing how magick can be practiced?
    I'd be open to using a somewhat more "magical" vocabulary to talk about the role of our thoughts in the world.

    For instance, to think of someone as your enemy is to make them your enemy, though it might be news to them. I've flirted with defining away this sort of thing, taking "thinking" here as a sort of shorthand for how we are disposed to act.

    But there is precedent for going the other way. "The mind is its own place, and can make a heaven of hell, a hell of heaven." (In a somewhat Miltonic mood, Geoffrey Hill said a poem is a "fortress of the imagination".)

    This way seems to imply a pretty heavy commitment to free will; the other way is more flexible.

    I suppose I'd like to say, there is something special about thought that isn't really captured in the somewhat mechanical model I used to entertain. Magic talk seems to go a bit overboard with that, but maybe it's worth thinking about what they're trying to capture.
  • Is magick real? If so, should there be laws governing how magick can be practiced?
    Blech.

    ‘Will’ and ‘action’ look like pretty crufty categories so I won’t take sides there.

    Insofar as the idea of ‘magick’ is just a souped-up ‘will’, I’m even less interested.

    It’s plain enough what Crowley, as quoted, is getting at — that if willing it makes it so, that’s magick. (Feels like that lets in some other class of somethings, but they might be just as wonky. Contrition — that might be one.)

    I don’t understand the regulation question. The only Crowley quote I had ever heard was “Do what thou wilt shall be the whole of the law.”
  • Is magick real? If so, should there be laws governing how magick can be practiced?
    Aleister Crowley once defined magick as, "The science and art of causing change to occur in conformity with Will."Bret Bernhoft

    Did he really?

    That seems broad enough to include regular, non-magickal action.
  • The Strange Belief in an Unknowable "External World" (A Mere Lawyer's Take)
    There’s a bit of the argument that strikes me as a little odd.

    1. Bees and humans perceive the flower differently.
    Therefore,
    2. We cannot know what the flower itself is like, but only how we perceive it.
    Therefore,
    3. Jack and Jill know only their individual perceptions of the flower, not the flower.

    In (1) we’re comparing the perceptions of species but in (3) of individuals. That makes the “we” in (2) ambiguous: it could refer to anyone qua human being, or to anyone qua individual human being.

    Why don’t we feel the need to distinguish how each individual bee perceives from how every other bee perceives?

    What in the comparison of the perceptive ‘styles’ of species underwrites distinguishing the perceptive ‘style’ of one human being from another? If (2) says “Members of a species can only know how members of that species perceive the flower,” how do you infer that Jack knows only how he perceives and Jill knows only how she perceives?
  • The Strange Belief in an Unknowable "External World" (A Mere Lawyer's Take)
    found as far back as at least Platonic “knowledge of” vs, “knowledge how”, and later in Russell’s “knowledge by acquaintance”, and a veritable myriad of similarities in betweenMww

    I think the issue reaches pretty deep: roughly, is the rational, the cognitive, derived from the non-rational, the non-cognitive? This is, it seems, the principal issue in philosophy of mind. It is the issue Sellars was dealing with in “Empiricism and the Philosophy of Mind”, in which he argued, broadly, that there is no magic thread to stitch the two together — “magic” here meaning: has one side that counts as non-cognitive and connects cleanly to the non-cognitive (our senses) and has another side that counts as cognitive and connects to our conceptual judgments and so forth. (It’s “sense impressions” of some sort that are supposed to pull this off, and Sellars argues nothing can possibly be what they need to be for empiricism to work.)

    It’s also the issue that Wittgenstein was dealing with in arguing that the foundation of, well, everything we do, is, well, what we do, i.e., our practices, and our practices are something we are trained in, and must just accept, not something we analyze and judge and understand rationally. (Sellars was here too, and has a much more complicated version of the same stuff in his article about language games, offering a solution to the problem that apparently it would be impossible to learn a language game.)

    That there are these two realms seems inarguable. The choices seem to be basing one in the other, or treating each as sui generis. One source of the temptation to base the rational, the cognitive, and the linguistic, i.e., everything we think of vaguely as λόγος, in something not λόγος, is that children seem to make the passage from not having such capacities to having them, and mankind, we assume, made such a transition at some point. Darwin has complicated that question somewhat, and Chomsky after him.

    It is also possible to read Wittgenstein as denying that there are two realms and denying there is such a transition to be made: maybe words like “know” and “true” and “meaning” are just words like any other words that we learn to use in certain ways and not others, and maybe they shouldn’t be thought of as ‘special’ or ‘central’ for philosophy. You could read the ‘language-game’ approach as suggesting that there are rather more than two realms, but they’re all just a matter of how we use language in different ways for diverse purposes in varying circumstances.

    I don’t purport to be able to dismantle the model of “knowledge that” underlying everything we do, at least not right here and right now. I think it has a somewhat dubious provenance — what we might call an “intellectualist prejudice” — and I think a great deal of its attraction lies in making analysis tractable. It is also resistant to empirical critique because any calculation or inference that it is plainly implausible to suppose we do, whether in going about our daily lives or in performing some extraordinary feat of skill, can also be swept into the rational and plenty-fast-enough but unconscious processes whirring along in our brains, whether those processes are merely postulated or actually supported by some evidence.

    But I do think there’s room for an alternative story, one which doesn’t begin by stipulating that the foundation of all our interactions with the world amount to predication — observing objects and events and classifying them, making inferences from our classifications, and so on. I think it is possible to take other ways of interacting with things as more fundamental.

    One example I’ve had on my mind for a little while is reading. My daughter mentioned to me recently that now and then she kind of burns out on reading and begins to actually notice letters on the page rather than reading them. When you have mastered the skill of reading, and your brain isn’t messing with you, we would have to say both that you see the letters, obviously, else you’re not reading, and that you do not see the letters, that you see right through them and your mind is filled only with their meaning. You have to see them without noticing them. Heidegger talks somewhere about the tool nearly disappearing from the craftsman’s mind as he works, and that it only stands out as something to be contemplated when it’s broken, or missing, or the wrong tool for the job at hand. So it is generally when we use rather than mention words — you pass right through however the words are physically inscribed (in ink or air) to the meaning, and maybe right through the meaning to a response, an action, a reaction, a feeling, a reflection, an occasion of knowing something new. We begin learning to read by looking intently at each letter, assigning the proper sound to it, and all that, and perhaps to become skilled at that process of observing and classifying individual stimuli as a b or a d or a p means precisely for it to become faster and unconscious (to move from System 2 down to System 1), but it is still an open question what supports even those steps of learning that are later ‘automated’ to become ‘second nature’. Learning to read is a specific sort of activity, embedded in a terribly sophisticated environment, and only possible for an already very sophisticated person, who can already speak their native language fluently and understands quite a bit about learning new things.

    I don’t have a knock-down argument that the cognitive (rational, linguistic) is grounded in the non-cognitive (non-rational, non-linguistic). I’m not sure there can be one. On the other hand, the arguments in the other direction come so easily that they are unconvincing, and involve a disconcerting amount of handwaving. There is, for instance, a story about a music student who was writing a paper about Coltrane; he agreed to talk to her about his music. She brought along a transcription she had made of one of his solos that she wanted to ask about. He tried to play from her transcription, but, after a couple of tries, he gave up and told her it was “too hard”. How hard would it be to concoct some explanation about the sequence of decisions he ‘must’ have made when he improvised that solo and all of the factors he was taking into consideration every, say, tenth of a second, and explain the entire performance as if he were doing a peculiar bit of math, rather quickly, in his head, and unconsciously? It’s easy to describe such a ‘mechanism’ but pointless, because there is no chance at all that you could describe an algorithm that could predict what he was going to play. All you’ve really achieved is an alternative description of what he actually did, plus a claim that it was perfectly understandable because we could so describe it. (It’s a sort of ‘argument from notation’.)

    My instinct is that we see in the way a musician or an athlete or a craftsman acts, in the ‘decisions’ they seem to be making, an involvement with the things in the world, a responsiveness, that underlies everything we do, including knowing. It’s just a bit more spectacularly on display when it’s Coltrane playing saxophone than when it’s just me making a pot of coffee. I’d like to think of this ‘involvement’ as being prior even to the distinction between cognitive and not, but I think inevitably from the cognitive side it’s just going to look like ‘not’. Oh well.
  • The Strange Belief in an Unknowable "External World" (A Mere Lawyer's Take)
    How do we know what it is that makes knowledge possible.Mww

    Maybe. But there are two other ways to ask that question: (i) what makes human knowledge possible? and, in a somewhat different vein, (ii) what makes human beings knowers?

    So, yes, one version of the question would make room for an argument that language ‘comes first’, or some conceptual apparatus, or history and culture, or biology. Indeed, how do you make rabbit stew?

    The other version might be brought out by the tired analogy of describing the progress of a game in terms of its rules: that everything happening on the field or on the board is in accordance with the rules leaves out almost everything about what people are doing when they play. And not just their motivations, but everything about the way they play.

    Ever watch little kids playing soccer? It’s like watching a flock of birds chase a soccer ball around the field. They are playing in accordance with the rules (mostly — offside is confusing), but their understanding of the game of soccer is not the same as you would find among adult players. One reason for that might be readily described as cognitive: there are things about playing soccer they do not know. But there are also things they do not know how to do in the other sense: they cannot do them; they lack certain skills. And there are things about playing soccer you cannot understand if you lack those skills. The development of a skill new to you can change how you understand the game; the development of a skill new to those you’re playing with can change their understanding of the game. (Imagine only one player on the field figuring out how to chip — deliberately rather than by accident — and how that would change everything.)

    My question was meant to land around here. You could produce an account of playing soccer that looks a bit like the rulebook, but you will miss almost everything, not only about why people do what they do when playing soccer (leaving aside why they play at all), but also how they play, how they understand the game, how they understand what they’re doing, and some of that is not a matter of cognition but of skill. If having some skill is a prerequisite for having some cognition, then by ignoring skill you would miss an entire class of cognition, and mischaracterize what’s left.
  • The Strange Belief in an Unknowable "External World" (A Mere Lawyer's Take)
    ... the limit of knowledge ... the limit of knowledgeable things ... a limit on knowledge ... a limit on experience.

    ... some say experience is knowledge ...

    ... that which we can know about ... a limit on knowledge itself ... to know of them ...

    ... new knowledge ... old knowledge ...

    And we don’t even know what we don’t know.....

    AAARRRGGGG!!!!
    Mww

    Suppose we wondered, is knowledge either the fundamental way human beings relate to the world or one of the fundamental ways human beings relate to the world? If either of these is the case, you should be able to abstract away the rest of the human being, and their other ways of relating to the world, and still produce a full account of human beings qua knowers. (Knowledge in this sense would have, at the very least, logical co-priority, so to speak.)

    As a step halfway toward answering that question, you might abstract away the rest of the human being and see whether you can produce a full account of knowing. (You could, for the sake of the experiment, ‘pretend’ that knowing is fundamental.)

    How would you know if you had failed?

    That is, suppose, for the sake of argument, that knowing is not quite fundamental, and in abstracting away the rest of the human being you had cut away something essential to understanding the character of human knowing.

    How would you know that the account you produced was not an account of human knowing, but only of how human beings would know if they were completely different — that is, if knowing really were fundamental?
  • Philosophical Woodcutters Wanted


    Is this the sort of thing you’re talking about?



    My son tells me this sense of living through the collapse of civilization is pretty common among twentysomethings.

    There was also an episode of RadioLab about the curious rise of nihilism in popular culture (“In the Dust of this Planet”).
  • The Strange Belief in an Unknowable "External World" (A Mere Lawyer's Take)
    What do you conclude from this?frank

    That one day I'll have to study Kant, because I can't make any sense of the version of Kant presented here. I don't feel warranted to conclude anything more than that.

    If you want a speculative answer, I could say this: the connection between the footprints and the man who left them, that the one indicates the other, that's "part of the world". By that I mean, this connection is not something we impose on the world, but something we encounter in it. That connection is what grounds our inference, from the footprints in the flowerbed, to the man who was there. The world we find ourselves in is intelligible.
  • The Strange Belief in an Unknowable "External World" (A Mere Lawyer's Take)


    One reason this matters -- aside from whether you get mileage out of 'evidence' as a metaphor -- is that evidence is intelligible. Footprints are a natural sign; like other sorts of evidence they indicate something else in the world. Those sorts of connections make the world intelligible.

    The issue here is a sort of sleight-of-hand: something defined as being unintelligible is introduced as if it were part of the intelligible world, like the airplane represented by a radar blip, or the flower represented by its scent. These are connections we are familiar with. Some of us even know something about how those connections work. Whatever we're attempting to say about the plane 'in itself' or the flower 'in itself', it's nothing like this.
  • The Strange Belief in an Unknowable "External World" (A Mere Lawyer's Take)


    Maybe I can be even clearer:

    When @Hanover talks about phenomenal experience, he uses the word "evidence", as if phenomena could be understood as evidence for noumena. I don't see how, do you? In what sense is the relation of the two evidentiary?
  • The Strange Belief in an Unknowable "External World" (A Mere Lawyer's Take)


    Not sure my point really came through.

    Footprints in the flowerbed are evidence of a man because there is a connection between a man and his footprints, which we'll tend to call 'causal', and because we're familiar with this connection, so we say things like, "We know that men leave footprints."

    If we smell the scent of gardenias, we expect to be able to find gardenias somewhere nearby, because we know they're the sort of thing responsible for that scent. If we knew nothing about gardenias, we wouldn't take such a scent as evidence that they are nearby.

    So I'm wondering how our senses can give evidence of something that we not only know nothing about, but can know nothing about.
  • The Strange Belief in an Unknowable "External World" (A Mere Lawyer's Take)


    I keep wondering how the idea of 'evidence' is being used here. Footprints in the flowerbed are evidence of a man, who was standing there, or of his having stood there, but can they be "evidence" of something we, in principle, can know nothing about? How? How would we establish the evidentiary relation between the footprints and the something or other? What could we mean by claiming that there is such a connection?
  • Bannings
    "You get nothing! You lose! Good day sir!"Baden

    But that was a test. The banned cannot return his everlasting gobstopper, and you will never say, “So shines a good deed in a weary world.”

    Just as well.
  • What is Being?
    But one needs to be aware that our handed-down grammar biases us toward a certain way of thinkingJoshs

    Well, there's a lot to say about language. I wasn't wading into those waters, and I'm still not, but don't let me stop you.
  • What is Being?
    You do admit thoughTheMadFool

    Of course I don't. I explained an idiom, and its bearing on a quote, that's it.
  • What is Being?
    I was just explaining what Josh posted, which would make no sense whatsoever if you didn’t know the German idiom Heidegger refers to. No, the point is not to study grammar to understand being.
  • What is Being?


    It helps if you know that the common way to say ”There is ...” in German is “Es gibt ...”, which is literally “It gives ...” It’s an impersonal construction like “It’s raining.”
  • What is Being?
    can I appeal to you personallygod must be atheist

    Sure.