• How Does Language Map onto the World?
    Nice conclusion.Tom Storm

    Honestly, probably not. Okay as a theory of communication or of social interaction -- I mean, still preposterously reductionist! -- but not an account of language at all.

    That's the thing. Even if you take the kind of sociological view of language (and oppose the sort of representational view), you might still want to explain what kind of a thing language is that it can be used for communicating or other social functions (signaling of various kinds, etc).

    On the other hand, even if language has features you don't find in other animal signaling systems -- and it does -- that could be only to say that our signaling is more complicated but not different in kind.

    I guess my remark landed somewhere around there, but I couldn't guess whether that's right.
  • How Does Language Map onto the World?
    Hence the point is not to understand language but to use it.Banno

    The point of what?
  • How Does Language Map onto the World?
    Thought I'd address the actual topic for a moment.

    While we may wish to reject the materialist realism of science as a form of metaphysical prejudice, we cannot do so in favour of an alternative metaphysical framework that also claims to describe an ultimate reality be it a new form of idealism, panpsychism, or some Hollywood influenced Matrix version of 'we are living in a simulated reality' without having a theory of language that explains how any of these realist claims are possible. — Lawson

    I think the beauty of Lawson’s promise (which I still don’t understand) is that if there’s no realist theory of language then discussions about effete topics like idealism and panpsychism bite the dust for good. That would be an interesting development.Tom Storm

    But also apparently materialism, all of which just amounts to this:

    Most of the issues that raise a ruckus in philosophy are metaphysics. They are matters of point of view, not fact.T Clark

    But I'm a little confused why he cast this in terms of language and how the claims are made. Presumably because verification is off the table from the start? And the claim that there is something wrong with the very words in which idealism, say, is proposed -- that *is* the old logical positivist diagnosis, that you're not even really saying anything.

    On the other hand, if you don't think of language as the home of claims about reality, there's no particular problem with metaphysics. If your endorsing panpsychism gets you a job or gets you laid, it's just another day at the office for language.

    Of course language might also be useful for doing science. In which case we're back to

    Such an ephemeral ontological object cannot really be the subject of any serious investigationIsaac

    Or to

    I'm not sure what's to be gained from lining up on two sides to say "There's one kind of thing!" or "There's two!!" More interesting is what you can do with such a claim. Naturalism is pretty straightforward as a working assumption, rather than a dogma; you know how to proceed, what sorts of things to look for, how to design experiments, how to craft a research program. I'm not clear what the other side offers except a defense of people's common pre-scientific beliefs.Srap Tasmaner

    (@Isaac likes it when I quote myself.)

    It feels like a pragmatist take on language ought to fit better with science-engendering prejudices (or metaphysical assumptions) than with science-blocking ones, but it's beyond me at the moment.
  • How Does Language Map onto the World?


    Cool. I don't know anything about that stuff.

    Honestly, I'm probably over-fitting by suggesting it was even a communication-related selection.

    I've often found the gestural origin of language somewhat appealing because speech production is still gestural once it moves from hands to lips and tongue and vocal cords. Anything that gave us fine motor control might have jump-started the ability to make more variegated and precise sounds, so that could be part of the story.

    Not the whole story though. Speech production is complicated, and I've always heard that children understand far more speech than they can produce, so it doesn't make sense here either to give people an ability they just layer language on top of.
  • How Does Language Map onto the World?


    For a quick illustration: Grice tells the story of a man to whom some college at Oxford wanted to offer a fellowship, but he had a dog, and the rules forbade dogs, so the fellowship committee "deemed" his dog a cat.

    Grice only comments that our use of language may involve quite a bit of deeming.

    (And he himself proposed a theory based on infinitely deep chains of intensions and recognitions -- you recognize that I intend that you recognize that I intend that... And he admits that can't ever really be completed; hence you'll have to "deem" some level complete. In a very similar way, David Lewis concludes that probably no one ever really quite speaks a Tarski-style language, so he works a bit at how they might count an approximation or an equivalence class as success.)
  • How Does Language Map onto the World?


    One thing that's really tricky about the question of realism in language is that it's not just a question of theory; even if our best theory says that language does not map onto the world, the idea that it does is part of our practice. What looks like it could be a misconception, or an unreachable, unreached, or even un-aimed-for goal, is operative within our use of language, plays some kind of role.
  • How Does Language Map onto the World?
    Language is a crowbar, a smokescreen, a mirror, all kinds of things.plaque flag

    Another way I've looked at that is that we use language in place of the grunts and calls and mating dances and dominance rituals and grooming and all that other communication non-linguistic animals engage in just because, well, we've got language and it's usable for that. Which is to say we engage in what amounts to non-linguistic use of language, and that muddies the waters if you're trying to figure out whether language is different and how.

    But maybe language is just animal communication only moreso. Animal communication with better tech (recursive syntax and all that). It's of course true that people write sonnets to get laid -- or claim the mantle of "truth" to control others -- but you won't see anything else if you don't look for anything else. I used to say it's an accident that in slightly upgrading our capacity for communication, evolution selected for something that was far more powerful than we could possibly have needed -- and here we are, a globe-spanning civilization. Evolution aimed for better chitchat and gave us language, and we're still trying to understand what happened.
  • How Does Language Map onto the World?
    The acquisition of capability for learning linguistically is secondary to learning from interactions with the world.wonderer1

    But (1) language production and consumption is interaction with the world, social interaction, and (2) one of the things I wanted to get at -- and in a way, try to push back on the "map" metaphor -- is that it's not like children first acquire a complete conception of the world and then "paint" language onto it -- they have to do it all at once.

    It seems obvious that a lot of basic learning mechanisms are common to us and our non-human relatives, but it's also apparent there are mechanisms specific to acquiring language, and it's a question whether some of the basic mechanisms are a bit different since they're part of a system that is also acquiring language. Is there an additional constraint on at least some of the concepts we form that they must be, so to speak, language-able?
  • How Does Language Map onto the World?
    a pragmatic understanding of language, which doesn't address the question of realismTom Storm

    That was the idea, yes, but I'm not sure it excludes what we want out of realism. This is precisely a question about the cognitive capacities and behavior of language-users. One reason to focus on learning when faced with such an issue is to "catch it in the act." Children are the ones who have to manage this mapping somehow; if it's a real thing (heh) then they're the ones who have to connect "ball" in their mouth to ball in their hand.

    Put another way, if you're going to see it anywhere, you'll see it there, so look at the research on language learning and if that's not what it looks like, then this mapping is a myth.

    I can give a small example of what I have in mind -- I think I'm remembering this from Rosch's prototype theory of concept acquisition. If you imagine a bunch of concepts arranged along a scale of abstractness, something like cocker spaniel-dog-mammal, then children tend to come into that scale in the middle, learning dog before the more specific or the more general.

    Now we can ask how this partial language maps onto a partial world. Dog applies to every breed, and adults are fine with that. But what about in the other direction? Indeed children will over-generalize their use of a concept while they lack the more general term, so, if dog is the first mammal concept they acquire, or the first four-legged mammal, they'll apply it as if it were what we use mammal for: cows are "doggies", cats are "doggies", and so on. (In Monsters, Inc., Boo calls Sully "kitty" -- those guys at Pixar are smart.)

    Realism finds its clearest expression in the model-theoretic description of language, where you have a complete, closed set of symbols and a complete, closed set of objects, and they are matched up to each other according to some scheme. (It might be more precise to talk about systems of differences among symbols and among objects.) But to talk about natural languages, you have to allow the collection of symbols to grow, and allow the collections of objects that satisfy those symbols to shift, because the satisfaction scheme shifts, most dramatically when the collection of symbols is still small, but growing rapidly, as it is with children.
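A toy sketch of what the shifting satisfaction scheme looks like, using the "doggie" example above. Everything here -- the objects, the words, the stages -- is invented for illustration; a real model-theoretic treatment would of course be far richer.

```python
# A toy "interpretation": a mapping from symbols to the sets of objects
# that satisfy them. As the child's vocabulary grows, the scheme shifts
# and the extensions of old words shrink.

objects = {"rex", "bessie", "whiskers"}  # a dog, a cow, a cat

# Stage 1: "doggie" is the child's only four-legged-mammal word,
# so every four-legged mammal in the domain satisfies it.
stage1 = {"doggie": {"rex", "bessie", "whiskers"}}

# Stage 2: "cow" and "kitty" are acquired; "doggie" narrows.
stage2 = {
    "doggie": {"rex"},
    "cow": {"bessie"},
    "kitty": {"whiskers"},
}

def satisfies(interpretation, word, obj):
    """True if obj falls under word according to this interpretation."""
    return obj in interpretation.get(word, set())

print(satisfies(stage1, "doggie", "bessie"))  # True: the cow is a "doggie" at stage 1
print(satisfies(stage2, "doggie", "bessie"))  # False: not once "cow" arrives
```

The point of the sketch is just that "doggie" is satisfied by different objects at the two stages, even though the word itself hasn't changed -- the scheme has.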

    This is just one approach I remember a bit of, and only a tiny start on confronting the issue of realism using this research. What do we say about the child seeing a field full of cows and excitedly announcing "doggie!" or "doggies!"? One thing is clear, that the child would not have been "trained" to say this, because that's not what adults say, so an account that passes by issues of categorization is missing something. Is it plausible to focus only on categorizing the communicative situation, and describe the child as thinking, this is an appropriate occasion for uttering "doggie"? There's still over-generalization, but it's different. --- And what happens when the child does acquire cow but still doesn't have mammal? Does that mean cows are, to them, a kind of dog?

    One thing is clear from trying to write about this: hard as it is, it's easier to talk about a partial language than a partial world, but I think we have to find a way to get at the latter as well. If you don't know anything about chess and watch a game, you see everything the players do, under one scheme of description, but I really want to say that "black's king is in check" is not a possible fact for that observer -- neither true nor false if you must -- and you could describe this as not being able to categorize positions by whether black's king is in check. We might say the observer's world is not partial in the sense that it has less stuff in it, but that it makes slightly less sense. But it's also true that the observer cannot see check, and so there is something in the world of the players that is not part of the observer's world. (I think James somewhere gives an example of a dog, seeing perfectly the interactions among humans but attaching necessarily different meaning to it.)

    So there's some stuff about realism.
  • Why should we talk about the history of ideas?


    Yeah, that's not bad. I've figured out what philosophy really is dozens of times, but I'm starting to think you can just not do that.
  • How Does Language Map onto the World?


    Just realized there's another way to put this: just as DNA is in some sense instructions for physical growth, I'm using "framework" to mean something like instructions for mental growth, what I was reaching for with the word "learning".

    @Janus quoted Bateson the other day, from Mind and Nature, and in that book he talks about his little "how to tell this thing was once alive" test, and it comes down to growth: living things have to have grown into the shape they have. So it is with an individual mind, a community, a culture, all things that grow and learn and adapt.
  • Why should we talk about the history of ideas?
    One way (hardly the only way) to look at philosophy historically is as a zoo of intense personalities who react to those who came before and influence those who come after.plaque flag

    Yeah, but it's not only other inmates of the zoo that matter, not by a long shot, especially if it's more like your

    fundamental metaphor for realityplaque flag

    that matters most. It's someone who made a strong impression on you, or it's something in the zeitgeist, or it's the character of the people you interact with over and over, every day. Not just other philosophers. --- And if it is, we don't need the broader history of ideas but only the history of philosophy.

    I'm sympathetic to the rest of both of your posts, but I'm still a little hesitant to put narrative front and center. I think it may be the fundamental mode of language use, and thus everything built on language use, but there are layers of life management below language, and I can't quite see language displacing those.



    Nothing much for me to take issue with there, but I still have the issue I started with. What I haven't heard yet from anybody is some sort of full-throated defense of, I don't know, 'decentering' philosophy in philosophical discussion, not taking its self-image seriously, and treating it instead as only a part of Something Bigger, something like the history of ideas, the Great Story of Culture, whatever. --- I'm trying to keep an open mind, since my instinct is to treat these moves as some species of informal fallacy, which is, it happens, the sort of thing I find interesting sometimes, but I was looking for a more charitable take.
  • How Does Language Map onto the World?
    It remains unclear what you mean by "framework"Banno

    That's fair -- it was kind of a placeholder.

    I started to type out my old answer, but on second thought I'll say this: your framework is a description of how you learn. That's how you update your understanding of yourself and your environment through behavior, even if only mental. Because we're highly social, that will include how we justify and validate beliefs for each other, but there's no reason to think that's a template for all the learning we do.

    So maybe it would help if you tied all this back to the OP?Banno

    I would approach the issue in the OP by looking at how juveniles learn. For human juveniles, that includes learning language, and that's the focus of the OP, but you have to wonder if some of the learning mechanisms and strategies of our non-human relatives are still operative in us, so you have to look. A human infant does it all at once, so we would want to know if there are relatively independent subsystems that differ little from other animals, and if there are some that are colored, modified, reshaped from the beginning by the telos of language acquisition -- doing the same things with a different meaning because the system they are part of is different -- besides the ones that are unique to us and involve language.

    How does language map onto the world? The obvious place to look is children, who have to learn how they work, how the world works, and how language works, and figure out how it all connects.

    I've always thought it's interesting that language is usable from the earliest stages of acquisition: you can say "ball" before you can say "I would like the ball now, please," and that works. Languages are partial-able, as we use them. Now throw in that the child's understanding of themselves and of the world is also partial, and that has to work too. And these have to be linkable, in this partial state, and that has to work.

    And that never actually changes. Language, world, self --- we never achieve full understanding of any of these, so we go on our entire lives with this partial understanding, just as when we were infants. And it works.

    No answer there to @Tom Storm's question, but that's where I'd start.
  • Why should we talk about the history of ideas?
    First of all, why is that paragraph 'weirdly factually wrong'?Wayfarer

    Because you've never been clear on the difference between analytic philosophy and ordinary language philosophy. We don't need to go into it here.

    But now I get to ask the same question again, because this was another post just like the one I was asking about. All very interesting I'm sure, but what effect was the history of the history of ideas supposed to have on me? (Maybe you're just "catching me up," in your view, so we can have a proper conversation, but as it happens I already know what the history of ideas is and I've read some of the stuff you mentioned.)

    The pedagogy is designed to teach people to be employable rather than give a deeper insight.Moliere

    I don't think that's quite it -- obviously for some things, sure, but science and mathematics are just different, and I don't think it's for the reason you suggest. ---- But it looks like this thread has already changed topic! Now it's a thread about how we teach science and how we teach philosophy, and that's fine.

    Ridiculous how education essentializes and splits up technical topics cleaving it of any human element.schopenhauer1

    I'm not sure that's true either, if you recognize that there are skills and technical background needed to do this sort of work, and the curriculum is designed to get you up and running, able to do mathematics, to do scientific research -- and those are great human endeavors! They don't have to focus on the human element because you are the human element, and if everything goes right, you'll be thrilled to head to campus or to the lab or to the site every day because you get to do science all day! This system largely works, as you can see just by peeking into any lab at the nearest research university: grad students listening to some tunes and doing their work -- a perfect life if there were more money.

    St. John's, I believe tries to teach students through primary sourcesschopenhauer1

    True. I've known some Johnnies very well (and married one, a long time ago). They learn geometry from Euclid and physics from Newton.

    And I don't think you would want philosophy to exude that kind of authority where the right views are already there to be learnt?apokrisis

    Absolutely. But why? Because we don't have any certainty to convey... With the sciences -- geez, with medicine especially, it seems -- it's becoming commonplace for half of what you learned in school to be falsified by the time you retire if not much sooner. (There are some numbers on this in The Half-Life of Facts, but I forget what they are.)

    But what you learn from close reading of the big names is as much the way they thought as what they thought.apokrisis

    Certainly. When I was young, I read philosophy in a believing frame of mind, acquiring ideas I could endorse or not. Got older and for a long time have read philosophy with little interest in the 'doctrine' at stake. I enjoy Wittgenstein primarily because we have such an extraordinary record of an interesting mind at work. I just like watching him go, and I think I've learned from how he thinks. I've enjoyed watching Dummett at work because his command of logic is formidable and he sees things I have to work through slowly. Sellars also has an unusual mind. I even like the tortuous way he writes. He's every bit as intricate as Derrida, but not for the same reasons at all. ---- Anyway, big yes, and I think this is an excellent specific reason for reading original texts, but then that only throws into sharper relief my original question: what does the history of ideas contribute to such an experience?

    Yet how would you set up Philosophy 101?apokrisis

    It used to be my ambition to teach Philosophy 101 using Calvin and Hobbes as the text. (Wittgenstein somewhere said you could teach a class in philosophy using only jokes for your text.) There is one textbook I admire, Contemporary Epistemology by Jonathan Dancy, later known mainly for writings on ethics. Begins with two problems, Gettier and skepticism, and then goes through historical accounts of knowledge noting how they handle these two key problems or fail to. I liked the problem-oriented structure. It's a bit ahistorical in one sense, a very mainstream Anglo-analytic sort of thing, but it really engages with a lot of stuff and brings it to life. Nice book. Probably not quite what I'd be after now, but a solid example of how good a philosophy textbook can be, in my view.

    There are narrativesPaine

    If I can abuse your post a bit, several people have suggested that knowing the history of a philosopher's ideas helps them understand those ideas, but there is another way to go here, which is to suggest that it's narrative that matters. (Hey @Isaac.) That is, that we don't naturally deal with 'naked' ideas, but with ideas as they occur within narrative -- that's what our thinking is organized around and pretending to discuss an idea 'in isolation' means you're probably just embedding it in some other narrative without acknowledging that transfer. And that probably means distorting its original meaning -- but that might not be our primary interest anyway, as I've noted. --- At any rate, if we can only deal with ideas as elements of some narrative, we might as well face up to that up front, even if there's no decisively privileged way to do that.
  • How Does Language Map onto the World?
    I explicitly proposed that the issue is one of the choice of grammarBanno

    Or that the difference between realism and anti-realism is more one of choice of grammar than profound ontology? But that is all philosophy is - wordplay.Banno

    Couple things: that's a question, not a proposal; also, it's hard to know what the proposal implied would amount to, since you follow Davidson in claiming there are no alternative frameworks -- but if not an alternative framework, then what's the difference a different grammar makes? Style? Are you claiming that realism and anti-realism say the same thing in different ways? That doesn't sound like you. Or is it that we're all realists, but anti-realists don't admit it (perhaps not even to themselves)?

    You've a few jokes, but nothing substantive.Banno

    But that is all philosophy is - wordplay.
  • Why should we talk about the history of ideas?
    generally I understand the idea by understanding the ideas' storyMoliere

    it makes sense to understand the context of what has gone before so as to ground what seem the concerns now.apokrisis

    That's two votes for better understanding through history, which it's hard to argue with. I've often wished math and science were taught with more of an eye to history.

    But what do you study when you do a philosophy degree but the history of ideas?apokrisis

    This is true, but I would put it this way: philosophy curricula more closely resemble literature curricula than they do the sciences or mathematics, and that's slightly odd. (I'm not the only one to have noticed this.) You could say that's a result of our above average scrupulousness, since secondary sources tend to get things wrong or slant them in some way -- but that assumes that what matters most is what Kant or Aristotle or Schmendrick actually meant, and that's a matter of biography, isn't it? We're supposed to be in the ideas business. Kant only matters to us because his ideas are interesting; his ideas aren't interesting because he's the one who had them.

    Studying the history of ideas helps you understand that things that were once seen as true but now aren't may be true again.T Clark

    Now that's a specific lesson, kind of a warning really. You've folded in both swings of the fashion pendulum here: what you hold true, even obviously true, may in the not so distant future be considered obviously misguided in any number of unflattering ways; and then the naysayers may themselves be naysayed in turn. There's a whiff of vanitas about the whole proceeding, looked at this way, and one might be tempted to chuck the whole thing. Or you could embrace the ephemeral nature of philosophical struggles and shortlived victories and take giddy pleasure in it -- after all, you needn't worry about having any lasting influence!

    I would say that in fact a problem is that folk skimp their history and don’t realise how much is simply being rehashed with each generation.apokrisis

    Indeed. This is even stranger than the phenomenon of fashion in philosophy. But -- coming back a bit to my original question -- it does little good in discussion to point this out, because no victory in philosophy is ever complete, and probably not even lasting. So if you point out to someone that they're taking the same view Schmendrick did in the 30s, they might just add him to the list of people to quote in favor of their position! It's most unlikely that Schmendrick's position was ever definitively refuted, only discredited in some way, or passed by at great speed by fashion. You might even accidentally cause a Schmendrick revival...

    But anyway, the history of ideas is important as it is the only way of understanding why folk tend to believe the things that they do.apokrisis

    Now here I think we're closer to a sort of Nietzschean genealogy, and I'll only remark that the implication is that why people think what they do is not what they think -- it's not the arguments and the evidence but the currents of thought they've swum through and been buffeted by. I don't disagree, but it tends not to go over well in conversation, since it amounts to a kind of cultural psychoanalysis, and that's rude.
  • Why should we talk about the history of ideas?


    Sure, I mean, the history of ideas is really interesting. Love that stuff.

    My question was not whether it's worthwhile in general, but how does talk about the history of ideas contribute to philosophical discussion? I mentioned a couple of the answers I'm somewhat familiar with. I could also have mentioned its central role in exegesis, evident in the reading threads we've had dealing with texts by Plato, Hume, Descartes -- texts far enough removed from us that you have to restore some context to understand them well. I've always been inclined to call that sort of thing "history of philosophy" rather than philosophy "proper" -- but I'm not wedded to that view.

    It's an open question to me what the place of the history of ideas, and of the history of philosophy, should be in our discussions, and I expect people to give very different answers. My starting point was wondering what @Wayfarer's point was in telling @apokrisis what he did, as quoted above. What effect did he expect that paragraph to have on apo's views?
  • How Does Language Map onto the World?


    "But -- but -- isn't it true that there are true statements?!"

    It can be hard to convince yourself -- hard even to see the possibility -- that the answer to that question does not matter.
  • How Does Language Map onto the World?
    Be clear.Banno

    I can be more specific, at least, and we'll see whether it's any clearer. This is the end of what I quoted:

    The anti-realists failure to commit amounts to a failure to understand how language functions; "the ball" is the ball.Banno

    That's quite a dichotomy there, but the interesting bit is after the semicolon: what's the nature of that little "is" there?

    "the ball" is the ball

    My issue here is not the apparent use/mention violation. It's that "is" suggests there is a fact of the matter about what "the ball" refers to. You are, of course, extravagantly on record endorsing a Wittgensteinian "meaning is use" and everything Davidsonian, so you cannot possibly mean there is a fact of the matter about whether "the ball" is the ball.

    Yet there it is, an emblem of the fundamental failing of anti-realists, that they don't understand such self-evident truths.

    Honestly, I'm not interested in either of your options. The fact that you think there is a war between realism and anti-realism, and that one of them is true and the other false, well, that's just your realism working overtime, it's realism about realism, as if there is a fact of the matter about realism. This is exactly the structure of debate Dummett was trying to clarify, that realists tend to put anti-realists on their back foot by forcing them to give yes/no answers to questions that suit the realist but not the anti-realist. It's why @Isaac -- though he considers himself a kind of realist -- considers words like "real" and "true" useful mainly for bullying your opponents.
  • How Does Language Map onto the World?
    So a realist says the ball has a mass of 1kg; the anti-realist might say that saying that it has a mass of 1kg is useful, or fits their perceptions, but will not commit to its being true. The anti-realists failure to commit amounts to a failure to understand how language functions; "the ball" is the ball.Banno

    That is to say, realists take the context of claims for granted, and pretend there isn't one, while everyone else admits that truth is relative to exactly the sort of framework you deny exists. Even Tarski is pretty clear that truth is truth within a given (formal) language under a given interpretation -- never just truth straight-up. (And when model-theoretic semantics is extended as possible-world semantics, you also get 'true at w'.)
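To make "true at w" a bit more concrete, here is a minimal sketch, with all the details invented: a toy interpretation assigns each predicate an extension at each world, and the same sentence comes out true at one world and false at another. Nothing here is Tarski's own apparatus, just the bare shape of relativized truth.

```python
# Toy possible-world interpretation: each world assigns each predicate
# the set of objects satisfying it there. Truth is always truth *at a
# world under an interpretation*, never truth straight-up.

interp = {
    "w1": {"Red": {"ball"}},  # at w1 the ball is red
    "w2": {"Red": set()},     # at w2 nothing is red
}

def true_at(world, predicate, term):
    """Is predicate(term) true at this world under interp?"""
    return term in interp[world].get(predicate, set())

print(true_at("w1", "Red", "ball"))  # True at w1
print(true_at("w2", "Red", "ball"))  # False at w2
```

The sentence "Red(ball)" hasn't changed between the two evaluations; only the world parameter has, which is the whole point of the relativization.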

    It's going to be the same issue for facts, observations, what-have-you. You can follow Quine and plump for holism -- and that means some whole "framework" of some kind, however you make that palatable -- or you can explain how this atomistic approach to truth is at all defensible.
  • How Does Language Map onto the World?
    Animals move around and plants don't move around, although they may be moved by wind, while remaining in the same places.Janus

    This is as good a thing as any to quote because it's the opposite of something you seem to suggest now and then that bothers me a little, roughly that we "piece together" the world out of our various bits or sorts of experience.

    I think the whole ought to have some priority -- even the blooming, buzzing confusion is a whole, within which we make distinctions and so forth.

    My thinking is that when it comes to oysters, say, it's not a matter of acquiring the oyster concept by having the oyster experience, as if that could be perfectly sui generis and then you stick a name on it and add it to a list of things that are part of the world. Rather, I'm thinking that you'll experience oysters as like and unlike other things, in various ways, and make a place for them within the distinctions you already know, but also -- and this is the main point I want to get to -- modifying your total conception of the world by making room for oysters. To find out there is something on the taste gradient between fish and -- I don't know, doesn't matter -- crab or whatever, that alone might be a surprise and change your conception of what else oysters relate to in your experience, because now fish are also on the oyster gradient, and all the others that criss-cross there, texture and smell and look and origin and presentation and how they pair with beverages and which condiments are best and worst, all that foody crap.

    So I want to say you're always remaking the whole world while you acquire new experiences that don't come to you in neat packages, just being exactly what they are, but experienced from the beginning as like and unlike things that already belong to your world. I think you're always working on the individual concept and the whole battery it belongs to, the system it's part of.

    Obviously there should be similar dynamics with the accrual of facts rather than concepts, although the structures at issue will look more narrative.

    On the great big other hand, how all this happens is clearly a matter for proper research, and there's been plenty of neat work done on concept acquisition, so my preference for a decidedly holistic take only counts for so much. Some of what it counts for might be that the piecemeal assembly of the world ought to turn out to be incoherent, and Sellars strolls around this territory sometimes, like a good Kantian. It's batteries and clusters and systems and hierarchies of concepts, never just one at a time, that we deal with, and so some of the back and forth here that treats the experience of oysters as this perfectly self-contained sui generis qualia-in-waiting strikes me as misguided. Unless I'm completely wrong.

    Not, by the way, ascribing the view attacked here to you, but you've said a few things a little like that so I'm just highlighting the issue.
  • How Does Language Map onto the World?
    one neither passively absorbs, nor jointly negotiates the normative practices of that culture, but validates one's own construction of the world using the resources of that cultureJoshs

    The cultural control we see is one which is within the person’s own construct system and it is imposed upon him only in the sense that it limits the kinds of evidence at his disposal. How he handles this evidence is his own affair, and persons manage it in a tremendous variety of ways. — Kelly

    Didn't really respond to this. It's an interesting idea, but I'm not quite sure what "limits the kinds of evidence at his disposal" means. It sounds kinda like cultural determinism through the back door. Maybe it's just put a little too strongly here for my liking, but the focus on validation is itself interesting, since he's making the supposedly neutral pole of worldview construction (evidence, facts) the locus of something like cultural bias.

    And in some sense that has to be right if you define a culture as what nobody contests, what everyone takes for granted -- that means, in effect, what counts as fact for that culture whether they conceptualize it that way or not. Interesting.
  • How Does Language Map onto the World?
    How does each individual respond to their culture inheritance?Joshs

    I think your characterization is pretty good. It's obviously just false that people are locked into the culture, religion, morality, or language they are born into.

    I don't know much of anything about what mechanisms psychologists hypothesize for cultural uptake. What I mean is, we can count on evolution leaving in place dispositions that generate a more or less predictable world model given a particular environment -- can't quite say niche with us because we are very adaptable and there appears to have been considerable selection pressure for adaptability and even evolvability. We generate the models we do because we're designed to generate them given the right sort of input, roughly. Evolution has some ideas about what sort of environment an organism needs to thrive in.

    So how does something like that carry over to culture? Are the mechanisms of cultural uptake a repurposing of our basic model-building gear? I really have no idea.

    Even evolution seems to leave us with only something like very strong tendencies. It makes it easy (both efficient and effective) to build the usual thing because here are the tools you need and the instructions. And obviously culture's hold on us is considerably weaker, and our interaction with it considerably richer, the way we reshape and extend our partial inheritance as we go.
  • How Does Language Map onto the World?
    I'd also note here that 'sensorily' takes the sense organs existing in an environment for granted. What I call the constructive approach seems to want to take an interior as given and construct the exterior from this interior -- but this conception of an interior seems to quietly depend on common sense.plaque flag

    Or at least dependent on how much of the science of organism-and-environment has become common knowledge. (Phenomenalism as a philosophical approach being subtly dependent on knowing that the eye registers 2D images, that sort of thing.)

    One way constructivism is right but misrepresents itself is in presenting the individual as constructing the world all by themselves, kinda from scratch, the mind as a perfect little scientist. It's true, of course, that each individual organism needs to construct their own, in some sense 'private', model of the world (and themselves in it), because that's what brain development just is, but it's not true that each organism constructs the framework they will use to construct the world from scratch. There's an inheritance. A lot of 'choices' have already been made for you (by evolution, and on top of that by culture), so you build your own, sure, but not completely idiosyncratically -- and not incommensurably -- rather using the same inheritance as everyone else for the base level, and as everyone in your culture, your speech community, and so on, for others.

    That gives a pretty clear way of allowing that the world is a construction -- because there are just so many ways in which it obviously is -- but accounting for agreement among people of the same species, the same culture, and so on.

    Ontogeny gets to recapitulate phylogeny rapidly because what used to be endlessly branching little pathways are now high-speed rails. As Hume put it, there are questions Nature has deemed too important to leave to our own fallible and imperfect reason.
  • Nice little roundup of the state of consciousness studies
    But the scientific search for 'what is the mind?' will always be bedevilled by the epistemic split between knower and known, because in the case of mind or consciousness, we are what we are seeking to understand - mind is never an object to us. And I say there's a profound problem of recursion or reflexivity in the endeavour to understand it objectively, given in the Advaitin aphorism, 'the eye cannot see itself, the hand cannot grasp itself.'Wayfarer

    This is a known problem for the individual, that when exercising a bias, say, you are not, and perhaps in some cases cannot be, aware of it. But other individuals can be aware of your biased perspective, as Browning memorably pointed out.

    It's just not obvious that the issue arises for types rather than individuals, and it is only the type that science studies. We all feel the pull of recency bias, of color constancy, all those myriad quirks of the way our minds work, none of which stopped scientists from designing experiments to reveal these quirks. We know that we can essentially eliminate consciousness through the use of general anesthesia, without the entire human race having to fall unconscious to find that out.

    I may not be able to treat my own mind solely as an object -- though I can surely take it also as an object -- but it's not obvious what the barrier is to me treating your mind as an object of my study, and since it is your mind, not mine, I can only take it solely as an object and never as subject. That object is also the subject of your experience, so in studying your mind, I am studying your subjectivity, and thus studying subjectivity itself. Where's the problem?
  • The Argument from Reason
    I don't know the background of the guy you've quoted.javra

    Gregory Bateson
  • The Argument from Reason
    You reply as though I’m pushing you into buying something and you’re not yet prepared to buy itjavra

    Sorry if it came across that way. I just don't understand, that's all.

    So I’ll now ask you in turn for your own perspectivejavra

    Yikes. My views are in flux, even more than usual at the moment.

    Do you find that the basic laws of thought are fixed for everyone today, yesterday, and tomorrow?javra

    As far as I can tell, this means the world has a logical structure, so any belief that violates the usual canons of logic cannot be true.

    I'm going to lean "no". I think the picture presented here is of a static world modeled after the medium-sized dry goods we are used to interacting with. But the universe wasn't always like this, which means insofar as we see order around us it is not eternal but temporary -- it had to emerge and it will go away.

    What's more, our intuitions about things at our scale don't translate well to the much larger or the much smaller. Space itself bends -- what the hell? And I don't have a clue what's going on at quantum scales, but the stuff @Andrew M has tried to explain to me does not match how my keys and my breakfast cereal and my books behave.

    So I'm inclined to think these laws of thought -- a phrase I really detest -- are an approximation, in almost exactly the way that Newtonian mechanics is an approximation, and even that only holds because of where and when we live.

    That's not to deny its utility at all, but its utility is the point, hence my leaning harder all the time toward pragmatism.

    The twisty journey that all must take from lumpen realism, to the body shock of idealism, to the eventual resolution of enactivism and pragmatism.apokrisis

    That's how I read Hume, and it would be true of me except I've never found idealism appealing so there are things about the predictive modeling view that wouldn't bother an idealist but freak me out a bit, as I'm getting it all at once.

    If not, on what coherent grounds do you find that reasoning and logic can serve as means for discerning what is real?javra

    As above, because that's all they're for, at least as predictive approximations.

    You seem to accept the thrust of the argument from reason, that if materialism is true then there really isn't anything you'd call logic. I think there's no real argument presented at all, but there is something like a conflict of definitions. This discussion has at least forced me to consider which side of the fence I want to be on, and if I have to give up the eternal truths of logic to stick with natural science, then so be it.

    I've taken the opportunity to choose, but I don't think it's actually forced by the argument. Denying that the universe comes with a built-in logical structure we discover does not require us to deny that systems we have ourselves constructed can have the logical structure we give them. To say otherwise looks like a simple genetic fallacy to me. The logic I "find" in the world is an approximation I make; the logic in a mathematical proof or a computer program or a game of chess is a fact, because we made it so. Is there some other sense in which the logic we imbue these artifacts with is eternal and unchanging? If so, it's something different from what we've been talking about.
  • The Argument from Reason
    We can start in the middle of thingsapokrisis

    Hey that part I understand and, for what it's worth, agree.

    the logical inferences of materialists when it comes to their metaphysics result in the conclusion that all logical inferences are relative - such that one might as well declare that "to each their own equally valid logic and reasoning".javra

    Wait, really? I thought the relativism at stake was relative to the ordering of nature, but you meant relative to the individual reasoner as a bit of ordered matter? Is materialism really committed to that sort of simplistic perspectivism? Why can't materialism call on the laws known to hold in this material universe, logical and natural, and leave it at that? This universe is logical, and so logical inference is appropriate here -- no eternity needed. Or is the materialist unable, in your view, to recognize that, say, the law of non-contradiction holds in our world?

    I'm missing something. Apologies if I'm just misreading you.
  • The Argument from Reason
    Can you better explain what you mean by "immaterial entities" in this context?javra

    Just a placeholder for "not reducible to matter", since that's the other thing. I think I've got it now with the distinction between numbers and angels, although I do wonder why the problem with angels is that they're not physical and the problem with numbers is nothing at all.

    how can materialism and physicalism uphold their own rational validity when their rational validity is (for reasons so far discussed) undermined by the very metaphysical stance they maintain?javra

    And naturalism gets around this, on your view, by countenancing laws of thought as "natural, though immaterial, givens"; that is, you get to rely on logical inference and the materialist does not. Is that your position? I mean, that seems like cheating, like the Russell line about "the advantages of the method of positing."

    Whatever I end up thinking naturalism amounts to, I don't think that'll be it.
  • The Argument from Reason
    What it boils down to is the logical principle that whatever doesn't self-contradict is free to be the case.apokrisis

    If I'm following this, one point is that logic (at least in these sorts of discussions) is often conceived primarily as a constraint -- contradiction is forbidden, non-identity is forbidden, and so on -- but that's clearly not the whole story, because you need some generative principle as well. (Hence tychism?)

    But much of what you write is about how constraints themselves are generated, rather than simply being given, and this is where symmetry breaking comes in, yes?

    It would certainly be more satisfying to have a story in which a single process gives rise to the constraints on its continued operation. Without such a story, you in effect imagine the universe to exist within a bigger universe in which there are already certain rules in place -- the rules of universe creation, these laws of thought -- and you simply decline to explain that one. You would face a similar problem if anything simpler and more general than your story were conceivable -- but you knew that going in and have aimed at maximal simplicity and generality.

    Do I have any of that right?
  • The 'Self' as Subject and Object: How Important is This In Understanding Identity and 'Reality'?
    I only know of thinking as something of which I do, the negation of which is impossibleMww

    Heh. My poster disagrees with you:

    Sometimes I sits and thinks. Sometimes I just sits.

    At any rate, you don't really mean it's inconceivable that you are not thinking; you mean it's impossible for you to think, "I am not thinking" -- well, you can think it, but it's necessarily false and a performative contradiction.

    Now it's curious that there's one sort of event that licenses contradiction: the death of a person. People will speak of the body of the dead person as they did when he was alive, "He looks so peaceful," that sort of thing. I'm not saying that's a contradiction. But the same person might say, if there had been a long illness, that the man she married was gone long ago. People don't mind switching between identifying the personality and the body as the person. They might even say "He's in a better place now" suggesting his real self is his soul -- and say that right after saying he looks peaceful!

    What's the point of all this? That we have confused intuitions about the self? Indeed. But they all have to do with life. Our confusion arises because of the transition from living to nonliving; that which was never living poses no challenge at all to our intuitions -- there's just no self where there's never been life.
  • The Argument from Reason
    What's relevant to a law of thought's occurrence is not our conceptual grasp of it as such but that it ontically occurs. It is only in this manner that laws of thought can be discovered - rather then invented - by us.javra

    I thought that might be what you're saying. That makes such a law a fact about the universe (if I understand "ontic occurrence" as you intended). There are two questions that naturally arise:

    (1) What is the real difference between such a law and other natural laws, such as the laws of thermodynamics?

    (2) How can we tell whether such a law happens to hold in our universe, or whether it must hold? What would make it necessary, and how could we know?

    (Okay the second one's two questions. My bad.)

    Naturalism, on the other hand, specifies that all which does and can occur is that which is natural - thereby nature at large - this in contrast to that which is deemed to not be natural (again, for example, angels, deities, forest fairies, etc.).javra

    Huh. For discussion I'll go with it -- especially since all I mean by "naturalism" is, roughly, "amenable to scientific investigation," and that's not much of a definition either. My ersatz definition is essentially an exclusion of magic, behavior that is inherently unlawlike and thus incomprehensible to science. Your version of naturalism countenances immaterial entities so long as -- what exactly? They are not traditionally identified as supernatural?
  • The Argument from Reason
    Considering the history of whaling, it's a wonder they don't also fuck with humans.Janus

    Remember Crocodile Dundee and the kangaroo shooting back at the hunters? Love that.

    global constraints on what is and can bejavra

    I'm not sure we can reach quite that far. There may be a halfway point, a sort of anthropic principle -- if that's the right word for this kind of selection bias -- that I gestured at above, what sort of universe could be intelligible to creatures somewhat like us.

    We're pretty far afield here, but I want to mention another way of approaching this, instead of pondering the status of possible constraints on the physics of a possible universe.

    We have good reason to believe infants acquire the concept of object permanence before the concept of object identity. Think about that for a moment. That means it is possible for a creature to live in a world in which, so far as they can tell, ducks sometimes turn into trucks, but they never simply disappear. What's going on there? At some point -- I'm not sure how old -- they would no longer accept the possibility of such a transformation, but it appears there is a time when they do. Can they reason yet? Hard to say, but they express surprise when there's no object where they expect one, so the predictive machinery is certainly running already, it just doesn't need object identity to get going.

    In effect, small infants live in a different world from us, with different or perhaps only fewer laws of thought. They transition to ours, mostly. Are they discovering more about how our universe works, about how any universe must? Maybe. Are they making richer and more rewarding predictions about their environment? Certainly. But they did live in that other world first, as we all did.

    Is that world devoid of reason because the law of identity is absent? Maybe. Must we say so? What would be the point of saying so? There is a class of predictions the infant does not make that we do, and they are the poorer for it, presumably. We can say that from our position, having successfully relied on the law of identity; we know, that is, what they're missing out on.

    But do we know what we are missing out on? Is it impossible that there are other laws of thought of whose operation we remain ignorant? That to some alien race we might appear like infants unable to conceptualize the simplest facts about our universe?

    Still working at your last few posts, @javra. Might help me make sense of them if you compared your use of the terms "materialism" and "naturalism". (I've never been very comfortable arguing the merits of isms, hence my reliance on whales and infants and play-writing hominids.)
  • The Argument from Reason
    Notice this rhetorical sleight-of-hand which re-frames necessary truths as contingent.Wayfarer

    That implies intent to deceive or mislead, which I assure you was not present.

    Look, I'm between the devil and the deep blue sea here. I've been on almost exactly the other side of this argument, right here on this forum, many times. I have defended philosophy against the encroachment of psychology. I had a long argument with @Metaphysician Undercover over the necessity of object permanence to counting!

    I am not *comfortable* allowing logic itself to be something like a fact of our universe -- maybe it is something more like a necessity for any universe, or at least for any intelligible universe.

    The reasons for my decision here are several: I have never found an account of the status of logic or mathematics I like, never, and it comes up over and over again; I'm not sure we have much to show for defending our turf against psychology, which seems to have been making more than a little progress without our help; something in @javra's phrasing really crystallized the choice for me, a heaven of eternal logic versus naturalism; and finally I've been reading William James, whose approach to pragmatism really does feel informed by his work in physiology and psychology. James was famously open to the supernatural, to religion and spirituality, even to the paranormal, so his pragmatism is not a matter of dogmatic anti-supernaturalism, but his starting point is always life.

    So I think maybe I'm ready to give up the idea of necessary truths. But maybe I'm not, we'll see. Quine waffled on this very issue for decades, with a set of commitments and inclinations similar to mine. It's hard.
  • The Argument from Reason
    we - here, in the world we inhabit - could only fathom any such alternative world only if it were to abide by the law of identity, and then other laws of thought that could be argued derivatives of this onejavra

    Oh yeah, that's a mess. Hmmm.

    If you have further thoughts, do post, and I'll try to give better responses later.
  • The Argument from Reason
    all principles of logic/reasoning are, when ontologically addressed, a relativistic free for all—this relativity existing in relation to the order of underlying material constituents from which these principles of thought emerge—a relativism that, again, is thereby devoid of any impartial, existentially fixed standards (in the form of principles or laws) by which all variants of logic/reasoning manifestjavra

    It's a very good question, and I thank you for it.

    As I read your response more closely (which I shouldn't be doing since I'm at work!), it seems the question cashes out like this: would logic be the same, and thus the rules of valid inference, even if nature were very different?

    That's a very difficult hypothetical, but I am inclined to say no. I think we think the way we do, and find success thinking the way we do, because nature is the way it is. We do think of logic as being above natural law, as being prior to it, but in a universe that behaved very differently than this one, if there could even be creatures like us to speculate, insisting upon the logic that works in this universe would look foolish, and nothing like the high road to truth.

    That's to say, what counts as logic for us presumes a universe in which that version of logic is reasonable, is successful, does tend to lead to truth.

    So I'll put my chips on what seems to me a naturalist and pragmatist view, and find some way to fight off the threat of nihilism.
  • The Argument from Reason
    logic/reasoningjavra

    Short answer is that I wouldn't write these with a slash between them. Logic is a system of relations among propositions; reasoning is something people do, and they can do it well ("logically") or poorly ("illogically").

    @apokrisis would have me say that even logic is just habitual, patterns of inference that have proved their worth, but he's got a whole metaphysics that makes that the natural move, and I'm not there yet.

    So I don't think insisting that reasoning is something living creatures do requires me to reduce logic itself to biology.
  • The 'Self' as Subject and Object: How Important is This In Understanding Identity and 'Reality'?


    That's along the lines I was thinking. In particular:



    How is the statement "This thinking belongs to me" like or unlike the statement "This breathing belongs to me"?

    We only know of thinking as something organisms do. I understand that the intent here is to set such particulars aside, to consider only what is essential to the concept of thinking per se. I can see the value in that, but how do you know you have excluded all and only the right sorts of things? Why is the body excluded? Why is life?

    Starting from nothing, with no preconceptions, how would you even come up with the category of 'thinking' as something to investigate, without the examples of living organisms that think before you, without yourself being one? Would a disembodied mind 'living' on a lifeless rocky planet compose treatises about breathing and metabolism and reproduction? How?
  • Simplisticators and complicators


    Hmmm. Chess is closer to mathematics, and the beauty in it is similar. One of Capablanca's little combinations is like a neat proof -- here is both the evidence that you can't play that move and an explanation for why you can't, and you should now understand where you went wrong. There's not much room for argument with a proof -- not as regards the result, though there are aesthetic considerations and other things.

    I'm not sure the same sort of thing is really available to philosophy, or to other sorts of debate, because the outcome is never so clear. I'm trying to think who might be the philosophical equivalent of Capablanca and no one comes to mind really.

    And we're still on topic, because the problem is achieving that clarity and simplicity that's so characteristic of Capablanca. Hard to do that with philosophy. I think you might find something similar in wisdom traditions and in religion. Confucius has a clarity and a directness that is reminiscent of great chess players, and when he reminds his students of something fundamental they generally accept that he has said everything that needed to be said. Other religious teachers can achieve something similar. But in our tradition? I don't know. People who express themselves with the certainty of a Capablanca in this context tend to be a little scary. We have reasons for our nuances and complications.