Interesting that you're from a linguistics background. I'm curious to know what you think about the adequacy and/or sufficiency of the three principles proposed for successful communication/interpretation. — creativesoul
I'm not really from a linguistics background; I just come at the issue from a linguistic perspective. I do have a university degree, but it's in sociology, and whatever formal education in linguistics I have, I acquired in the context of a sociology degree (it's more complicated than that because of the way university studies were organised, but that's close enough). It's just that after graduating, I never did anything with my degree, and I kept up a sporadic interest in linguistics on my own.
About the three principles: I think they're all trivially true, but what's important is how you use them in a model of language, and I haven't quite yet figured out Davidson's model (and I probably won't just from this one article). He uses "first meaning", and I'm not quite sure what that means, so that's an additional difficulty I have.
When you start out studying any of the humanities, one thing you learn pretty quickly is that the terms probably don't mean what you think they mean, and that different people use them differently, so knowing roughly what sort of theoretical background to expect helps you a lot in understanding a text. That's why it matters to me that I'm not very knowledgeable about the philosophy of language. I have all the caution but none of the background when it comes to interpreting the text.
I'll have to run through the principles with what I think of as "lexical meaning", instead of Davidson's "first meaning". I think that's not quite it, but it should come close enough for the purpose here. The "lexical meaning" of a word is just the meaning it has outside of any context. One can think of it as a dictionary in the mind.
So, yes, "lexical meaning" is systematic. For example, an apple is a type of fruit, but a fruit is not a type of apple. The hierarchy involved here is an example of the systematicity we're talking about.
And, yes, "lexical meaning" is shared, as is apparent when I ask you for an apple and you give me one.
And, yes, you have to learn a language before you can use it. And what you learn are conventions. This is actually the most complex topic. In anthropology, colour terms are the go-to example, because it's easy to see that different languages divide the spectrum differently. (Early linguistics is quite bound up with anthropology.)
But that's all pretty trivial. It depends on what you do with that in a language model, and the assumptions you make about what a language is can differ wildly. So when Davidson says "Probably no one doubts that there are difficulties with these conditions," I agree, but which difficulties you run into varies by the model you use. Sure, language is systematic, but how systematic? Sure, a language is shared, but what does sharing a language look like in practice? Sure, a language is conventional, but how much do those conventions enable/restrict your language use?
An easy example: if you study linguistics, you'll hear early on that the relationship between the sign and its meaning is arbitrary, but then you'll immediately be told that onomatopoetic expressions might be an exception. Are they? There's clearly still a level of arbitrariness, because, say, animal sounds are usually linguistic imitations of the real thing, but they still differ by culture. I think that's where the difference between philosophy of language and linguistics comes in. Philosophers tend to be interested in such topics in their own right, while linguists tend to be interested in them when they become problematic for their theories and research.
So when Davidson concludes that there is no language because of malapropism, I'll first have to figure out what he expected. It's entirely counter-intuitive for me: there are language conventions, but unconventional language use doesn't automatically preclude understanding. For example, if a non-native speaker were to say "I hungy," you might still understand that he's hungry, even though he leaves out the verb and forgets an "r". So to claim that a language is largely conventional is not to claim that if you deviate from those conventions, you can't be understood. We're not computers that return a syntax error for a simple typo. (And this is where I might inject that programming languages are more systematic than natural languages. That shouldn't be a surprise, but it is something you should consider when interpreting principle 1 within a theory.)
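Since I brought up the comparison: a quick toy sketch (my own example, nothing from Davidson's paper) of just how unforgiving a programming language is about its conventions, compared to a hearer who copes with "I hungy" without blinking.

```python
# Toy contrast between natural-language tolerance and programming-language strictness.
# A human reader recovers the intent of the second string; Python rejects it outright.
good = "print('I am hungry')"
typo = "print('I am hungy)"    # dropped characters, analogous to "I hungy"

for source in (good, typo):
    try:
        exec(source)           # either runs, or fails with a syntax error
    except SyntaxError as error:
        print("syntax error:", error.msg)
```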
So, for example, Davidson says this:
Ambiguity is an example: often the ‘same’ word has more than one semantic role, and so the interpretation of utterances in which it occurs is not uniquely fixed by the features of the interpreter’s competence so far mentioned.
Here's where I'd just look at what I have: a model that I try to get as close to the real thing as I can. So when I notice that there's ambiguity, I'd look at how we typically resolve ambiguities and add that to the model. Semantic field theory, for example, helps a lot. "I took the money to the bank," includes two nouns, "money" and "bank", and because they're thematically related (part of the same "semantic field"), the financial reading suggests itself and the sentence hardly feels ambiguous at all. "The pirate buried the money near the bank," on the other hand, feels more ambiguous, even though we still have "money" and "bank" - because "pirate" and "buried" suggest a river bank as a very real possibility. Beyond semantic field theory, common sense would tell me that a pirate isn't likely to bury money near an institution that deals with cash. But once I have to consult common sense to resolve an ambiguity, I'm already aware of it. There's been a disfluency in interpretation. I have a model that would likely lead to misunderstandings, but that's no problem because, well, in real life there are misunderstandings. I don't need a model of language that's more systematic than the real thing. I don't need a model that's completely shared. I don't need a model that's totally formed and restricted by convention. Because the real thing isn't like that either.
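To make the semantic-field idea a bit more concrete, here's a toy sketch of the kind of mechanism I have in mind; the cue-word lists are invented for illustration, and real semantic field theory is of course much richer than counting overlaps.

```python
# Toy disambiguation of "bank" by counting context words from two invented semantic fields.
SEMANTIC_FIELDS = {
    "finance": {"money", "deposit", "loan", "teller", "cash"},
    "river":   {"pirate", "buried", "shore", "water", "boat"},
}

def field_scores(sentence):
    """More overlap with a field = a stronger pull on how 'bank' gets read."""
    words = set(sentence.lower().replace(".", "").split())
    return {field: len(words & cues) for field, cues in SEMANTIC_FIELDS.items()}

print(field_scores("I took the money to the bank"))               # {'finance': 1, 'river': 0}
print(field_scores("The pirate buried the money near the bank"))  # {'finance': 1, 'river': 2}
```

The second sentence gets a pull from both fields, which is roughly what the felt ambiguity looks like in this picture.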
The interesting line here is "uniquely fixed by the features of the interpreter's competence". At that point, I'm guessing that he thinks there's a distinct thing like "linguistic competence", as opposed to a more general competence. So later he says:
Interpreters certainly can make these distinctions. But part of the burden of this paper is that much that they can do ought not to count as part of their basic linguistic competence.
If I compare that to my intuition, I'd say he's got a much narrower and more specific idea of what a "linguistic competence" is than I have. As a result I have to be careful not to impose what I think on his text. It's a question of phrasing. So by the time he ends with:
In linguistic communication nothing corresponds to a linguistic competence as often described: that is, as summarized by principles (1)–(3). The solution is to give up the principles.
I'm careful. I still don't quite know what he means by this, or what he expected language to be like. But I connect it to the rise of a couple of linguistic theories from around the mid-eighties to the early nineties (cognitive grammar, construction grammar, functional grammar), many of which were designed in opposition to Chomsky's Universal Grammar program (where there's a deep structure that all people share, and transformation rules generate the surface structures). So he may have just given up on some sort of "linguistic competence", a feature of a person's mind (?), that I never believed in to begin with, so what I would have thought of when reading about those three principles would have been pretty different anyway. For example, it doesn't make sense to me that we'd switch off the cognitive faculties that aren't directly involved with language when speaking, and I certainly don't see a need to integrate functions into a "linguistic faculty" that other cognitive tools handle pretty well already. There's some sort of specialisation going on (and some of it is physically brain-related, as Broca's or Wernicke's aphasia shows), and acquiring your first language seems to be easier and more formative than later language acquisition. But it's still not clear to me how much of language cognition is specialised. If there are two positions that say "much of it" and "little of it", I'm more inclined towards the little-of-it end of the spectrum.
So my intuition is that the three principles hold up pretty well, but it's definitely possible to ask too much of them, and I think Davidson might have realised he asked too much of them. The question I have is: is that so, and if yes, what did he expect a "linguistic competence" to do all on its own?
I left university in the early 2000s, so I'm almost completely out of the loop and have been for a while. Computational linguistics and neurolinguistics should have had some interesting results since that time, I would suspect, but I know little about any of that. If I did, maybe the post would have turned out even longer.