On your first post: here is the first part of an argument against people who disagree each having their own model.
Before we get to that, I want to fill out this:
we may have no choice but to give an account of how (a) the model I use, (b) the sentences I utter, and (c) the occasions upon which I utter them, are related. There are multiple possible explanations for the utterance of a sentence not in the model. — Srap Tasmaner
What I had in mind was this: it is plausible that any individual has only imperfect access to the model of the world they've been working on, and they only imperfectly "translate" it into an utterance. In the case at hand, there are at least these possibilities:
- I may know perfectly well what color Pat's house is, but forgot for the moment, or misremembered;
- I may have only known that Pat's house is the same color as Joe's, which I know to be white, but have failed to make the inference that Pat's is white;
- I may not have recognized that this is an occasion for using my Pat's-house knowledge -- maybe I misheard "Pat's" as "Srap's";
- I may have simply misspoken, perhaps because I was just a moment ago thinking of something black and was primed to say "black" instead of "white".
In addition, if we presume the two speakers share a model, it's reasonable to expect they would actually only be familiar with non-identical proper subsets of the community-wide model. I may know that Pat's house is white, and that his front door is white, and assume that the back door is likewise white, while those who have seen it know it to be grey; I possess slightly less knowledge of Pat's house than some do, but I can readily extend my acquaintance with the shared model by being informed or by seeing the back door for myself.
That adds at least one more way for two speakers who share a single model to disagree: one of them knows, while the other assumed, or guessed, or made a valid but unsound inference, etc., because he wasn't familiar with a part of their shared model that the other is. (Or maybe neither of them actually knows and they're both bullshitting.)
On your account, what people say is presented as a perfect reflection of the model they are using, and that's tantamount to simply identifying the model with what they say.
On my account, differences in what we say are inconclusive evidence that we have different models. There may be other reasons (as above) why on this occasion we didn't end up saying the same thing. And this is so because, differences aside, what any one person says is an imperfect reflection of the model they use.
If that's so, it's hard right off to say whether an occasion of disagreement indicates two models or one in use. You've presented -- at least, along the way somewhere else -- the argument for there being one. That was also more or less
@fdrake's reading of Davidson, in part. I'll have to think a while about what, in my test-bed here, multiple models would look like and whether we can tell the difference between that and a single one.

I should probably say this more clearly: above I suggested there is a community-constructed model that is something like the union of all the models actually in use by individuals, each of whom is familiar with only a proper subset of that union. I'm inclined to consider that another access issue and say that individuals familiar with different subsets of a single model still share just one, but I'd be open to arguments that these should be considered different, if consistent, models. I'm not sure it much matters what you say here.
And then there's your main point, that the argument for no models runs through a single shared model just being unnecessary, that the only conceivable use for the model talk in the first place was if competing models were in the offing. If we all have the same one, we don't need that one and can just all have the same nothing.
I'm inclined to pause here and wonder whether the model, even if singular, is doing work that just the raw corpus of utterances can't. For instance, I can say that I deviated in speech from my model because of a priming effect, or misremembering, or misunderstanding the context. ("Oh
Pat's house. Yeah, it's white.") What does the no-models account say? Most of the time I say one thing, but on this occasion I say something else, and --- and what? Why did I deviate? It seems to me the idea of a model gives you at least a start on dispositions to speak in certain ways, dispositions that are not absolute guarantees. But on the no-models view, I just say stuff, and what I "believe" is represented by whatever I said most recently or whatever I say most commonly, or who knows what.
And perhaps now that I've dropped the B-word, we should look a little again at what the word "model" was doing for me. It is frankly representational -- I don't know how else to take "model." If we do develop such models of the world, and happen to use language as a medium for doing so -- no doubt because of its considerable efficiency and portability compared to other media -- then, while language is the
medium of the model, I need not use it only for producing speech. It can be simply how I store a considerable portion of my knowledge, and my knowledge I can rely on in doing
many more things than speaking. I can also use it to store hypotheses, possible but uncertain extensions of my knowledge, which I can act on to confirm or disconfirm, and so on.
If there is no model, but only my speech behavior, then to do any of these things in which I rely on my linguistic knowledge, I must, presumably, speak to myself about them. Now I talk to myself
a lot, but I don't have to form the sentence "Cheyenne is the capital of Wyoming," much less speak it, even
sotto voce, to remember that it is. Do we perhaps engage in silent and unconscious speech in order to retrieve the facts we know?
That begins to look a bit like a "language of thought," which, oddly, is where my use of language as model medium seems to be headed. It's natural to talk about at least some of our knowledge being stored linguistically only because so much of it is acquired linguistically or is intrinsically linguistic. "Cheyenne" and "Wyoming" are after all names, related in certain ways, which, in this case, are in part purely matters of convention and thus linguistic. My knowledge that Cheyenne is the capital of Wyoming has no option but to be a bit of linguistic knowledge.
But the issue that arises next is obvious: I have considerable knowledge of my native language which I rely on in order to speak it. If that knowledge is not stored linguistically, how can I possibly speak my native language? How could I ever "whisper" to myself, even unconsciously, that Cheyenne is the capital of Wyoming, if I cannot call on my knowledge of English to do that, because I cannot conceivably remind myself linguistically how to speak?
Some of those arguments may not be very good, I dunno. I'm not sure where we are now, but at least there's something in the neighborhood of an argument for my initial assumption, that we use language as a medium with which to build a model of the world, which was unargued for to start with.
I hope we're not quite there yet, but if we are at the point where none of the not-really-disagreeing explanations work, then we may be forced to say that one of our two speakers has said something false, although at the moment we don't know which one. There are worse solutions than, as both
@Banno and Herodotus said, going and looking for yourself. As things are in my little test-bed, the model is in part a matter of convenience, and I'm still in a position to compare it directly to what it is a model of -- I can test at least some of it in the most direct way imaginable.
This is already covering a lot of ground, so I'll stop, but there ought to be more on what's happened here, whether I had an idiosyncratic and inaccurate model, and so on. But it looks like it's getting much harder from here on, so I wonder if we can take a step back and simplify things again.