Comments

  • Disproving game theory.


    Sorry for being dense, but I don't understand what property you're referring to in your last post. It seems to be talking about conditions for a game not to be worth playing, but I'm afraid I just didn't get it.
  • Disproving game theory.


    What is the reference of "this"?
  • Disproving game theory.


    If there is a deterministic game of perfect information with a winning strategy for one of the players, then, a fortiori, there is a deterministic game with a winning strategy for one of the players. So I don't understand your point.
  • Disproving game theory.


    Well, it is wrong if it is implying that no deterministic game of perfect information can have a winning strategy for one of the players; indeed, I just gave you a counter-example (Hex). You seem to be supposing that (e.g.) the second player can, by rote analysis, always find a counter-move to any move by the first player, but this is highly non-trivial and, in fact, false, as the example of Hex illustrates. The first player may have a strategy that either (i) forces the second player to make a series of moves or (ii) makes the second player's moves irrelevant.
  • Disproving game theory.
    For what it's worth, Hex is a game with a winning strategy for the first player, but which is both mathematically interesting (apparently the existence of the strategy is equivalent to Brouwer's fixed point theorem) and fun to play with your Uncle Fred.
  • Mathematics is 75% Invented, 25% Discovered


    The definition of exponentiation provided is that of cardinal exponentiation, in that it takes two cardinals and gives back a cardinal. So it must indeed work on the set-theoretic structure of the von Neumann representatives, though, as mentioned, it only uses the fact that the representatives in question have a given cardinality (compare with ordinal exponentiation, in which we also want to take into consideration the underlying ordering)---that is, it uses the fact that 2 has cardinality 2.
  • Mathematics is 75% Invented, 25% Discovered


    Notice that the definition is working with the set-theoretical representatives of the natural numbers, i.e. the finite von Neumann ordinals, in which each ordinal is the set of its predecessors (so 0 is the empty set, 1 is the singleton of the empty set, 2 is the set consisting of the empty set and the singleton of the empty set, etc.). In this context, (cardinal) exponentiation is defined in that way, as the cardinality of the set of functions from one set to the other. (The parenthetical aside is only meant to mark that it is also possible to define ordinal exponentiation, but this generally faces additional complications, since ordinals generally have more structure than cardinals and we want to preserve some of that structure when defining arithmetical operations.)
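    For finite sets, the definition can be checked by brute force: just enumerate the functions from one set to the other and count them. A small sketch in Python (the helper name `functions` is mine):

```python
from itertools import product

def functions(dom, cod):
    """All functions from dom to cod, each represented as a dict
    mapping arguments to values."""
    dom, cod = list(dom), list(cod)
    return [dict(zip(dom, values)) for values in product(cod, repeat=len(dom))]

# Cardinal exponentiation: |B|^|A| is the number of functions from A to B.
A = {0, 1, 2}   # a 3-element set
B = {0, 1}      # a 2-element set
assert len(functions(A, B)) == 2 ** 3   # 2^3 = 8 functions
```

    Note that the definition only looks at how many elements the sets have, never at their internal structure, which is exactly the point about cardinal (as opposed to ordinal) exponentiation.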
  • Realism and anti-realism


    I haven't read Scruton, but my bet would be that he is referring to Dummett's well-known (and controversial) anti-realist arguments that take as basis a verificationist account of meaning. Part of Dummett's strategy consisted in offering a new definition of realism and anti-realism that tied these positions more closely to linguistic meaning. In "Realism" (which you can find in Truth and Other Enigmas), for instance, Dummett defines realism and anti-realism (about certain phenomena) as follows:

    Realism I characterise as the belief that statements of the disputed class possess an objective truth-value, independently of our means of knowing it: they are true or false in virtue of a reality existing independently of us. The anti-realist opposes to this the view that statements of the disputed class are to be understood only by reference to the sort of thing which we can count as evidence for a statement of that class. That is, the realist holds that the meanings of statements of the disputed class are not directly tied to the kind of evidence for them that we can have, but consist in the manner of their determination as true or false by states of affairs whose existence is not dependent on our possession of evidence for them. The anti-realist insists, on the contrary, that the meanings of these statements are tied directly to what we count as evidence for them, in such a way that a statement of the disputed class, if true at all, can be true only in virtue of something we could know and which we should count as evidence for its truth. The dispute concerns the notion of truth appropriate for statements of the disputed class; and this means that it is a dispute concerning the kind of meaning which these statements have. (p. 146)

    As is clear, Dummett thinks that the realist and the anti-realist are committed to giving a different semantics for the disputed class of statements. Whereas the realist will in general be comfortable with a truth-theoretic semantics, i.e. one that takes the meaning of a sentence to be encapsulated in its truth-conditions, the anti-realist will push towards a verification semantics, one that takes the meaning of a sentence to be given by its verification conditions. One example, dear to Dummett, is the mathematical case. A realist about mathematics will in general adopt a truth-theoretic semantics regarding mathematical statements, one that most likely does not differ from our usual semantics for natural languages. In this semantics, a name's semantic value (i.e. the semantic value of "pi") will be an object (the number pi), a predicate will refer to an extension, a sentence to a truth value, etc. An anti-realist, on the other hand, will, according to Dummett, adopt a constructivist or intuitionist semantics: a name's semantic value is a procedure for constructing the object in question, a predicate's semantic value will be a procedure for deciding whether or not something falls in its extension, a sentence's semantic value will be a proof, etc.
  • You can do with numbers everything that you can do with sets, and the other way around


    In the remarks you quote, I had in mind transitive models. My bad for not making that clearer.
  • What are Numbers?


    If you consider NBG as a two-sorted theory, and take the lower case variables to range over sets, then there's no need to revise my statement, since it employed only lower case variables...
  • You can do with numbers everything that you can do with sets, and the other way around


    Let's recap the discussion. I mentioned that the universes of ZF-Inf are all infinite, and remarked that this easily followed from the Power Set axiom. You then replied that "for any theory, and for any cardinality, there is a model of the theory such that the universe of the model has a member of that cardinality." I was puzzled by this and offered two interpretations: (i) on one interpretation, you were mentioning the fact that every theory with an infinite model has a model in any cardinality or (ii) you were mentioning the fact that, for any model M, it is possible to construct another model M* whose domain is composed of whatever you want. Now, (ii) was irrelevant to my remark, because I was talking about the size of the model, not the size of its members. And (i) forgets the fact that some theories have only finite models (say T implies "there is a y such that for every x, x=y").
  • What are Numbers?


    For more on the limitation of size idea, I strongly recommend reading the summary in Incurvati's book. Still, here are two ways of making the idea more precise: (i) A collection A is too big iff there is an injection from the class of all ordinals into A (Cantor); (ii) A collection A is too big iff it is in bijection with the collection determined by the formula x=x (von Neumann).

    On NBG, I thought it proved that there is no set of all ordinals? If so, then it does not prove the negation of your formula, since typically we would employ (à la Bernays) two sorts of variables, one for classes, one for sets, and lower case letters would correspond to sets... or else we could explicitly have two predicates, say M(x) for x is a set and C(x) for x is a class, and then adapt the formula.
  • You can do with numbers everything that you can do with sets, and the other way around


    Happily, public universities are free in Brazil, so it is possible here to obtain higher education without incurring large debt!



    I'm not sure what your point is. Can you clarify?
  • What are Numbers?
    Here's an attempt at making this terminological situation more precise. Perhaps we should distinguish two senses of proper class. In the absolute sense, a proper class is a collection that is not a set according to your favorite conception of what a set is (for a recent survey of such conceptions, cf. Luca Incurvati's Conceptions of Set and the Foundations of Mathematics). For instance, suppose you adhere to something like the limitation of size conception, according to which sets are collections that are not too big (say, are not the size of the universe). Then a proper class is a collection which is too big (i.e. in bijection with the universe of sets). In this sense, the natural numbers are not usually considered a proper class, unless, of course, one is a strict finitist.

    But there is also a relative sense. Perhaps we can say that a collection is a proper class relative to some theory T if: (i) there is a predicate P such that x belongs to the class iff P(x), (ii) T proves that there is an x such that P(x), but (iii) T proves that there is no y such that, for all x, x belongs to y iff P(x). For example, in ZFC, the class of all sets is a proper class, since x=x is a predicate satisfied by every set, but ZFC proves that there is no y such that, for all x, x belongs to y iff x=x. Whether a collection is a proper class, then, would be relative to the strength of the theory. Perhaps a weaker theory (say, Kripke-Platek) would consider a proper class something that some other theory (say, ZFC) would consider a set.

    Unfortunately, it seems that even in this relative sense, the natural numbers are not a proper class relative to PA. Now, there seems to be some confusion here about whether to take first-order or second-order PA. Notice that, when people talk about PA, they generally mean first-order PA. In any case, that's the theory that is bi-interpretable with ZF-Inf (to save me from typing ZF-Inf+~Inf all the time, let's just adopt the convention that ZF-Inf means the latter. I'll rarely make mention of this theory anyway, so no confusion will ensue). Moreover, second-order PA can't be at issue, because second-order PA proves that there is an infinite set, namely the set of all natural numbers, so the natural numbers cannot be a proper class relative to this theory. What about first-order PA?

    Here, the problem is that there is no predicate N expressible in PA such that N(x) holds iff x is a (standard) natural number. For suppose there were. Add a constant c to the language and consider the set of all formulas A_n := "n < c", one for each natural number n. Let PA* be PA + {A_n : n in N} + N(c). This theory is obviously finitely satisfiable. So, by compactness, it is satisfiable, whence there is a model M in which c satisfies N and yet is greater than every standard natural number. This contradicts the assumption that N holds exactly of the natural numbers. So there can be no such predicate.
  • Truth


    You're welcome! As for my own stance, well, this is completely unrelated to the thread at hand, and is rather complicated. I think truth-theoretical semantics is too far removed from linguistic research; in particular, it is too far removed from what people like Chomsky are doing, so that there is little to no syntax-semantics interface (Davidson has some half-hearted remarks in this regard in "Semantics for Natural Languages", but even those remarks show that he was not interested in this kind of problem). Moreover, I have no behavioristic aversion to intensional idioms, so I see no reason not to employ propositions, meanings, etc., as needed. So I tend to think a more faithful model for natural language semantics is given by something like Montague Grammar.
  • Truth


    Yes, I think so.
  • Truth


    The end result of the derivation is not a proposition, it is a sentence stating a truth-condition. In the case of "I am hungry", we have (simplifying) "I am hungry" is true iff the speaker is hungry. There is no mention of "propositions" in the theory. Obviously, if you are a fan of propositions, you can then use the above derivation and, given certain assumptions, obtain something like '"I am hungry" expresses the proposition that the speaker is hungry' or whatever. But the point is that if you are not a fan of propositions, you don't need to take that extra step.
  • Truth


    I'm not sure I understand your point. The axioms are meant to be interpretive, that is, they are meant to reflect the real understanding that speakers have of their language. So it relates to "real life" by stating the (actual) conditions under which certain linguistic items refer to objects (in the case of referring expressions) or are true of an object (in the case of predicates).
  • Truth


    I'm not sure what your doubt is. Some semantical theories make use of propositions in stating the meanings of sentences. For example, both (a time slice of) Lewis and Stalnaker use propositions in their respective semantics, construed as sets of possible worlds. So you give the meaning of a sentence by associating with it a set of possible worlds. Other semantical theories make no use of propositions in stating the meaning of sentences. These are theories like Davidson's, which directly state the truth-conditions for sentences without any need to invoke propositions. Let's see an example of the latter in more detail.

    Consider the sentence "I am hungry". In order to give the meaning of this sentence, we need to give the meaning of "I" and "is hungry" (I'll ignore the inflection for ease of exposition). So we need two axioms:

    (R1) For any speaker S, at any time t, the reference of "I" at t is S;
    (P1) For any referring term a, speaker S, time t, and utterance u, if u is an utterance by S at t of a followed by "is hungry", then u is true iff the reference of a said by S at t is hungry.

    Using these, we have:

    (1) For any speaker S and time t, an utterance by S at t of "I" followed by "is hungry" is true iff the reference of "I" said by S at t is hungry (By P1);

    (2) For any speaker S and time t, an utterance by S at t of "I am hungry" is true iff S is hungry (by 1 and R1).

    Since the meaning of a sentence is given by its truth conditions, and this derivation displays the truth conditions of the sentence, this derivation gives the meaning of the sentence. Notice that I did not invoke at any time propositions.
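    Purely for illustration, the derivation above can be mimicked by a toy program, with the axioms (R1) and (P1) as functions. Everything here (names, the sample speaker, the hunger predicate) is my own scaffolding, not part of Davidson's theory:

```python
def reference_of_I(speaker, time):
    # (R1): for any speaker S and time t, the reference of "I" at t is S.
    return speaker

def utterance_is_true(subject_reference, is_hungry):
    # (P1): an utterance of a referring term followed by "is hungry"
    # is true iff the referent of that term is hungry.
    return is_hungry(subject_reference)

# (2): an utterance of "I am hungry" by S at t is true iff S is hungry.
hungry = lambda person: person in {"Alice"}
assert utterance_is_true(reference_of_I("Alice", "noon"), hungry) is True
assert utterance_is_true(reference_of_I("Bob", "noon"), hungry) is False
```

    The point survives the toy setting: the truth-condition is computed compositionally from (R1) and (P1), and at no step does anything proposition-like get constructed.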
  • Truth


    It is substantially different because it makes no mention of intensional entities such as propositions...
  • Truth


    Of course they have to be contextualized, but so what? Davidson's truth-theoretical semantics has the resources to deal with context-sensitivity (cf. Lepore & Ludwig, Donald Davidson: Meaning, Truth, Language, and Reality, chapter 5, for a sketch, and their Donald Davidson's Truth-Theoretic Semantics for details on various context-sensitive constructions such as indexicals and demonstratives). As for your sample sentence, its meaning can be given by canonically deriving its truth-conditions from the axioms: (1) "the moon" refers (at context c) to the moon; (2) "the earth" refers (at context c) to the earth; (3) if a and b are referring expressions, then the result of concatenating a with "orbits" followed by b is true iff the reference (at c) of a orbits the reference (at c) of b.
  • Truth


    I've read a lot of Soames, yes, and I don't think he's right on this issue---see Lepore & Ludwig's reply in their book on Davidson. (That is not to say that I think Davidson's truth-theoretical semantics is the way to go, since I don't.) But whether Davidsonian truth-theoretic semantics is the right semantics for natural languages is beside the point. The point is that it is not obviously incoherent to adopt this semantics when trying to avoid propositions while at the same time maintaining realism.
  • You can do with numbers everything that you can do with sets, and the other way around


    Well, if a given model has cardinality n, then it has no members of cardinality n+1... unless you're just saying that we can always replace one of the elements of the model by a set of any cardinality, treated as a black box. Is this what you have in mind?
  • Truth


    Well, one can hope to bypass the need for propositions by adopting (for instance) a Davidsonian truth-theoretical semantics. In that case, the "meaning" of a sentence is given by a canonical derivation of the truth-conditions for the sentence, without any need to invoke propositions.
  • You can do with numbers everything that you can do with sets, and the other way around


    This is to miss the point. You can pick two non-standard models of the same cardinality and which satisfy the same sentences, but which are nevertheless distinct. So no first-order sentence, indeed not even a set of such sentences, will distinguish them.

    Incidentally, note that the procedure you quoted for creating non-standard models is not achieved by introducing just one symbol, k, but by introducing k many symbols; this k can be any infinite cardinal.
  • You can do with numbers everything that you can do with sets, and the other way around


    Well, it is true that if a first-order theory has a model of a given infinite cardinality, then it has models of every infinite cardinality. But a theory may have only finite models, say models of cardinality n for some fixed natural number n. In that case, it has no infinite models at all.
  • You can do with numbers everything that you can do with sets, and the other way around


    I joined when the other forum went down, but I haven't been very active. I generally post only when I have some spare time from university work...
  • Truth


    I don't see why. One can either take an abstract notion of sentence as one's paradigm (for instance, a sentence is just a set-theoretical object), which delivers the result that most languages will contain sentences that will never be uttered. Or one can define realism directly in terms of facts or situations or something similar (obviously, this may not avoid the issue if you identify facts with propositions, but this is controversial anyway). There are probably other options as well.
  • You can do with numbers everything that you can do with sets, and the other way around


    Again, a couple of observations:

    (1) If the objective was to just express the finite arithmetical sets, and not interpret ZF-Inf, I don't understand why you don't simply use the first idea that came to your mind. That is, if the set is, say, {2, 3, 4}, then we can express that using the PA formula x=SS0 v x=SSS0 v x=SSSS0. No need to use a sgn function.

    (2) In any case, again, note that being able to express a finite set is not the same as being able to interpret ZF-Inf. In particular, expressing a set is achieved by finding a suitable formula that is true only of the members of the set, whereas interpreting ZF-Inf requires one to find terms to stand for sets, and a suitable relation between such terms that captures the notion of belonging. For example, under the Ackermann interpretation, we can name the set {} by 0, the set {{}} by 1, and then we can say that "{ } belongs to {{}}" by way of "the 0th digit of 1 is 1". On the other hand, your supposed interpretation cannot say "the set {} belongs to the set {{}}", because you have neither provided a name for the relevant sets nor provided a suitable relation that captures the set membership relation.
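    To make the contrast vivid, the Ackermann membership clause can be written out in a couple of lines of Python (the function name is mine):

```python
def ackermann_member(x, y):
    """Ackermann interpretation: "x belongs to y" iff the xth digit
    of the binary expansion of y is 1."""
    return (y >> x) & 1 == 1

# {} is named by 0 and {{}} is named by 1, so "{} belongs to {{}}"
# becomes "the 0th digit of 1 is 1":
assert ackermann_member(0, 1)
# And {{}} does not belong to {{}}: the 1st digit of 1 is 0.
assert not ackermann_member(1, 1)
```

    The crucial feature is that both names (numbers) and the membership relation (a digit test) are supplied, which is exactly what a mere expressibility claim does not give you.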

    (3) I'm not particularly convinced that a model is a "set of sentences". Consider any non-standard model of PA. How can we capture this non-standard model in a set of sentences? Certainly it is not by taking the set of all sentences true in it, because it is easy to show that there are non-isomorphic, elementarily equivalent non-standard models of PA. So the model outstrips our (first-order) expressive resources.
  • You can do with numbers everything that you can do with sets, and the other way around


    You say:

    I just designed a formal language that produces legitimate number-theoretical predicates and that is isomorphic with the standard ZF-∞ language under the standard set operations (⋃,⋂). I like my own approach much better than the standard approach, if only, because it is much simpler.

    First, it's not clear what it is for languages to be isomorphic. A model M is isomorphic to a model N iff there is a bijection between their respective domains that respects the interpretation of the non-logical symbols. What does it mean for languages to be isomorphic? Second, it's not clear what the "standard ZF-∞ language under the standard set operations (⋃,⋂)" is. The "standard ZF-∞ language" is the language whose only non-logical symbol is the set membership symbol, "∈". What does it mean for this language to be "under the standard set operations (⋃,⋂)"?

    More importantly, you claimed in your first post that your procedure was meant to express the bi-interpretability of PA and ZF-Inf. What I'm saying is that this is very far from the truth. You have not shown how to define the relevant notions in PA (i.e. you have not shown that PA proves that your definitions are well-defined). You have not shown that, using your definitions, we can prove the axioms of ZF-Inf. And finally, you have also not shown that your "interpretations" are inverses, which is crucial for bi-interpretability.

    Finally, I'm confused by your use of the sign function. For any x, sgn(x) is either 1, 0, or -1, corresponding to the cases x>0, x=0, and x<0. So there are only three possible values for 1-sgn(x), namely 0, 1, and 2. Hence this term can code at best three possible objects.
  • You can do with numbers everything that you can do with sets, and the other way around


    I think you're losing sight of what it is we're after. We want an interpretation of ZF-Inf, not just any way to code random finite sets. I mention this because, given this goal, we should bear some things in mind:

    (1) The universes of ZF-Inf are composed of sets, not random objects (that is, there are no urelements). So every object in the universe is built out of the empty set in a structured way. So your set {2, 4}, for example, is actually the set { { {}, {{}} }, { {}, {{}}, { {}, {{}} }, { {}, {{}}, { {}, {{}} } } } }, i.e. the set whose members are the von Neumann ordinals 2 and 4 spelled out in full. This is important because our coding scheme will use this to its advantage.

    (2) The universes of ZF-Inf are all infinite. This is clear from the fact that ZF-Inf has the power set axiom, so that there's no bound for the size of its sets.

    (3) The interpretation is actually coding sets as numbers, so that to say that 3 belongs to 5 (say) is actually to say that the set coded by 3 belongs to the set coded by 5.

    With this in mind, note that, under Ackermann's interpretation, the empty set is coded by 0, the singleton of the empty set (i.e. {{}}) is coded by 1, the von Neumann number two by 11 in binary (i.e. by 3), the number three by 1011 in binary (i.e. by 11: its members are 0, 1, and 2, whose codes are 0, 1, and 3, so the digits 0, 1, and 3 are set), and so on and so forth for every von Neumann ordinal (note that I'm considering the rightmost digit as the 0th digit, the second rightmost digit as the 1st digit, etc.). On the other hand, the set {{{}}} is coded by 10 in binary (i.e. by 2), since it does not contain the empty set, but it does contain the singleton of the empty set.
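    These codes can be computed mechanically. Here is a small Python sketch (the helper is my own, not part of the formal interpretation), taking hereditarily finite sets as nested frozensets:

```python
def ack_code(s):
    """The Ackermann code of a hereditarily finite set: the sum of
    2**ack_code(x) over its members x. Membership then becomes a
    binary-digit test on the codes."""
    return sum(2 ** ack_code(x) for x in s)

empty = frozenset()                    # the empty set, von Neumann 0
one = frozenset({empty})               # {{}}, von Neumann 1
two = frozenset({empty, one})          # von Neumann 2
three = frozenset({empty, one, two})   # von Neumann 3

assert ack_code(empty) == 0
assert ack_code(one) == 1
assert ack_code(two) == 0b11           # 3: digits 0 and 1 are set
assert ack_code(three) == 0b1011       # 11: digits 0, 1, and 3 are set
assert ack_code(frozenset({one})) == 0b10   # {{{}}} is coded by 2
```

    Notice how quickly the codes grow: the code of the von Neumann ordinal 4 is already 2^0 + 2^1 + 2^3 + 2^11 = 2059.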

    Note that simply having a coding scheme is not nearly enough for an interpretation (let alone bi-interpretation). You also need to show that (i) the elements in this coding scheme are all definable in the theory that is doing the interpretation and that (ii) all the axioms of the target theory are provable under this coding scheme. These are not trivial matters and some ingenuity is required to see that everything works smoothly (see the chapter by Hájek and Pudlák that I linked in a previous post to see how it is done).
  • You can do with numbers everything that you can do with sets, and the other way around


    A couple of observations:

    (1) I think the notion of a theory interpreting another can be made more intuitive by considering the relations between algebra and geometry. As is known at least since Descartes, there is a natural translation of basic algebraic operations (sum, product, subtraction, division, and extraction of roots) into geometric constructions involving ruler and compass, and vice-versa. At first, it may seem exotic that a discipline that is overtly about operations between numbers has anything to do with constructible figures and vice-versa, but once you study in detail the translation between these theories, they become more natural. In fact, the algebraic theory of field extensions even proved to be the natural setting for solving long-standing problems in geometry, such as the trisection of the angle, etc. The general notion of interpretation is basically a generalization of this idea.

    (2) I think you are confused about "arithmetical sets". As the wikipedia page explains, arithmetical sets are not sets which are coded by numbers; rather, they are sets of numbers definable in PA. For example, the set of the squares (i.e. the set {0, 1, 4, 9, 16, ...}) is definable by the formula Ex(x * x = y), and hence is an arithmetical set. Similarly, every singleton of a natural number is also an arithmetical set. So considering the arithmetical sets will not give you an interpretation of ZF-Inf into PA. In order to achieve the latter, you need some way of coding the relation "x belongs to y" using only arithmetical predicates. As Kaye and Wong explain in the paper you linked (p. 3), this is usually done by way of the Ackermann interpretation, namely "x belongs to y" is defined as "the xth digit of the binary expansion of y is 1". One then shows that, given this interpretation of "x belongs to y", all the axioms of ZF-Inf are satisfied. For instance, extensionality is trivially satisfied, since if w and z have the same binary expansion, they are the same number; the empty set is represented by the number 0; the set {a, b} is coded by 2^a + 2^b, etc. A more or less detailed (and tedious) proof that various set-theoretical concepts can be defined using this interpretation is given in section 1.(b) of this chapter (which is mentioned by K&W).
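    The clauses just mentioned (the empty set as 0, {a, b} as 2^a + 2^b) can be checked directly. A Python sketch, with function names of my own choosing:

```python
def member(x, y):
    """Ackermann: "x belongs to y" iff the xth binary digit of y is 1."""
    return (y >> x) & 1 == 1

def pair(a, b):
    """Code of the set whose members are the sets coded by a and b."""
    return 2 ** a + 2 ** b if a != b else 2 ** a

# The empty set is represented by 0: nothing belongs to it.
assert not any(member(x, 0) for x in range(100))

# {a, b}, coded by 2^a + 2^b, has exactly (the sets coded by) a and b in it.
p = pair(3, 5)                        # 2^3 + 2^5 = 40
assert [x for x in range(100) if member(x, p)] == [3, 5]
```

    Extensionality falls out for free, as noted: two codes with the same binary digits are the same number.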

    (3) Notice that bi-interpretability between two theories, T and T', is a stronger notion than mutual interpretability between T and T'. The latter only requires that there be an interpretation of T into T' and vice-versa, whereas the former requires that each interpretation be the inverse of the other.

    (4) Incidentally, note also that our usual set theory, ZFC, goes far beyond first-order PA! So it's not exactly true that we can do with numbers everything we can do with sets. Rather, we can do with numbers everything we can do with the hereditarily finite sets!
  • Is Kripke's theory of reference consistent with Wittgenstein's?
    That depends on what you mean by "rigid designator". If by "rigid designator" you mean a term whose extension is the same in all possible worlds, then I don't think there is any natural way of extending this idea to general terms (contra some remarks by Kripke). On the other hand, if you think that rigidity is a symptom of the directly referential character of such terms (the fact that they are non-descriptive, etc.), then I'd argue that many general terms (natural kind terms, mathematical terms) are directly referential, and so "rigid" in an extended sense.
  • Is Kripke's theory of reference consistent with Wittgenstein's?
    I'm glad we have come to at least a partial agreement. Since you have granted that descriptions are not sufficient for an account of the semantics of proper names, you have also granted that names are not abbreviated descriptions. So we made some progress! Now, I don't think descriptions are necessary, either. Suppose I am perceptually aware of a person, and I decide on a whim to call that person "Jay". Notice that there's no description involved in the act of fixing the reference for "Jay", only a perceptual acquaintance. Perhaps I daydream about Jay a lot (out loud), and, later, people may pick up the name from me, and also wonder many things about Jay. But there was never any description attached to the name, only a perceptual link (in my case). Of course, a description could at any time be attached to the name, but, in this scenario, it has not been so attached; Kripke's idea is not that the historical chain of reference precludes descriptions (it obviously does not), only that they are not necessary.

    So how does this feed into the idea of communal practices? You ask me if the communal practices do not consist in a network of stories we tell ourselves. I'd say no, it does not consist merely in this. Of course, these stories are part of the practice, but they do not exhaust it. One other practice is the practice of tracking down objects throughout time (and other possible worlds), and the linguistic counterpart to this is our use of names. In other words, describing, or narrating stories, is only part of our linguistic practice; referring is another.

    Now, this may sound like a triviality: we use names to (rigidly) refer to objects. So what? That is the gist of the remarks in question, as I take them. I think this is a bit unfair, at least in historical terms: before Kripke (and Føllesdal, among others), the semantics of proper names was really muddied. This confusion was partially responsible for Quine's attacks on modal logic, and after we understood clearly how names worked, this allowed us to also get clear on the semantics of modal logic, which in turn allowed us to ask more precise metaphysical questions, etc. But this may all be part of the "quasi-scientific" endeavor which Luke is (apparently) disparaging. How does this connect to our linguistic practice? Wasn't all this already answered in the later Wittgenstein's work, anyway?

    Personally, I'm unimpressed with Wittgenstein, here. Sure, he has many interesting remarks on language, but we don't want merely remarks, we want a full-fledged theory. And, for obvious reasons, one will never get a full-fledged theory out of Wittgenstein. Kripke, on the other hand, opened the doors for the phenomenon of direct reference, thus allowing for a full-fledged treatment of other directly referential expressions, such as demonstratives and indexicals (see Kaplan's work). He also managed to sharply, and correctly, separate semantic issues from pragmatic issues (granted, this was done more in "Speaker's Reference and Semantic Reference"), as well as semantic issues from epistemic issues, giving a plausible account of how this separation worked. This relieved semantics from the burden of giving intellectualist accounts of meanings (in Frege's vein, for example, to which Frege's Puzzle is central), which in turn allowed it to concentrate on what is relevant, namely truth-conditions. Or so I would say.

    Note that this has nothing to do with essential attributes or whatever. As is well-known, Kripke also defends in Naming and Necessity a (controversial) theory of essential properties. But this builds upon the semantic theory, so the semantic theory is independent of it. That is, it's not necessary (nor sufficient, for that matter) for the semantic theory to work that objects have essential attributes (well, aside from trivial ones such as "being self-identical" or "being such that either it is P or it is not P"). To see this, note that there are modal systems (Fine's S5H) which adopt Kripke's treatment of proper names as rigid designators, but in which there are no essential attributes aside from the trivial ones. This is important, since an early (Quinean) confusion about modal logic was that it was committed to some form of "invidious Aristotelian essentialism", which is definitely not the case.

    Finally, one last observation. Unlike what seems to be claimed above, the notion of reference developed by Kripke (and others, such as Føllesdal) does not involve causality, as I said in my first post on this thread. I'm pretty sure that Kripke would deny that causality must be involved, since he countenances reference to abstract objects, which by definition exert no causal powers (and in fact, I remember reading somewhere---perhaps in a footnote to Kaplan's "Demonstratives" or "Afterthoughts"---that, when asked, Kripke explicitly denied that his was a causal theory of reference). According to Føllesdal, the first to propose a causal theory of reference was Gareth Evans, in his "The Causal Theory of Names". Unfortunately, Evans (modestly) presented his theory as an extension of Kripke's, and so many people mistook his theory for Kripke's. But that is a mistake. There need be no causality involved in Kripke's account.
  • Propositions and the meaning of speech acts.


    There are a couple of things here. Let's tackle each issue separately, in turn.

    (1) On the merits of extending formal semantics from declaratives to other speech acts: You say that, in a mathematical setting, fruitfulness is assessed either by the production of more theorems or by the exactness of the modelling activity; if I understood you correctly, you say that neither of these obtains in the case of formal semantics. Well, I disagree. Obviously, as you pointed out, I believe that formal semantics is a worthwhile enterprise. And it's simply a fact that the formal semantics of declarative sentences is currently a well-developed research program. So why not extend this approach to other speech acts? In fact, that is precisely what formal semanticists have been doing. I claim that the fruitfulness of this approach can be assessed in the same way as a mathematical research program, in particular, in the exactness of the models produced. I would also add a further dimension, also analogous to mathematics: in many cases, it's less important to prove theorems than to coin new definitions (e.g. Dedekind's ideal theory), which serve to unify phenomena that were previously considered separately (e.g. the behavior of primes in certain number fields and the behavior of curves in function fields). This explanation by unification is one instance of a general virtue of mathematical theories, namely their explanatory power. Now, I want to argue that formal semantics does provide us with added explanatory power. In particular, by showing what is common to apparently distinct speech acts (or moods), it allows us to explain a greater variety of phenomena than we could before.

    (2) Lewis's account deals with the meaning of sentences simpliciter, not with structured sentences: You are right that Lewis's account assigns meaning directly to strings and not to subsentential parts. That is a problem, though one that he is well aware of (see the "Objections and Replies" section). In the paper, he devises the notion of a grammar to deal with this situation (if you are familiar with them, a grammar in Lewis's sense is just a generalized inductive definition). Basically, a grammar consists of (i) a lexicon of basic constituents, (ii) a set of operations which allow us to derive more complex constituents out of the basic ones and (iii) a function which assigns meanings to the derived constituents (an example of a grammar would be the one developed by Lewis in "General Semantics"). Unfortunately, as most linguists know, although there is always a unique language generated by a grammar, there will in most cases be multiple grammars associated with a given language. And, given Lewis's demanding definition of conventions, there is, as he explains in the paper, little sense to the idea of a grammar being conventionally used by a population.
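    To make the three clauses concrete, here is a toy sketch (my own invented example, not Lewis's actual formalism), with meanings modeled as truth conditions, i.e. sets of possible worlds:

```python
# Toy grammar in Lewis's sense (invented example): a generalized inductive
# definition with (i) a lexicon, (ii) derivation operations, and (iii) a
# meaning assignment defined by recursion on constituent structure.

WORLDS = frozenset({"w1", "w2", "w3"})

# (i) Lexicon: basic constituents paired with their meanings.
LEXICON = {
    "it-rains": frozenset({"w1", "w2"}),
    "it-snows": frozenset({"w2", "w3"}),
}

# (ii) Operations deriving complex constituents from simpler ones.
def conj(p, q):
    """'and': true in the worlds where both conjuncts are true."""
    return p & q

def neg(p):
    """'not': true in the worlds where the negated sentence is false."""
    return WORLDS - p

# (iii) The meaning function, assigning a meaning to each derived constituent.
def meaning(tree):
    if isinstance(tree, str):
        return LEXICON[tree]
    op, *args = tree
    if op == "and":
        return conj(meaning(args[0]), meaning(args[1]))
    if op == "not":
        return neg(meaning(args[0]))
    raise ValueError(f"unknown operation: {op}")

# "It rains and it doesn't snow" comes out true exactly in w1.
print(meaning(("and", "it-rains", ("not", "it-snows"))))
```

    And the uniqueness point is easy to see here: a different lexicon and different operations could generate exactly the same sentence-to-meaning mapping, which is the many-grammars-one-language problem mentioned above.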

    But then again, why would we want the grammar to be conventionally used by a population? Perhaps it makes more sense to say that the language is used conventionally by a population, while the grammar may obey a different set of constraints (if Chomsky is correct, for example, the grammar will be innate). And given these other constraints, we may obtain a way to associate a grammar with each conventionally used language which explains how the language was learned in the first place, how it answers to compositionality, etc.

    Here, it is important to understand what Lewis's explanatory target is. He is not interested in explaining the semantics of a language, or providing a general structure theory for languages. Rather, he is interested in explaining how semantics interacts with pragmatics: given that languages are mathematical structures, how can it be that language is also a social phenomenon? Or, to put it another way, how can it be that Montague and Austin are investigating the same thing? Lewis's answer is: because of conventions, where a convention can be understood as an answer to a coordination problem (there is a strong game-theoretic component in his account, which is clear if you read his Convention). That is, there are conventions in place which make us behave as users of a given language. Given that the coordination problem involved is defined in terms of actions and beliefs, and these can only interact with sentences (or utterances of sentences), it makes sense for him to focus on a very coarse-grained view of languages, one which focuses on the interpretation of sentences. This also chimes in with the idea that semantics feeds sentences into pragmatics, so to speak.
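    The game-theoretic component can be made vivid with a toy coordination game (invented payoffs, not Lewis's own example): each of two speakers picks a language, and communication pays off only if they pick the same one.

```python
# Toy coordination game: communication succeeds only if the speakers
# coordinate on the same language. Payoffs are invented for illustration.

PAYOFF = {  # (row_choice, col_choice) -> (row_payoff, col_payoff)
    ("L1", "L1"): (1, 1),
    ("L1", "L2"): (0, 0),
    ("L2", "L1"): (0, 0),
    ("L2", "L2"): (1, 1),
}

def is_equilibrium(r, c):
    """Neither player gains by unilaterally switching languages."""
    choices = {"L1", "L2"}
    row_ok = all(PAYOFF[(r, c)][0] >= PAYOFF[(alt, c)][0] for alt in choices)
    col_ok = all(PAYOFF[(r, c)][1] >= PAYOFF[(r, alt)][1] for alt in choices)
    return row_ok and col_ok

equilibria = sorted((r, c) for (r, c) in PAYOFF if is_equilibrium(r, c))
print(equilibria)  # both coordination points are equilibria
```

    Both coordination points are equally good equilibria; which one a population actually settles on is, on Lewis's account, precisely what a convention is.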

    I agree that he gives pride of place to communication here (he is pretty explicit on this), and that there is little room in his account of conventions for the more creative aspects of language use as explored by Austin. But I see this as a reason to modify his account, not to reject it outright, perhaps by emphasizing the non-conventional aspects of language use.

    (3) Lewis's account is entirely focused on declaratives: Correct, though he does offer an extension, even in "Languages and Language" (this is one of the objections he answers), which is similar in spirit to the one in "General Semantics". Incidentally, in "General Semantics", the mood of a sentence is not simply associated with the sentence, but it is built into its structure as one of the nodes in the phrase marker. Given that the performative is actually the root of the phrase marker, one could identify the performative with the whole tree; then the sentence radical will be the root of a subtree of the phrase marker, and will thus be indeed embedded into the performative. So I think Lewis's terminology is apt here.

    Anyway, it's clear that Lewis's treatment is merely illustrative and there are a multitude of ways to extend the basic ideas into a more general treatment of language. Here, however, we go back to the problem you highlighted before, namely whether the approach as a whole is tenable. I think this is equivalent to the question of whether formal semantics, as practiced in the Montague tradition, is a worthwhile endeavor. Since I believe it is a worthwhile endeavor (largely for the reasons I gave in (1)), I think the approach is tenable.

    By the way, Lewis does not give a treatment of questions in terms of equivalence classes in those papers. He does give a treatment of subject matter in terms of equivalence classes in "Statements Partly About Observation", and I think it's clear that you may identify subject matter in Lewis's sense with questions. The basic idea is given there in terms of possible worlds: questions partition the set of possible worlds into equivalence classes of answers. So a question is given meaning in terms of its answerhood conditions, as it is interpreted in terms of equivalence classes of answers.
  • Is Kripke's theory of reference consistent with Wittgenstein's?


    I'd say that (following Kripke's "Speaker's Reference and Semantic Reference"), in this case, we must separate the speaker's reference from the semantic reference. The semantic reference of a name is the reference that attaches to its historical chain of reference, or, if you prefer, the conventionally established meaning of the term. The speaker's reference is the item to which the speaker wished to refer in using the term. Of course, there is in this case a conflict: in agreement with the Kaplan quotation I supplied earlier, in using the name "John" I intend to comply with the historical chain of reference and refer to John; but, because of your bad manners in not properly introducing your friend, I also intend to refer to the drunkard. There is a huge debate here about what it is that I have actually asserted in uttering "John confessed to me that he slept with your wife": literally, I asserted the falsehood (let's stipulate that it is a falsehood) that John confessed to me that he slept with your wife, whereas it seems that what I implied, or meant to say, was that the drunkard confessed to me that he slept with your wife.

    Note that this phenomenon is not specific to names. Here's an example adapted from Donnellan ("Reference and Definite Descriptions"). I am at a party and want to show to my wife who the dean is. I see him holding a martini glass and say to my wife "The man drinking a martini is the dean". Unbeknownst to me, he is not drinking a martini, but water. We can even imagine that there is another man at the party, drinking a martini, but who is not the dean. So what I literally said is strictly speaking false, even though what I meant to say or what I implied is true (that that man is the dean). Examples involving other more bizarre scenarios and other lexical items abound.

    Now, there are two ways of approaching this. One is to keep firmly to the divide between what is literally said and what is conveyed by a given utterance. The former is the realm of semantics, the latter of pragmatics (I lean towards this approach). The other way is to maintain that a sentence, in and of itself, does not say anything, but needs some sort of pragmatic enrichment in order to represent a proposition. In this approach, the difference between semantics and pragmatics is blurred, since pragmatic factors may affect what is literally said. To illustrate, consider your case. The first approach would say that you actually, literally, referred to John, and hence literally said a falsehood. The second approach would say that you actually referred to the drunkard, and hence said something true. Similarly in the case of the man drinking a martini.

    In any case, notice that, either way, this does not affect the general picture of the direct reference theory, though it may affect the details. The first approach attaches more importance, in determining what is said, to conventional factors, whereas the second approach attaches more importance to whatever it is the speaker has in mind. But both are compatible with the speaker using the name to refer directly to whatever it is she refers to.
  • Is Kripke's theory of reference consistent with Wittgenstein's?
    As I said earlier, it seems to me that your problem is less with the direct theory of reference per se, and more with the picture that is looming in the background. Here is how the situation seems to me: you seem to think that reference is something we achieve by some kind of individual mental effort. If that is the case, then it is indeed mysterious how we manage to refer only by being acquainted with a name. The analogous situation would be me going alone into a store room searching for an item called "Blorg" without any description of what Blorg is. The task then seems impossible, and hence in order to accomplish it I must have some kind of help, in this case a description that allows me to individuate Blorg and reach out to it.

    What Kripke and Kaplan are urging, though, is that reference functions in a completely different way from this scenario. We do not "achieve" reference and the referencing task is not an individual one. Rather, it is more that reference is handed down to us through our participation in a communal practice. Again, I must quote Kaplan:

    Contrast the view of subjectivist semantics with the view that we are, for the most part, language consumers. Words come to us prepackaged with a semantic value. If we are to use those words, the words we have received, the words of our linguistic community, then we must defer to their meaning. Otherwise we play the role of language creators. In our culture, the role of language creators is largely reserved to parents, scientists and headline writers for Variety; it is by no means the typical use of language as subjectivist semanticists believe. To use language as language, to express something, requires an intentional act. But the intention that is required involves the typical consumer's attitude of compliance, not the producer's assertiveness. — David Kaplan, Afterthoughts, p. 602

    So, again, to refer to Gell Mann, it's not required that I have some discriminating knowledge that allows me to individually reach out for Gell Mann. His own name does that for me, without requiring me to actively reach out to him. Think about it: that's how I can learn a new name, and use it in conformity with the standards of my linguistic community. Suppose I have never heard the name "Gell Mann", and then someone explains to me who he was. What allows me to follow the explanation is that I know that it is an explanation about Gell Mann, not the other way around. That's why the explanation can be wrong and I can still refer to the physicist, Gell Mann: the reference practice was not tied to any particular descriptive content, but to the man himself.

    And that's also why I'm able to formulate counterfactuals about Gell Mann. To understand this point does not require any exotic knowledge of modal logic. Rather, it just requires the platitude that, when I think truly that Gell Mann could have been a lawyer, I'm referring to Gell Mann himself, not some weird surrogate in some weird other dimension (unless you are a counterpart theorist à la Lewis). And what makes the counterfactual true is that Gell Mann could, in fact, have been a lawyer. But then it's not the case that "Gell Mann" is an abbreviated description. For suppose it was, and that the description was "the famous American physicist who blah blah". Clearly "The famous American physicist could not have been the famous American physicist" is false. But "Gell Mann could not have been the famous American physicist" is true (he could have been a lawyer). But if "Gell Mann" is an abbreviated description, then the two sentences are identical, and therefore cannot have different truth values!

    To be honest, though, I think this modal argument (and its epistemic variant) is useful for convincing yourself that there is something wrong with the descriptivist picture. But it's not very good at getting to the root of the problem, which in my mind is the incorrect background semantical picture that I sketched above. Once you make the gestalt shift from an individualist semantics to a communal semantics, I think the direct reference theory appears as the natural upshot of the shift, and it actually explains what is wrong with descriptivism and what is right with the direct reference theory. Or so it seems to me.
  • Propositions and the meaning of speech acts.


    First, the references you asked for: for my general approach to semantic matters, I think the essays by Lewis are invaluable (even if you end up disagreeing with him). In this connection, I recommend especially "Languages and Language" and "General Semantics", which you can find, along with his other papers, on this website; note that the latter essay also contains a discussion about how to reconstruct the other moods in terms of declarative sentences. Since I'm not a semanticist (though I'm largely sympathetic to formal semantics, in particular the tradition stemming from Montague and developed by Barbara Partee), in the specific case of the semantics of questions I just gave a quick glance at the relevant article in the Cambridge Handbook of Formal Semantics (I can send you a copy of the handbook if you like), just to check that I wasn't misremembering the partition approach.

    Going back to the discussion, note that the two situations you described are not symmetrical. We have a reasonably well-developed semantic theory for declarative sentences (say, Montague grammar and extensions thereof). But we don't have a well-developed semantic theory for questions (and other "moods") that is independent of truth-conditional semantics, or of declaratives more generally. So we may hope to extend our analyses of declarative sentences to other types of sentences, but there is little hope of going in the reverse direction, since we don't even know where to start in that case. That's why we try to understand questions in terms of "answerhood" conditions, whereas no one (that I know of!) has tried to formulate a semantics for declarative sentences in terms of "questionhood" conditions.

    As for how to connect all this with language user competence, that is a hard question. Barbara Partee comments on it in her "Semantics: Mathematics or Psychology?", and Lewis also tries his hand at it in "Languages and Language". Lewis's answer is interesting, though controversial: languages are abstract mathematical structures, and they are connected with users by way of linguistic conventions that ensure that most participants of a linguistic community employ largely overlapping languages. This explains why we can give exact mathematical descriptions of (fragments of) some languages---we are just describing mathematical structures!---and also explains the social dimension of languages---they enter the scene through the conventions in place. I find this a very attractive and elegant picture, personally, though I haven't thought it through yet.
  • Is Kripke's theory of reference consistent with Wittgenstein's?
    Yes, of course there are tricky cases (yours reminded me of Gettier cases), which may defeat the simple scheme I sketched. In the case of Artabanus, the friend of the historian, the confusion is compounded by the fact that Artabanus, the chief official of Xerxes, did kill Xerxes, or at least there are many who believe so. So here we have two people named "Artabanus" involved, and the ambiguity is what is causing the problem. We may even make matters more confusing, by requiring that the historian be a poor historian and not actually know that there was another Artabanus, chief official of Xerxes, who in fact murdered him---that is, the only Artabanus he knows is his friend, who did not murder anyone. I'm not entirely sure what the appropriate response would be in this case, though my first reaction would be to bite the bullet and say that, if this is your only source, you are not, in fact, referring to Artabanus-the-chief-official, but to Artabanus-the-friend-of-the-historian, and that you have a false belief about him, which may eventually be corrected by coming into contact with more reliable sources, even if you never become aware that there was a problem to begin with. But clearly this deserves more thought.

    The second case, of the joker Augustus, is easier, I think. If the first emperor was also called Augustus, then we're back to the Artabanus case. So suppose that the first emperor was not called Augustus, but, instead, Johannus (say). This would be similar to the Gödel/Schmidt case treated by Kripke, in which we discover that, in fact, the true discoverer of the incompleteness theorems (and much more) was Schmidt, whom Gödel plagiarized. Still, in that case we would not be referring to Schmidt all along when we talked about Gödel; rather, we had erroneous beliefs about Gödel. Similarly, in this case, I would say that we were referring to Augustus, wrongly believing that he was the first emperor, when in fact Johannus was the first emperor. Evidence for this comes from the fact that, if we were to eventually discover our mistake, we would say "Damn that Augustus, he tricked us into believing that he was the first emperor, when in fact he wasn't", a sentence that wouldn't make sense if "Augustus" referred to the first emperor (clearly the trickster didn't trick us into believing that the first emperor was the first emperor, even though the first emperor wasn't the first emperor!).

    This, in turn, allows me to reply to the objection above. A name is not an abbreviated description because its modal profile is different from the modal profile of any definite description. Consider Gell Mann. Suppose, for the sake of argument, that "Gell Mann" is identical with the description "The man called 'Gell Mann' who is a renowned physicist and etc.". In that case, the sentence "Gell Mann is the man called Gell Mann who is a renowned physicist and etc." is a necessary truth. But that can't be, because Gell Mann could have been called (say) Feynman and could have been a lawyer instead of a physicist, etc. So "Gell Mann" is not identical with any description.
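    Schematically, the argument runs as follows (my notation, not Kripke's: g for the name, ιx Fx for the associated description):

```latex
\begin{align*}
&(1)\quad \text{``Gell Mann'' abbreviates } \iota x\,Fx
  &&\text{(descriptivist hypothesis)}\\
&(2)\quad \Box\,(g = \iota x\,Fx)
  &&\text{(from (1): synonyms are intersubstitutable)}\\
&(3)\quad \Diamond\,\lnot Fg
  &&\text{(he could have been called Feynman, been a lawyer, etc.)}\\
&(4)\quad \Diamond\,(g \neq \iota x\,Fx)
  &&\text{(from (3))}\\
&(5)\quad \text{(2) and (4) contradict each other, so (1) must be rejected.}
\end{align*}
```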

    Alternatively, I may be wrong in all my beliefs about Gell Mann and still successfully refer to him. Suppose I actually believe that Gell Mann is a woman lawyer from Austria. Those are erroneous beliefs I have about Gell Mann, not correct beliefs I have about someone else. So I must be able to refer to Gell Mann somehow, independently of the descriptions I associate with him, since those are all, ex hypothesi, incorrect. Indeed, it is the ability to keep track of the reference here that allows me to correct my beliefs: if "Gell Mann" were an abbreviated description, then any change in the description would be a change of meaning, so that, instead of having corrected my beliefs about Gell Mann, I'd instead have replaced my beliefs about a woman lawyer (who? Gell Mann?) with my beliefs about the American physicist. But this can't be right.

    I think the source of your confusion (if it is a confusion) is that you think (1) reference is achieved either by acquaintance or by description, (2) reference by way of names is not acquaintance, (3) so it must be a description. Hidden here is the premise (4) reference by acquaintance is reference by perceptual means. But even if we accept (1), I think we should reject (4): names allow us to acquaint ourselves with things with which we don't entertain perceptual relations. Here I can do no better than quote Kaplan:

    Contrary to Russell, I think we succeed in thinking about things in the world not only through the mental residue of that which we ourselves experience, but also vicariously, through the symbolic resources that come to us through our language. (...) I may apprehend you by (more or less) direct perception. I may apprehend you by memory of (more or less) direct perception. And finally, I may apprehend you through a sign that has been created to signify you. — David Kaplan, Afterthoughts, p. 604

    How can this be? This happens because, in a sense, it's not really that I apprehend, or reach out to you through a name; rather, it's that you impress yourself on me through that name. Again, as Kaplan puts it:

    The notion that a referent can be carried by a name from early past to present suggests that the language itself carries meanings, and thus that we can acquire meanings through the instrument of language. This frees us from the constraints of subjectivist semantics and provides the opportunity for an instrumental use of language to broaden the horizons of thought itself. — David Kaplan, ibid., p. 603
  • Is Kripke's theory of reference consistent with Wittgenstein's?


    First, note that the fact that semantics is primarily concerned with truth conditions does not mean that it cannot account for speech acts other than assertions. For it may account for them in a derivative way. Here's an example of such a treatment: we may take a question to be a partition of propositions, that is, a collection of equivalence classes of propositions, namely its (conflicting) answers, such that each answer lies in exactly one equivalence class (I think David Lewis adhered to something like this). So the semantics of a question is derivative from the semantics of its possible answers, which in turn are (generally) assertions. So one may take a term's contribution to the semantics of questions to be its contribution to the truth conditions of their answers.
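    A minimal sketch of the partition idea, in the possible-worlds version (my own toy example, with invented worlds and answers): two worlds fall in the same cell just in case the question receives the same complete answer in both.

```python
# Toy sketch of the partition view of questions: a question partitions
# the set of possible worlds into cells, one per complete answer.

from collections import defaultdict

# Possible worlds, each specifying who came to the party (invented data).
WORLDS = {
    "w1": frozenset({"Ann"}),
    "w2": frozenset({"Ann"}),
    "w3": frozenset({"Ann", "Bob"}),
    "w4": frozenset(),
}

def partition(worlds, answer):
    """Group worlds by the answer the question receives in them."""
    cells = defaultdict(set)
    for w, facts in worlds.items():
        cells[answer(facts)].add(w)
    return {ans: frozenset(ws) for ans, ws in cells.items()}

# "Who came to the party?" -- its answerhood conditions just are the cells.
who_came = partition(WORLDS, lambda facts: facts)

# "Did Ann come?" -- a coarser question induces a coarser partition.
did_ann_come = partition(WORLDS, lambda facts: "Ann" in facts)

print(len(who_came))       # 3 cells: {Ann}, {Ann, Bob}, nobody
print(did_ann_come[True])  # the worlds where the true answer is "yes"
```

    Coarser questions induce coarser partitions, and an assertion counts as a (partial) answer to a question insofar as it is a union of the question's cells.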

    Leaving this to the side, and using your linguistic community vocabulary, I'd say that the normativity associated with a name is very thin: the only constraint is that linguistic users of a given community all share the intent to refer to the same person by that name. This includes referring to the person in counterfactual conditions, so that we may wonder whether Clark would be called "Clark" if he had been raised by Thomas and Martha Wayne, or if Clark would still be a journalist under those circumstances. But if counterfactuals appear to bring in foreign standards, consider the following case. I am on the bus and, since I am without my headphones, I can't help but overhear the people behind me talking. Since I'm reading Naming and Necessity, I'm not focusing on them, but I still hear the name "Clark". I don't distinctly hear anything else about its referent, but I start to daydream about it, and begin to imagine that Clark is a philosophy professor who challenged Kripke's theory, and I may even mutter to myself "Clark has raised an interesting challenge to Kripke!". But, unbeknownst to me, Clark is a journalist, not a philosophy professor, and has never even heard of Kripke. So my assertion is false. Indeed, the people behind me hear me muttering and immediately correct me on that. Hence, I must have referred to Clark, the journalist, with my assertion, even if I have no knowledge about him and even if my behavioral dispositions towards the name are very thin and don't include reacting-as-if-he-was-a-journalist among them. How can this be?

    This can happen because the only thing that establishes a chain of reference is the intentions of the relevant language users to use the name to refer to the same person. I picked the name from the people behind me, and intended to use it to refer to the same person as they did. That's it. That's also why the way that the reference was first established, say by description ("I hereby declare that the first Kryptonian journalist in this planet will be called 'Clark'!"), is irrelevant to the semantics of the name; it's an offline mechanism that may start the chain of reference, but is not part of it. That's also why, contrary to what the objection above seems to assume, there is no problem if I intend to refer to different people by different names, or even by the same name. To use another of Kripke's examples, suppose I pick up the name "Paderewski" from a musician and the name "Paderewski" from a friend involved in Polish politics. Since I believe no musicians are politicians, I believe this name refers to two different people; unbeknownst to me, there is just one Paderewski, namely Ignacy Jan Paderewski. But that only means that, unbeknownst to me, I actually referred to the same person using that name in different contexts. What matters is that I intended to use the name to refer to the person referred to by whoever it was I picked the name from.