• Is Kripke's theory of reference consistent with Wittgenstein's?


    I'm not entirely sure what you are calling "behavioral regimes", but I think it's entirely clear that they are contingently associated with the referent. Clark Kent need not have been a journalist. Consider: DC had a nice line of comics that mimicked Marvel's "What if..." series. In one of those, the rocket containing baby Kal-El did not land in Kansas, but rather near Gotham. So Kal-El was adopted by Thomas and Martha Wayne, instead of Jonathan and Martha Kent. There, he was called "Bruce Wayne" (though, obviously, he wasn't Bruce Wayne, since Bruce Wayne didn't exist in that world, as Thomas and Martha Wayne didn't have a biological child there) and never became a journalist. Still, he was Clark Kent, which is why we can truthfully say (or rather, would truthfully say if those were real people and not fictional characters) "Clark Kent might have been a millionaire in Gotham and never have become a journalist".

    Here's another example. Suppose there is a user here who is also a colleague of mine (I don't know if there is one, but let's pretend there is). Then that person may not be aware that the Nagase who is her colleague is also the Nagase who posts here, and hence may have different behavioral dispositions towards my name. All that this shows is that this person has come into contact with my name by way of two different routes, not that my name may mean different things on different occasions. Indeed, this is clear, since my spouse, who knows that user and also knows that we both post here, does know that the two Nagases are in fact the same person. So she has the same behavioral dispositions towards the two instances of the name. So the behavioral dispositions cannot be part of the name (unless we want to posit massive ambiguity).

    You also asked about semantic content. As I said, the semantic content of an expression is its contribution to the truth conditions of the sentences in which it appears. My contention is that the semantic content of a name just is the referent of the name, since that is what contributes to the truth conditions of the sentences in which it appears.
  • Is Kripke's theory of reference consistent with Wittgenstein's?
    A couple of points: one important distinction that some direct reference theorists make is between the semantic value of a word (its contribution to a sentence's truth conditions) and its psychological or cognitive value (its contribution to an agent's course of action or epistemic states). So it may very well be that I have some bizarre associations with the name "Doria"---it calls to my mind an instinctive disgust and hate and may even drive me towards some irrational action, such as (say) punching a wall. But these associations are surely not part of the meaning of the name "Doria". The point generalizes: the fact that a name carries extra information for a (specific) speaker beyond its reference does not mean that this extra information is part of the semantic content of the name. Indeed, I'd say this extra information may belong to the worthwhile endeavor of cognitive psychology, but not to the more impoverished domain of semantics!

    This is similar to the case of demonstratives. To use an example of Howard Wettstein's ("Has Semantics Rested on a Mistake?"), suppose a friend of mine takes me to a rock concert. Unfortunately, being poor, we don't go in, but merely watch the show from outside, through some windows. Since the vocalist is wearing heavy makeup, I get confused when looking at his distinct profiles through two different windows, and I say to my friend that the band's two vocalists are amazing. Angry at my ignorance, he slaps his face, points to the vocalist and says "He... [then my friend drags me around to the other window] is he!". The two different pronouns obviously carry different information for me, for otherwise he would just be telling me the triviality that x=x, not the astounding (to me!) revelation that the two people I thought were amazing are actually the same amazing person! However, this is not a semantic difference in the functioning of the associated pronoun. It may not even be a difference in the character of the pronoun, i.e. the way the demonstrative determines reference in a given context. It may be just a difference in the context of use, and hence extra-linguistic (Kaplan often makes the point, at least at the time of "Demonstratives", that the mechanism by which we determine a reference generally operates "offline", so to speak, and hence is not part of either the character or the content of what is said).

    This, I think, goes some way towards assuaging the worries raised about the Clark-Superman example. Indeed, to some people the different names may carry different information and thus suggest different courses of action (though not to everyone, obviously: if Lois already knows that Clark is Superman, in most contexts it would be indifferent for Batman to tell her that Clark is looking for her or that Superman is looking for her). But this is not a semantic property of the name, so it doesn't tell against Kripke's points. Note that this is also not a "quietist" stance: there may well be a valuable theory to be developed about how semantics interacts with cognitive psychology, or even a more general science of information. It just won't be a theory about semantics.

    Similarly, this also answers another problem that was raised, namely that a speaker may use descriptive information in tracking down the reference of a name. Indeed, that may very well be the case (and Kripke does address this in his book), but it is totally irrelevant to the semantics of the name. Notice also that in many cases the speaker may not have any descriptive information that allows this tracking: for instance, consider myself. I am shamefully ignorant of Gell-Mann. The only thing I know about him is that he is an excellent physicist. But this is not enough for me to distinguish him from, say, Einstein. Yet I still may have true beliefs about Gell-Mann (e.g. that he is an excellent physicist), and may refer to him in conversations, for instance by requesting more information about him. But this reference cannot be secured by descriptive means, since the only descriptive information I have of Gell-Mann is that he is an excellent physicist, and if this were the information used by me to refer to him, I could be referring to Einstein, and not Gell-Mann, when I say "Gell-Mann is an excellent physicist". In fact, notice that I may not even have any descriptive information about a person, only non-descriptive information (say, of the perceptual kind), and still be able to refer to that person. So reference cannot be tied to descriptive information (and this even in cases in which the reference was first established by description).

    On a completely different note, it was said that Kripke "sidesteps the issue of imaginary names". This is factually incorrect: he has an entire book, Reference and Existence, dedicated to this issue! Note that the issue with imaginary names is a bit difficult, so I'll just give a sketch here of Kripke's ideas (I recommend that you read the book, if you want the details). Kripke resorts to (at least) two moves: the first is to note that such names are generally not involved in assertions. It is generally agreed that an assertion, in order to be an assertion, must aim at truth. If I don't think I'm telling the truth when I say something, I'm not asserting anything, but rather doing something else, such as lying or telling a narrative. So he argues that such names are not introduced in the typical way, that is, they are not introduced to name anyone (since there isn't someone to be named to begin with, and the author of the fiction or imaginary tale knows that), and hence are not really names. So sentences such as "Sherlock Holmes is a detective" are not really assertions, are not really talking about anything, and hence there is no name or reference involved.

    Of course, there is the problem that "Sherlock Holmes is a famous fictional character" is talking about a real state of affairs, and hence is an assertion (in fact, a true assertion). So here is Kripke's second move: he introduces fictional characters, as the referents of such names, as abstract objects that are ontologically dependent (or grounded) on the existence of the fictional work. Note that, on this view, these names refer to abstract objects: so, when considered from this point of view, "Sherlock Holmes is a detective" is false, because Sherlock Holmes is an abstract object, and abstract objects are not detectives! Hence, there are two levels of discourse involved here. First, there are the pretend-assertions, which are not assertions at all, involving the pretend-world of Sherlock Holmes. Second, there are the real assertions involving the real-world fictional character (or abstract object) Sherlock Holmes. Obviously, all sorts of complications arise from this picture (though it is a very attractive picture: Amie Thomasson has a similar theory in her Fiction and Metaphysics), but that is the gist of it...
  • Is Kripke's theory of reference consistent with Wittgenstein's?
    It's been a while since I read Naming and Necessity (I need to go back to that book!), but, if I recall correctly, he never uses the expression "causal chain". Rather, he talks about a historical chain of reference transmission, and he is pretty clear about the mechanism by which the reference is historically transmitted: after the initial dubbing by an individual, other people intend to use the name to refer to whatever was dubbed by that individual. It is this intention to refer to the same thing that ensures that reference is transmitted, in spite of the way it was initially fixed or of errors in the descriptive conditions passed along the chain. Since what is primary is the intention to refer, and not the intention to describe, it is the reference that passes along the chain, and not the descriptions (if any!) associated with the name.
  • Predicates, Smehdicates
    It feels kind of dirty to be arguing about a specific counter-example to a far more general idea, if you'd want to transfer this discussion about how mathematical abstractions work to another thread I'd be interested in it. We're not really talking about Sellarsian nominalism any more, we've simplified to an avenue which the logical falsity of the idea turns on but we're not learning much about the idea through the discussion. At least, if you're similar of mind to me on how to learn about stuff.fdrake

    If you wish to do so, I have no qualms about it. I may not be able to engage with it fully, however, since I'm rather busy at the moment (posting here is basically a way of procrastinating on correcting tons of undergrad logic exercises...).
  • Predicates, Smehdicates
    In other words: Any property is susceptible of being indicated through a simple notation, and any property also harbors a complexity which comes out in explication and usage (I think that's the significance of the 'myth of the given' - things aren't simply 'given' because to know what something is requires having some minimal ability to explicate)csalisbury

    I definitely agree with this, in that I'd hold that most properties that appear simple aren't really simple (I was just agreeing that, even if redness or triangularity are simple, there are other properties that aren't obviously simple). Of course, they can't all be complex, since any theory must have its primitives. So there is a distinction here to be made.

    My only disagreement is that I don't think (Carnapian?) explication is what is doing the work here---rather, I think just plain explanation is. We are not explicating a usage, we're explaining the structure of the world.
  • Predicates, Smehdicates
    Not the intention, the fact of it being asserted to be so (and not: 'the fact of it being asserted to be so').StreetlightX

    So if you didn't assert it, it wouldn't be so? And what is a fact?

    I need to respond to this in more depth in a bit, but I don't see why this would pose any problem, in principle, to a treatment in terms of metalinguistic illustration: there is a way of speaking about an x, such that, when the thing so designated an x dissolves in water, we call x soluble. And to learn this way of speaking, is to understand just that language game involving certain rule-governed correlations between linguistic and non-linguistic objects.StreetlightX

    That may be so (though note that we have an appeal to types of rules here...), yet to learn this way of speaking is not to learn Jumblese.
  • Predicates, Smehdicates
    Are transfinite cardinals or ordinals natural kinds? Are categories natural kinds? What about partially ordered sets or finite fields?fdrake

    I'd say that yes, those are examples of natural kinds. As Frege would say, they have all passed the "acid test" of concepts, namely their fruitfulness.

    I prefer to read it as the evolution of a concept whose boundaries are fuzzy - something like a composite of continuity/differentiability/IVP - becoming more demarcated when the level of mathematical precision was elevated through the emphasis on axiomatisation. I don't see what insisting that mathematical objects are real does to help someone actually doing/teaching maths (the community thereof + history is where mathematics comes from).fdrake

    I have the opposite idea. I think the semantics of mathematical kind terms is very similar to the semantics Kripke and Putnam sketched for scientific natural kind terms, in the sense that mathematical kind terms are (primarily) non-descriptive, i.e. their semantic value is their reference. That is why we can say that Euler and Cauchy were mistaken in (e.g.) treating convergence as uniform convergence (yes, I'm aware that the historical debate here is controversial). So, according to my story, it's not that we started with fuzzy, open-texture concepts and proceeded to precisify them; rather, we started with a collection of examples and proceeded to unveil their structure. Of course, this story needs to be fleshed out, and fleshing it out is one of the projects on which I'm currently working.
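    The distinction at issue can be stated precisely. As a sketch in modern notation (notation which, of course, was not available to Euler or Cauchy in this form), a sequence of functions f_n converges to f on a set S pointwise or uniformly according to where the quantifier over x sits:

```latex
% Pointwise convergence on S: the threshold N may depend on the point x
\forall x \in S \;\; \forall \varepsilon > 0 \;\; \exists N \;\; \forall n \geq N : \; |f_n(x) - f(x)| < \varepsilon

% Uniform convergence on S: a single N must work for every x at once
\forall \varepsilon > 0 \;\; \exists N \;\; \forall n \geq N \;\; \forall x \in S : \; |f_n(x) - f(x)| < \varepsilon
```

    The only difference between the two is the position of the quantifier over x (whether N may depend on x), which is exactly the kind of conflation at stake in the Euler/Cauchy episode mentioned above.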

    I don't know if this narrative would have pedagogical value, though it seems to me that it's almost the standard narrative you will find in typical history books (say, Stillwell's famous one), just not articulated like this. Be that as it may, I think there's a real philosophical gain here, since, in my opinion, it does better justice to the mathematical practice of starting from examples and then generalizing (as we can see with the history of continuity, algebraic integers, group theory, etc.).

    If the questions is 'what makes the tokens stick together in the type' - why isn't 'they're used together with some commonalities between them' a sufficient answer? Something like 'I see red' and 'I see red', if there's an underlying redness it better also reference anger; 'red' is a lot more complicated than 'redness' in the colour. Where do you get the ontological or epistemic resources to glue tokens together relationally? Also, how's this furnished through there being an abstract object or natural kind to save us and instantiate itself?fdrake

    In some cases, I do think there may be enough structural commonalities to ground the existence of the type, but in general these commonalities won't be acceptable to a nominalist-about-properties. And in some cases there may be superficial commonalities that are acceptable to a nominalist-about-properties but which don't correspond to anything structural, and hence don't form a natural kind (say, the case of jade, which is not a natural kind). So the ontological resources come from accepting both structures and properties as bona fide entities. Types come in with that.

    As for the epistemic resources, the story is more complicated, because we need (as per Kripke) to distinguish epistemic possibility from metaphysical possibility. To give Kripke's example, given that tigers are mammals, they are (metaphysically) necessarily so. On the other hand, we might discover in the future that tigers are really well-crafted robots, made by an alien race to test us in some way. This is an epistemic possibility, i.e. for all we know we could be wrong about tigers. Of course, if we are not wrong, then this epistemic possibility doesn't correspond to any real (metaphysical) possibility.

    Another aside, I'd be quite surprised if there wasn't an equivalence relation approach to tokens and types here, in terms of exemplifications. Like in the construction of fractions. 2=1+1=4/2=...(every other possible expression for 2), then the number 2 is defined as [2], that set of expressions which evaluate to 2. And arithmetic is defined in terms of operations on equivalence classes rather than on their constituent terms (this is why algebraic substitution works). Thus simultaneously every object exemplifies the type.fdrake

    There are many things to discuss here. Roughly speaking, when abstracting from equivalence relations, there are three possibilities: (i) you take a canonical representative of the class to be your "abstracted" object (if memory serves me correctly, that's Kronecker's approach to quadratic forms, because you can use the canonical representative as a calculation tool); (ii) you take the equivalence class itself as the "abstracted" object (in some cases, such as when dealing with quotients, this may be the only sensible option); (iii) you introduce a new object corresponding to the abstraction (that's what Dedekind opted for when he introduced his cuts; he explicitly rejected Weber's suggestion of simply identifying the real numbers with cuts, preferring instead to say that each real number corresponded to a cut; this is also the neo-Fregean way).

    Generally, option (iii) gives a cleaner theory, in the sense that you don't end up with "junk" theorems such as "2 is an element of pi" or whatever (Dedekind mentioned this as one of his reasons for preferring (iii)). I myself was attracted to some kind of abstractionism that introduced abstract objects via equivalence relations. Unfortunately, there is a catch here: if you allow for too many abstraction principles, you end up with an inflated ontology (this is a real problem: Kit Fine has shown that the resulting theory may be inconsistent---cf. his very interesting The Limits of Abstraction). So you need some way to select which equivalence relations give rise to abstracta and which don't. The most appealing way of doing this is to appeal to natural equivalence relations. But then you need to explain what it is for an equivalence relation to be natural (or you could take Lewis's route and consider "naturalness" a primitive, but I don't think that's very satisfactory). And I think that when you do so, it may open the door to using naturalness itself as a ground for the existence of abstract objects, bypassing the appeal to equivalence relations.
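    As an aside, the fraction construction mentioned above can be made concrete, and it illustrates options (i) and (ii) side by side. A minimal sketch (the function names are mine, purely illustrative): fractions as pairs of integers (a, b) with b nonzero, under the relation (a, b) ~ (c, d) iff ad = bc:

```python
from math import gcd

# Fractions as pairs of integers (a, b), b != 0, under the
# equivalence relation (a, b) ~ (c, d) iff a*d == b*c.

def equivalent(p, q):
    """(a, b) ~ (c, d) iff a*d == b*c."""
    (a, b), (c, d) = p, q
    return a * d == b * c

def canonical(p):
    """Option (i): a canonical representative of the class
    (lowest terms, positive denominator)."""
    a, b = p
    g = gcd(a, b)
    if b < 0:
        g = -g
    return (a // g, b // g)

def add(p, q):
    """Addition defined on representatives; it is well-defined on
    classes because it respects the equivalence relation."""
    (a, b), (c, d) = p, q
    return (a * d + c * b, b * d)

# 1/2 + 1/3 = 5/6, no matter which representatives we pick:
assert equivalent(add((1, 2), (1, 3)), (5, 6))
assert equivalent(add((2, 4), (3, 9)), (5, 6))
assert canonical((2, 4)) == canonical((1, 2)) == (1, 2)
```

    Option (ii) would instead take the (infinite) class {(1, 2), (2, 4), ...} itself as the object, while option (iii) would posit a new object merely correlated with that class; note that a "junk" question like "is (1, 2) an element of 1/2?" gets different answers under (ii) and (iii), which is Dedekind's point above.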
  • Predicates, Smehdicates
    What would you say about the ontology of abstract objects? Why is math a valuable tool for describing the world?frank

    I'm not sure what you want me to say about the ontology of abstract objects. I don't hold any systematic views on the topic (e.g. I don't know if they are grounded on equivalence relations, as proposed by neo-Fregeans, or if they are more fundamental). I do however hold a very generous ontology of abstract objects, and I also don't think that the usual way of distinguishing them from concrete objects is all that useful---I think that some abstract objects (say, works of art and fictional characters) exist in time, for instance.

    As for the unreasonable effectiveness of math, I'd say that math is effective because it carves nature at its joints. It describes types of objects and structures that are natural. These objects and structures then play (at least) a double role in describing the world: (a) some mathematical structures constrain the space of possibility, thus effectively ruling out some possibilities from happening (if you adopt a Stalnaker-like approach to inquiry, this means that part of our knowledge is won by eliminating some worlds from the space of possibility); (b) relatedly, some mathematical structures are instantiated in the physical world, so their physical instantiations inherit their mathematical properties (e.g. groups).
  • Predicates, Smehdicates
    Can you take another run at this? This says that to explain the relationship between tokens we will generally have recourse to something that does not itself explain the relationship. The only sense I can make of that is that objects don't talk, people do.Srap Tasmaner

    I don't know how you got that out of what I said. There are a number of approaches to explaining the type-token relationship. You can do it the way you originally proposed, by exploiting action types. You can do it Bromberger's way and exploit properties which are projectible across the type. You can try to exploit structural properties shared by the tokens and which figure in certain scientific explanations. There are a myriad of strategies, but all strategies which I know of exploit the existence of types (e.g. action types) and properties (e.g. projectible properties), and hence are incompatible with nominalism.
  • Predicates, Smehdicates
    I imagine a similar trick would work for every logically complex property - by transposing its logical vocabulary into set form (which is always possible up to the objects being too big).

    If the difficulty you're highlighting is with regard to predicates requiring higher order quantification, I imagine that this is an obstacle in terms of details rather than one which refutes the central idea Street's been expositing.
    fdrake

    That's not the difficulty I'm highlighting. Yes, we can use "tricks" to introduce atomic predicates to stand for logically complex ones---but that has nothing to do with the complexity of the properties in question. In fact, my point was in the opposite direction: a property does not become less complex just because you can symbolize it with a simple expedient. Notice that this has nothing to do with higher order quantification.
  • Predicates, Smehdicates
    The point of giving all that historical detail is to illustrate that math, especially math, is just so because it's how we made it just sofdrake

    I strongly disagree with this statement. I actually think the opposite is true: by paying attention to the history of mathematical concepts, we see that they emerged not because we "made it just so", but were rather forced on us by the nature of the entities in question and the problems surrounding them. To my mind, mathematical entities form natural kinds, and the most fruitful mathematical definitions (such as continuity) capture the structure of those kinds.
  • Predicates, Smehdicates
    But what role does the type play in determining whether two given inscriptions are (intended to be) tokens of the same type? We can imagine an effective procedure for comparing two inscriptions directly and determining whether (following some community standard, ignoring differences of typeface, for instance) they're intended to be the same.

    Type plays no role in the comparison. How could it? If it were necessary instead to compare each inscription to an abstract type, rather than comparing them directly to each other, then we would seem to need some meta-type to enable comparing the given token to a type. We'll never get there.
    Srap Tasmaner

    I think you're misreading the problem here. The point is not that the type itself explains the relationship between its tokens. Rather, it is that in order to explain the relationship between the tokens, we will generally have recourse to some type, though not necessarily the type of which they are tokens. More generally, we will need at the very least to invoke some properties (of the token, of the linguistic system, of the community, whatever) to explain this relationship, so I don't think a nominalist about properties can get around this challenge.
  • Predicates, Smehdicates
    I suppose I don't quite understand how (S) is a property, at least in the sense that 'redness' or 'triangularity' might be a property. I honestly mean this out of sheer ignorance - what is the subject of that property (the iterability is confusing me! - I'm much better at natural language than math)? How do you make sense of (S) as a property?StreetlightX

    The subject of the property in question ("being continuous at point a") is a (real valued) function, say f. Continuity at a point is a property of functions, and a rather important one at that. Be that as it may, my point was the following. Jumblese can handle simple properties well, since these are generally represented by atomic predicates. But there are other kinds of properties, complex properties, which are not represented by atomic predicates. A more prosaic example may be the following: x is soluble iff, if x is put into water, then x dissolves (this is a very rough characterization of solubility---a more exact approach would need to use counterfactuals and ceteris paribus clauses, but bear with me for the moment). The property of being soluble is not an atomic property, but a complex one, since it is structured. My point is: Jumblese cannot capture this internal structure of the property.
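    To make the contrast vivid, here are the two properties written out in full (a sketch; "Cont", "W", and "D" are just illustrative labels, not anyone's official notation):

```latex
% f is continuous at the point a
\mathrm{Cont}(f, a) \;\iff\; \forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x \; \big( |x - a| < \delta \rightarrow |f(x) - f(a)| < \varepsilon \big)

% Solubility, on the rough characterization above
% (W = "is put into water", D = "dissolves")
\mathrm{Soluble}(x) \;\iff\; \big( W(x) \rightarrow D(x) \big)
```

    The quantifiers and the conditional here are internal to the property itself; a positional convention that merely marks which objects have the property has no obvious way of depicting that internal structure.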

    Surely it's the fact of it being asserted to be so. This might be a disappointing answer but I really think that's it: consider the case of one misspelling (as I used to do alot!) rogue and rouge, where I meant to say rouge. Where someone to call me out on it, where it's obvious that I mean to use rouge (esp. in the context of 'rogue [sic] is red'), my immediate response would be something like 'oh shut up you pedant and deal with the point at hand'.StreetlightX

    But in the example at hand, you're grounding the type-token relation on the intention of the speaker/writer to use the relevant type. So the type is (again) explanatorily prior to the token, and in fact it must exist, for how else could you intend to write it? You can't intend to write a non-existent thing!

    This is why I've insisted so strongly upon the fact of exemplarity at work here: examples are neither tokens nor types, but are, as it were, tokens that assert their own typicality. To put it in a strong manner: everything is exemplary: the very capacity to assert something as token or type is parasitic or derivative upon exemplifying a token as a token or type as type (each typically in relation to each other of course...). This is why I particularly like Sellars' example of { 'und' (in German) means 'and' } where the first thing he points out is that 'and' is obviously not functioning here as a sentential connective, before going on to point out that this sentence "doesn't merely tell us that 'und' and 'and' have the same meaning; it in some sense gives the meaning." In truth I think that even thinking in terms of tokens and types as anything other than useful shorthand or tools for conceptual organisation is philosophically dangerous and should be kept to a minimum.StreetlightX

    Well, I'm of the complete opposite opinion: I think types, and abstract objects more generally, are indispensable and that nominalism is a bankrupt approach.
  • Help with logic exam:
    It'd be nice if you provided the specific rules that you are working with. This may sound surprising, but there are a lot of natural deduction systems out there, each one with its own idiosyncrasies, so it'll be hard to help without seeing the specific system with which you are working. Some examples of what you are attempting to do would be nice, too. That said, I'd say that at least with regards to question 2, the answer is probably yes.
  • Predicates, Smehdicates
    I'm apologize for the density of this presentation, but I've tried to fit a theory of meaning in three paragraphs! The point of all this wrangling is that for Sellars, language already functions in the way that jumbelese does: it is already free from commitment to properties. Jumbelse just makes it easier to 'see'.StreetlightX

    And this is where we disagree. As I said, Jumblese can (perhaps) handle simple translations for atomic properties, but what about logically complex properties? How do you represent the property "for every e>0 there is d>0 such that for every x, if |x-a| < d then |f(x) - f(a)| < e"? That is, how do you represent the iterated quantifiers and the implication sign?

    In other words, in the case of "a is red", it may be the case that, as you say, the predicate is not doing any job. But what about in cases in which it is doing a job, such as in the case above? Or even in cases in which we quantify over properties, such as (to use a random internet example) "Alice is everything that Bill hopes to be"?

    The purpose of all this wrangling is to show that what are being correlated here are particular linguistic tokenings rather than abstract linguistic types. There is, in other words, a kind of short-circuit between types and tokens, insofar as meaning is a matter of illustrating functions 'all the way down'. At every point you simply have exemplars. Functions are exemplified by other functions, and at no point do you reach a 'hard-core' of 'fact'; instead you simply have (particular) linguistic objects correlated to other (particular) linguistic objects and whose rules of correlation are themselves functions of uniformities of behaviour by language using animals.StreetlightX

    But here you end up with the problem I pointed out before. What is it that makes it the case that this particular inscription is a token of, say, rouge? What binds all the tokens of rouge together in a single class? Notice that, if you are a nominalist about properties, you can't even invoke any property that all the tokens share; are we supposed to just take that as a brute fact?
  • Predicates, Smehdicates
    But language is a much messier affair than this. In a language such as English, there is a considerable range of sounds that count as a given phoneme. Not just anything, but also not all that sharply circumscribed because we change what will count based on context. There are allophones allowable when singing that would seem strange in everyday conversation. Toddlers utter sentences in which the prosody is right and just a couple of the phonemes are close to standard, and that counts. You use different allophones when whispering or screaming, and so on.Srap Tasmaner

    Exactly, that was part of my point. It is because language is this messy that reference to types is unavoidable. A typical nominalist strategy for doing away with types is to employ a resemblance relation between the tokens, and to explain reference to types as really being reference to a resemblance between particulars. As you yourself noted, however, this won't work, because many tokens of the type bear little resemblance to each other. So the nominalist's explanation is implausible.

    Incidentally, I don't disagree with anything else in your post.
  • Predicates, Smehdicates
    Similarly, with respect to your point, I think the rejoinder will be: we need 'sentences', yes, but not sentencehood; 'above', yes, but not above-ness. Having winnowed away what he calls abstract singular terms (anything which can have a suffix like '-ity,' ' -hood,' ' -ness,' ' -dom,' and '-cy'), the challenge is then to show that we can treat 'sentences' and 'above' in the nominalistic manner so outlined in the OP. That is, he answers the dangling question above in the negative: no, we don't need expressions which are names of non-particulars: we need expressions which are linguistic objects, which cannot in turn be treated as attributes with ontological standing. That types must be admitted is unavoidable, but - to put it cheekily - what kind of types?StreetlightX

    No, that's not what I had in mind at all. But let me rephrase the whole argument in order to make it clearer.

    The first thing to notice is that the existence of "Jumblese" is not an argument for nominalism, since the mere existence of a language with certain features can't be considered decisive when choosing between rival ontologies, as, presumably, these rival ontologists could all craft a language suited to their needs (so, e.g., someone who did not believe in objects, but only in properties, could use Quine's predicate functor language). Hence, there must be a further argument as to why we should think that Jumblese is better suited to our ontological needs.

    One such argument is that its (supposed) ontological commitments are more parsimonious. So theories couched in Jumblese would be more "economical" than theories couched in a more expressive language, say the first-order predicate calculus. Of course, parsimony, as a virtue, does not trump all other considerations. We want to, say, do science in such a language. Is that possible at all? I contend that it's not.

    For one, notice that Jumblese is very good at translating atomic predicates. So, given a (finite?) list of atomic predicates, we could in principle introduce a new graphical convention to express each predicate in the list. But we don't want to work with only atomic predicates: there are also predicates defined by complex logical expressions. A typical example is continuity: a function f is continuous at a point b iff for every e>0, there is d>0 such that for every x in the domain of f, if |x-b|<d, then |f(x) - f(b)| < e. Notice the triple quantification and the use of the conditional in the definition of this predicate; how can Jumblese deal with these? Of course, you could introduce a graphical convention corresponding to the symbol, say, "continuous", but the graphical convention would be unable to express the logical relationships that are crucial in the definition of the predicate. And this is just a rather simple mathematical predicate: how can we do chemistry, biology, etc., in Jumblese?
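    Written out in standard epsilon-delta notation, the quantifier structure of the definition is this (nothing here is specific to any particular formalization):

```latex
\forall \varepsilon > 0 \;\exists \delta > 0 \;\forall x \in \mathrm{dom}(f)\, \bigl(\, |x - b| < \delta \rightarrow |f(x) - f(b)| < \varepsilon \,\bigr)
```

    Any Jumblese rendering of "continuous" would have to encode this whole nested structure of alternating quantifiers and a conditional, not merely attach a graphical style to the predicate's arguments.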

    This is a typical challenge to the defender of a nominalist language: to show that it is adequate not just for run-of-the-mill atomic predicates or sentences, but that it can also capture complex scientific language. As far as I know, the only philosopher who took that challenge seriously was Hartry Field, in Science Without Numbers. There, he showed how to nominalize Newtonian mechanics; his nominalization involved an ontology of continuum-many spatio-temporal points and many logical resources, as well as "platonistic" resources in proving that the non-nominalistic theory is conservative over his nominalist one. I'm not entirely sure what the advantage is of trading an ontology of abstract objects and properties for continuum-many spatio-temporal points. Regardless, notice that he was able to do that solely for Newtonian mechanics. He didn't even attempt to face the challenge of developing a similar nominalization for quantum mechanics, and as far as I know most people are skeptical that this is possible.

    Of course, one can then simply say: well, so much the worse for our scientific theories! If they do not employ a nominalistically acceptable language (if they are not formulated in Jumblese), then they do not accurately reflect our preferred ontology and, in spite of their usefulness, they should be taken with a grain of salt, to say the least. But then comes my argument: the nominalist herself is not employing a nominalistically acceptable language, when, e.g., formulating Jumblese. Notice that Sellars, for instance, talks about sentence types when setting up the translation of "X is above Y" in Jumblese---presumably, he is not talking about a translation of a particular inscription of "X is above Y", but a translation of the sentence itself. And that is my point: although perhaps Jumblese itself is free from commitment to properties, our metalanguage when talking about Jumblese is not. So the only way of avoiding reference to types is to use Jumblese itself as its own metalanguage. But I don't see anyone doing this.
  • Predicates, Smehdicates
    The argument is not that because we can dispense with predicates, they have no ontological standing. It's more along the lines of, given that we can dispense with predicates (as per demonstrated), what kind of ontology can we forge on this basis? The motivations for doing so are not internal to the argument; rather, the demonstration functions within a larger project in which the goal is to construct a naturalist ontology and with it, a naturalist theory of representation. One of the 'fallouts' of this desideratum is that such a theory must be a nominalist one, with respect to attributes like 'redness' or 'triangularity'.StreetlightX

    I see. But then we have another problem. In the passages, Sellars talks about inscriptions of the X above Y variety. How are we to interpret such inscriptions? Are they sentence tokens? If so, they must be instances of sentence types. And as soon as we admit types, then nominalism is out. So the problem is less with "redness" or "triangularity" and more with "Xness", "Yness" and "aboveness".

    Notice that it won't do to just say that there are no types, just inscriptions to which we are regularly disposed to react in a certain behavioristically specified way, since there must be a regularity for us to respond to. That is, there must be something that accounts for the similarity of the various X inscriptions, and this can't be just our behavior, as we are dispositionally inclined to regularly react to Xs because they are similar, not the other way around.

    There's a particular asymmetry at work which I tried to detail here.StreetlightX

    I don't think that post responds to the concern I raised, which is connected to your reply quoted above. I thought there was an argument from the translation to the acceptability of nominalism. But now it seems that the argument is from nominalism to the acceptability of the language. Of course, then the problem is about how to even set up a nominalistically acceptable language; I don't think it can be done, because talk of (e.g. sentence) types is unavoidable here.
  • Predicates, Smehdicates
    I don't understand this argument. We can also devise languages (see Quine's predicate functor logic) which dispense entirely with individual variables or constants and thus have only predicates. Does that show that objects are dispensable and only properties have ontological standing? Perhaps we can even "dispense" with both written and oral language and communicate using only a rudimentary sign language. Does that mean that neither properties nor objects have any ontological standing?

    Notice also that the translation goes both ways: we can "translate" 'X is larger than Y' with your spatial arrangement, but we can also "translate" your spatial arrangement by 'X is larger than Y'. So why is one translation preferable to the other?
  • Mathematical Logic Graduate Programs Requirements
    Abstract algebra is IMO prerequisite because it teaches you to handle mathematical abstractions that at first glance don't make sense or that seem unmotivated.fishfry

    I'd say more than that: current research in mathematical logic is for the most part intimately connected with abstract algebra, algebraic geometry, and combinatorics more generally (including especially matroids). In model theory, for instance, most current research focuses on classifying algebraic structures (Shelah's classification program), so if you don't have a thorough understanding of the structures themselves, the discussion is impenetrable. Moreover, many concepts and techniques employed in this area are inspired by their algebraic analogues (e.g. algebraic closure, Zariski geometries and Zariski topology, etc.). So a certain familiarity with them is required. Even areas apparently more remote from algebraic concerns benefit from this background: e.g. there is a lot going on in proof theory that requires a good knowledge of category theory.

    Next, I'd also add to the mathematical prerequisites at least some knowledge of topology. It's incredible how often topological notions are employed in basically every area of logic. If you have already taken a course on basic topology (which covered, say, metric spaces, compactness, separation axioms, etc.), I strongly recommend reading Steven Vickers's Topology via Logic. Vickers motivates the traditional topological axioms (a set of open sets must contain the whole underlying set and the empty set, and be closed under arbitrary unions and finite intersections) by noting that they are essentially axiomatizing the notion of a semi-decidable property. This is crucial for understanding why "topologizing" things is such a useful technique in logic (and in general!).
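    For a finite carrier set, the axioms Vickers motivates can be checked mechanically; here is a small Python sketch (the function name and setup are my own, purely illustrative):

```python
from itertools import combinations

def is_topology(X, opens):
    """Check the topology axioms on a finite collection of subsets of X."""
    opens = {frozenset(s) for s in opens}
    # The empty set and the whole space must be open.
    if frozenset() not in opens or frozenset(X) not in opens:
        return False
    # For a finite collection, closure under pairwise unions and
    # intersections implies closure under arbitrary (finite) ones.
    for a, b in combinations(opens, 2):
        if a | b not in opens or a & b not in opens:
            return False
    return True
```

    For instance, {∅, {1}, {1,2}, {1,2,3}} passes, while {∅, {1}, {2}, {1,2,3}} fails, since the union {1} ∪ {2} = {1,2} is missing.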
  • A very basic take on Godel's Incompleteness Theorem


    You forgot to add that: T is consistent and G is a sentence in the vocabulary of T.
  • "All statements are false" is NOT false!?!
    This argument fails when applied to a natural language, because there is no precise definition for <is a well-formed formula>. Only our intuition can tell us what do we consider a WFF. There isn't a fixed set of relation symbols either.Meta

    I don't agree with the premises here, but even granting them, that's irrelevant: you originally claimed that the op's argument was not formalizable, since it contained self-referential statements. So the claim was about the expressive powers of formal languages, not natural languages. The Carnap-Gödel diagonal lemma literally disproves this assertion: if T is a nice arithmetic theory, then for any one-place predicate of the language, we can form a sentence which says of itself that it has that predicate.

    Any anyways we are not talking about real self-reference just some kind of reflection. Let's modify the statement of the Berry paradox:
    "The definition with the least Godel number not definable in fewer than 20 words."
    This is also paradoxical.
    Meta

    This just shows that "definable" is not, well, definable in a formal language (even this, strictly speaking, is not true: we can either define definability for a given language in a meta-language with richer expressive resources, or, sometimes, make do with "local" definability---e.g. the constructible sets in ZFC). The problem is with the concept of definability, not with self-reference.

    Or "The statement with the least Godel number that does not contain the first letter of the alphabet."
    This can be a paradox.
    Meta

    But it's not, and is perfectly definable in, say, Robinson's Arithmetic.
  • "All statements are false" is NOT false!?!


    There is no need to use higher-order logic. Let T be a first-order theory containing enough arithmetic to capture the primitive recursive functions (you can let T be primitive recursive arithmetic, Robinson's Arithmetic, etc.). Then T has enough resources to code its own syntax: in particular, it has enough resources to express the following notions: <is a (well-formed) term>, <is a well-formed formula>, <is a sentence>, <is an atomic formula>, <is a theorem>, <is Q> (where Q is any particular expression of the language: it can be part of the alphabet, a term, a formula, anything), etc. The basic idea here is that we can use numbers to stand as codes for things, for example people (ID cards), products (bar codes), strings of symbols (word processing), etc. Anyway, the important point is that using any reasonable coding scheme, T can prove the following lemma:
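    To make the coding idea concrete, here is a toy Gödel numbering in Python (my own illustrative scheme, not any official one: the i-th symbol of a string contributes the i-th prime raised to that symbol's code):

```python
def primes_upto(count):
    """First `count` primes, by trial division (fine for a sketch)."""
    ps = []
    k = 2
    while len(ps) < count:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

def encode(string, alphabet):
    """Gödel-number a string as a product of prime powers.
    Symbol codes start at 1 so every position leaves a recoverable exponent."""
    codes = {s: i + 1 for i, s in enumerate(alphabet)}
    n = 1
    for p, s in zip(primes_upto(len(string)), string):
        n *= p ** codes[s]
    return n

def decode(n, alphabet):
    """Recover the string by unique factorization, prime by prime."""
    out = []
    k = 2
    while n > 1:
        if n % k == 0:
            e = 0
            while n % k == 0:
                n //= k
                e += 1
            out.append(alphabet[e - 1])
        k += 1
    return "".join(out)
```

    For example, with the alphabet "0=S+" the string "0=0" gets the code 2^1 · 3^2 · 5^1 = 90, and unique factorization guarantees decoding always succeeds.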

    Carnap-Gödel Diagonalization Lemma: Let Q(x) be any formula of the language with one free variable. Then there is a sentence D such that T proves "D <-> Q("D")", where "D" is the number coding D.

    Notice that the above implies that, for any property Q, there is a sentence D which says of itself that it has the property Q (notice also that this immediately implies that, if the theory is consistent, truth is not definable, otherwise we would have a version of the liar paradox---again, which is not to say we cannot introduce a predicate Tr by axiomatic stipulation). In particular, if we order the alphabet of the language and construct a predicate F(x) such that a (code of a) sentence satisfies F(x) iff it contains the first letter of the alphabet (a tedious, but entirely routine exercise, once you get the hang of the coding machinery), there will be a sentence D which says of itself that it contains the first letter of the alphabet. So that particular sentence can be given a formal definition.
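    The substitution trick behind the lemma can be mimicked in miniature with a quine-style construction (a sketch of the idea only; the arithmetic version replaces string substitution with a primitive recursive function on codes):

```python
# A text that makes a claim about its own complete source: the %r slot
# lets the template quote itself, just as the diagonal lemma lets a
# formula contain its own code.
src = 'src = %r\nprogram = src %% src\nhas_a = "a" in program'
program = src % src     # reconstructs the three core lines (sans comments)
has_a = "a" in program  # the self-referential claim: "my source contains 'a'"
```

    Here `program` is exactly the comment-free source that produced it, so `has_a` really is a statement a text makes about itself.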
  • "All statements are false" is NOT false!?!


    I'm not sure I understand. What do you mean by "can't speak about all statements in a definition of a statement"? In any case, it seems to me false; in defining the predicate "S", we will presumably use universal quantifiers, e.g. "(x) (Sx <--> ...)", where "..." is the definition of "S". But then, this quantifier will range over all statements (and more besides), so we will be in a sense "speaking about" all statements. If you mean that we can't mention all statements in the definition of a particular statement, that seems false too. We could define a new statement, say Q, such that Q is "All statements imply themselves", which is presumably true.

    As for your point regarding taking statements naively, I'm unconvinced. Formal linguistics tries to capture what is meant by a "naive" statement, and it's not unreasonable to suppose that in such theories we quantify over all such statements. There may be other problems lurking in the background, though: if you construe statement broadly enough (naively?), consider a statement about every statement which is not about itself. Is it about itself?
  • "All statements are false" is NOT false!?!


    You are right that we can't define a truth predicate for the language in question in the language itself---that's Tarski's theorem (though you can define it in a metalanguage, and one can then restrict the quantifier of the op to "All statements of the object language"; admittedly, this would make his argument evaporate for rather trivial reasons). But there are other options, namely to introduce a new predicate, say "Tr", into the language and try to fix its extension by introducing new axioms, say "Tr("Q") <--> Q" for every sentence Q. Of course, you need to be careful if you want both to preserve classical logic and avoid inconsistencies, and things can get complicated rather quickly here, especially considering iterations ("Tr("Tr("Q")")", for instance). But there are some reasonable ways of doing it.
  • "All statements are false" is NOT false!?!
    2. Since we can always go from "All x are y" (x may stand for 1,2,3 here) to for instance "All x are y and 2 are y" without changing the truth value, it must hold that A is logically equivalent to (A') "All statements are false and this statement is false", so A <-> A'.Pippen

    As I mentioned in a private message, I'm not entirely sure this step holds. What is the reference of "this" above? If it is "this statement is false" (taking it to have small scope), then your proposed rule would take us from "All statements are false" to ""This statement is false" is false". But the latter one is false, not truth-valueless. So the whole conjunction is false.
  • "All statements are false" is NOT false!?!
    Because Your arguments can not be formalized. You can't speak about "all statements" formally.Meta

    That's incorrect. Assuming a modicum of arithmetic, it's possible to formalize the syntax of first-order logic inside a given mathematical theory (say, primitive recursive arithmetic). This technique is known as the arithmetization of syntax and is due to Gödel. Given one such formalization, we can define a predicate S(x) which is true of all and only the (codes of) sentences of the language. So a phrase such as "all statements" would be regimented as "for every x, if S(x), then ... ". Notice that this allows for self-reference, by employing the Carnap-Gödel diagonalization lemma, which is an element in the original proof of Gödel's incompleteness theorem (though it's not essential for proving the result).
  • The society depicted in Kubrick's Eyes wide shut


    I don't think the two issues are separate. For one may hold that Kubrick is not depicting (directly or indirectly) how members of the elite class conduct their business, but rather how Bill fantasizes that members of the elite class conduct their business. So the movie would not involve the elite's business in any way, but rather involve Bill's fantasies about the elite, which would, in my view, make the movie less about social critique (unless indirectly from the depiction of Bill himself) than about the psychology of Bill and his relationship to Alice('s desire).
  • The society depicted in Kubrick's Eyes wide shut


    Exactly. The op assumes that the events depicted in the movie (e.g. the orgy scene) are depicted as real, but I don't think this is obvious. For instance, Kubrick hints that the perspective from which we see the movie---Bill's perspective---is heavily distorted---notice how basically everyone acts towards him as if they were in love/flirting---and, well, the title of the movie is Eyes Wide Shut, which points both to a more dreamy interpretation and to the fact that the main character is shutting his eyes towards things (such as his wife's desires). So I don't think the movie is much about a dystopic elite, so much as about Bill's inner conflicts.
  • The Principle of Bivalence and the Law of the Excluded Middle. Please help me understand
    Bold emphasis mine.

    So it is excluded ''that neither Socrates is mortal nor Socrates is not mortal''

    Let P = Socrates is mortal

    That means it is excluded that (P & ~P). That's the LNC: ~(P & ~P).

    So, LEM isn't the inclusive OR at all.
    TheMadFool

    First, the disjunction is defined as being inclusive. So any formula with it as its main connective will be inclusive, by definition. So, e.g., p v (q & ~q) is inclusive, even if (classically) both disjuncts can't be true at the same time.

    Technically, what is excluded is ~(p v ~p), that is, the negation of the excluded middle, so what you have is ~~(p v ~p). But this is only equivalent to LNC if you assume classical logic (and then it will also be equivalent to infinitely many other tautologies, such as p -> (q -> p), so this equivalence is completely uninteresting). In order to see if they are indeed equivalent, you need to see what happens when you go to other logics. Again, in intuitionist logic, you have LNC, but not LEM, so LNC does not imply LEM. Conversely, in a paraconsistent logic such as the logic of paradox, you can have LEM, but not LNC. So LEM does not imply LNC either.
  • The Principle of Bivalence and the Law of the Excluded Middle. Please help me understand
    So, if LEM doesn't exclude [P & ~P] what is this ''middle'' that's being ''excluded''?TheMadFool

    I don't think it's helpful to concentrate so much on the name of the principle, in this case. The name comes from a time when the separation of syntax from semantics was not so clear, so it is inevitable that there will be some confusion attached to it. In any case, historically, the idea has been that, for any statement p, either p is true or ~p is true, and there is no third or middle option. You could perhaps think of p and ~p as "poles" and LEM saying that there is no intermediary position between them (indeed, we still refer to the polarity of a proposition, i.e. whether it is an assertion or denial).

    That is to say LEM and LNC are logically equivalent.TheMadFool

    Assuming classical logic. Note that you used both double negation and DeMorgan in your proofs; intuitionists, for instance, deny both these principles (well, they accept weaker forms of them which will not by themselves be able to prove this equivalence). As an exercise, try proving the equivalence without using these or r.a.a (which is also rejected by intuitionists).

    EDIT: Note that, if you assume classical logic, any tautology is equivalent to any other (they are always true), so, assuming classical logic, of course LEM is equivalent to LNC. It is also equivalent to p -> (q -> p), and infinitely other formulas. The question is if we can prove them equivalent without assuming classical logic.

    So, what I can't get is what you mean when to my question ''Is ''the apple is red'' AND ''the apple is not red'' also excluded [by LEM]?'' you said ''No, since disjunction is inclusive.''???!!!TheMadFool

    I'm not sure what the problem is. LEM literally says "either p or ~p", not "either p or ~p, but not both". That is, the disjunction there is inclusive, not exclusive.
  • The Principle of Bivalence and the Law of the Excluded Middle. Please help me understand
    Is ''the apple is red'' AND ''the apple is not red'' also excluded [by LEM]?TheMadFool

    No, since disjunction is inclusive. A few posts ago I described a system in which you have a truth-value glut (i.e. a proposition being true-and-false), but in which LEM was upheld. Notice that, strictly speaking, from a contemporary point of view, the so-called law of excluded middle just says that, for any formula p, the string "p v ~p" is acceptable in the system. Note also that the disjunction relates p to not-p; it does not relate p to its falsity, or anything of the sort. This gap can be exploited to give a different semantics to negation, in such a way that (what I have called) weak bivalence is upheld.

    1. If it is then why? Also raises another issue viz. why have the law of noncontradiction? It seems to be a corollary of LEM.TheMadFool

    Actually, the situation is probably the reverse. Assuming classical principles, such as double negation and reductio ad absurdum, it's possible to prove LEM from LNC. Of course, those are precisely the principles questioned by intuitionists...
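    Spelled out in natural-deduction style, the classical derivation of LEM runs roughly as follows (a sketch; everything up to line 8 is intuitionistically acceptable, and only the final double-negation step is distinctively classical):

```latex
\begin{array}{lll}
1. & \neg(p \lor \neg p) & \text{assumption, for reductio} \\
2. & \quad p & \text{assumption} \\
3. & \quad p \lor \neg p & \lor\text{-introduction, 2} \\
4. & \quad \bot & \text{contradiction, 1, 3} \\
5. & \neg p & \text{reductio, 2--4} \\
6. & p \lor \neg p & \lor\text{-introduction, 5} \\
7. & \bot & \text{contradiction, 1, 6} \\
8. & \neg\neg(p \lor \neg p) & \text{reductio, 1--7} \\
9. & p \lor \neg p & \text{double negation elimination, 8}
\end{array}
```

    An intuitionist can follow the proof as far as line 8: ~~(p v ~p) is intuitionistically provable; it is line 9 that is blocked.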

    2. If it isn't then it leads us to a contradiction and also, why?TheMadFool

    Well, dialetheists can live well with contradictions, since they also drop ex falso quodlibet, so contradictions don't trivialize the system.
  • The Principle of Bivalence and the Law of the Excluded Middle. Please help me understand
    Is this a bad example?:sTheMadFool

    Why is it a bad example?

    In any case, it helps to distinguish semantics, which is how we interpret our system, from syntax, which is concerned with which expressions are allowable in the system. Bivalence is a matter of semantics (how are we going to interpret the system's formulas? Are we going to allow only two truth-values? Is every formula required to have a truth-value?), whereas excluded middle is a matter of syntax (can we show that, given an initial set of sentences---the axioms---, plus certain rules of transformation---the rules of inference---, we are able, for every formula p, to reach "p v ~p"?). Of course, these two dimensions are not wholly independent, in the sense that we usually want there to be a certain parallelism between the two, but there is a certain amount of freedom in how to enact this parallelism.
  • The Principle of Bivalence and the Law of the Excluded Middle. Please help me understand


    It may be useful to introduce some distinctions. Let's call the principle of weak bivalence the idea that there are only two truth-values, and the principle of strong bivalence the idea that every sentence must have exactly one truth-value (so it is either true or false). Then most intuitionists accept weak bivalence (there is no third truth-value), but reject strong bivalence. Note that rejecting strong bivalence is not the same as accepting that there is a sentence which doesn't have a truth-value (or that has both, or a third one), since intuitionists don't accept the equivalence between "not every" and "there is an x such that not...".
  • The Principle of Bivalence and the Law of the Excluded Middle. Please help me understand
    Yes, as far as I know, most readings of bivalence include an explicit clause such as "but not both". Also, note that, in my formalization, B can be (formally) treated as a third truth-value, though its interpretation (for a dialetheist) is the one I gave.
  • The Principle of Bivalence and the Law of the Excluded Middle. Please help me understand
    Here's an easier example than fuzzy logic. Suppose some statements are true (T), some statements are false (F), and some statements are both true and false (B, also called a truth-value glut)---so bivalence does not work. If statements have their usual truth-values, then negation and disjunction are defined as in the classical case. If a statement is both true and false, then its negation is also both true and false. If both disjuncts of a disjunction are both true and false, then the disjunction is both true and false; if one is true and the other both true and false, the disjunction is true; and if one is false and the other both true and false, then the disjunction is both true and false. Also, suppose that we are less interested in false statements than in not true statements (so both true and false is a designated value). This is a paraconsistent system, by the way, and some who defend dialetheism employ it.

    In that case, we have a failure of bivalence but we still have excluded middle: if a statement has a classic truth-value, then the disjunction is classically valid and hence true. If the statement is both true and false, then the disjunction will be both true and false, so true (or, at least, valid).
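    The tables just described match Priest's Logic of Paradox (LP); a small Python sketch makes the LEM check mechanical (the names are mine, purely illustrative):

```python
# Three values: T (true), B (both true and false), F (false);
# both T and B are designated, i.e. count as "not false".
T, B, F = "T", "B", "F"

NOT = {T: F, B: B, F: T}

def OR(a, b):
    # Disjunction takes the "truer" of its disjuncts: T > B > F.
    order = {T: 2, B: 1, F: 0}
    return max(a, b, key=lambda v: order[v])

def designated(v):
    return v in (T, B)

# Excluded middle: p v ~p is designated for every value of p,
# even though bivalence fails (B is a glut).
lem = all(designated(OR(v, NOT[v])) for v in (T, B, F))
```

    Every row of p v ~p comes out designated (T or B), so excluded middle survives even though bivalence does not.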
  • The Principle of Bivalence and the Law of the Excluded Middle. Please help me understand
    Yes, that's it. On the other direction, note that intuitionists generally uphold (weak) bivalence, but they reject excluded middle.
  • Definition of arithmetic truth
    However I was reading a book today about logic and I faced the same problem again.
    What if I want to define existential truth in ZFC or in a more powerful system? ZFC provability is not enough anymore as the set of provable sentences of ZFC form a real subset of the set of true sentences of ZFC. How do we define existential truth of ZFC? I think that we must use a metaphysical existence concept.
    Meta

    There are a couple of options here. One way is to employ a weaker meta-theory, say primitive recursive arithmetic, and simply talk about what ZFC can prove. In fact, many mathematicians and logicians (such as Feferman) consider questions which are independent from ZFC, such as the continuum hypothesis, to be simply meaningless. So, while we can't formulate a truth predicate for ZFC, we can bite the bullet and say that the extra power of such a predicate would be meaningless anyway.

    Another one would be to exploit something that is a bit bizarre about ZFC. Since ZFC can also code its own syntax, you can use ZFC as a meta-theory for ZFC. Since ZFC is able to formulate the satisfaction relation, you could try to use that to define a local truth predicate for ZFC. In fact, we know that, for every finite set of formulas from ZFC (remember, ZFC has infinitely many axioms), ZFC can prove that they are in a sense true (these are called "reflection principles"). So you could formulate local truth predicates and use these as a proxy for the global truth predicate.
  • Rough sketch of Goedels Theorems
    Because of the First Incompleteness Theorem we know that if S is consistent then G is unprovable in S. Since "G is unprovable in S" is our function G (see above) we can re-formulate that statement as: If S is consistent then G. Now, we "just" formulate this statement in S and we know it's provable in S. Now, we assume we could also prove in S that S is consistent. Then by mp G would follow (and thus be proven) in S which is impossible due to the First Incompleteness Theorem. Because of this contradiction our assumption must have been false.Pippen

    It's very rough, but yes, that's the gist of it. Incidentally, this is a very good bare-bones summary of both theorems.
  • Rough sketch of Goedels Theorems
    So that is a fancier way to say "This statement is false"?BlueBanana

    Technically, to say "This statement is unprovable". But what is remarkable is that the fanciness consists in showing (i) isolating a class of interesting functions (the recursive functions, which would pave the way for the computers you and I are using to communicate), (ii) showing that this class is capturable using simple arithmetic operations, and (iii) showing that truth is not so capturable. That is, Gödel's theorems can be summed up quickly as: (i) the set of theorems is recursively enumerable (i.e. there is an algorithm which tells us whether a given sentence is a theorem) and (ii) by Tarski's theorem, the set of truths is not even arithmetic, let alone recursively enumerable. Thus, the set of truths is not the same as the the set of theorems.