Comments

  • Why did logical positivism fade away?


    A couple of remarks:

    1) The logical positivists were influenced by the Neo-Kantians, but weren't themselves Neo-Kantians. So Neo-Kantian worries do not automatically carry over to them.

    2) Logical positivism also wasn't a completely homogeneous movement, by the way. So whereas strictly epistemological projects may have driven some of its members initially (especially Schlick), such projects were not pursued by all members of the group. In the specific case of Carnap, he later came to regard foundational disputes about what is or is not intelligible as fruitless, since they depend on prior criteria that were not necessarily agreed upon by all parties. That is why he proposed his Principle of Tolerance and suggested replacing traditional philosophy with the logic of science:

    To eliminate this standpoint, together with the pseudo-problems and wearisome controversies which arise as a result of it, is one of the chief tasks of this book. In it, the view will be maintained that we have in every respect complete liberty with regards to the forms of language; that both the forms of construction for sentences and the rules of transformation (...) may be chosen quite arbitrarily. (...) By this method, also, the conflict between divergent points of view on the problem of the foundations of mathematics disappears. For language, in its mathematical form, can be constructed according to the preferences of any one of the points of view represented; so that no question of justification arises at all, but only the question of the syntactical consequences to which one or other choice leads, including the question of non-contradiction.

    And a little later:

    The first attempts to cast the ship of logic off from the terra firma of the classical forms were certainly bold ones, considered from the historical point of view. But they were hampered by the striving after 'correctness'. Now, however, that impediment has been overcome, and before us lies the boundless ocean of unlimited possibilities. (LSL, Preface, p. xv)

    In other words, Carnap is essentially proposing: let a thousand flowers bloom! If you have a proposal for the logic of science or for a new scientific theory, then write it clearly, preferably in a formal or semi-formal system, and we can then assess its usefulness. But there is no sense in trying to decide a priori which forms are acceptable, since, again, such a decision would have to employ a logical framework, and then the question arises about the validity of this framework. Rather, people should be free to employ whatever framework they need, and the validity of the framework is decided not by a theoretical argument, but by pragmatic considerations. Does it achieve its goal? Does it promote human flourishing? And these pragmatic considerations are not guided by rules established once and for all, but by negotiation among the relevant parties.

    3) This moves the debate in a rather different direction. Instead of asking whether something is science or pseudo-science, it asks whether a given theory is a fruitful research program or a degenerating one. This seems (to me) a much more interesting question, and much more amenable to debate.
  • Why did logical positivism fade away?


    Just to be clear, I'm not saying that Popper is irrelevant or whatever. It's just that popular accounts of science tend to portray him as the be-all and end-all of philosophy of science, and in particular portray his falsificationism as enjoying near-consensus, when that is far from the case in the philosophy of science. I mean, maybe it should be the consensus, but as a sociological observation, I don't think it is.



    That is a complex historical question, and one to which I don't have a definite answer. Still, here are some pointers:

    1) First, it is undeniable that the reception of logical positivism in the USA was largely colored by Ayer's Language, Truth and Logic (Scott Soames's very whiggish history of analytic philosophy, Philosophical Analysis in the Twentieth Century, for example, devotes considerable space to Ayer in its narrative). Now, in that work, Ayer gives pride of place to the verification criterion of meaning, and his version of it does suffer from some problems (though he was aware of them and tried successively to refine it). Thus, if all one reads is that work, it is easy to come away with the impression that the movement was largely concerned with the demarcation problem, that Ayer's version of the verificationist criterion was the one proposed by the movement, and that it failed. Since I think more people read Ayer than (say) Carnap or Hempel, it is no wonder this view is still widespread.

    Notice that reading logical positivism through Ayer also has a further deleterious effect, namely that of isolating logical positivism from its historical roots. Ayer presents the movement as being largely a better version of British empiricism, as if Carnap, Schlick, and Neurath were engaged in a research program that went back to Locke, Berkeley, and Hume. But that is a serious distortion: they were much more engaged with the Neo-Kantians (in their various guises), for example, than with British empiricism. This is a problem because it makes one read, e.g., Carnap's Aufbau as an exercise in the phenomenalistic reduction of science to sense data, when it's actually an exercise in uncovering the logical structure of science. That is, unlike the British empiricists, but like the Neo-Kantians, Carnap thought that what is important about science is that it leaves behind its sensory origins and attends to the structure of our experience (in fact, Carnap is clear that he thought the objects of science can all be captured by purely structural descriptions).

    More to the point, one consequence of this is that the verificationist criterion of meaning appears as if it were merely an empiricist weapon against traditional metaphysics, when in fact it was part of a larger program to rationally describe the logic of science (a program that thus had one of its roots in Neo-Kantianism), which in turn was just one branch of a larger political project to promote the unity of science, a goal that Carnap and Neurath in particular saw as advancing the cause of the rational reconstruction of society.

    2) Relatedly, another major factor in the reception of logical empiricism was Quine. And, again, even though he was a close friend of Carnap's, it is undeniable that some of his remarks on Carnap's philosophy are highly misleading, to say the least. This is especially true of the highly cited "Epistemology Naturalized" (an otherwise brilliant essay, by the way), in which Quine also assimilates Carnap's Aufbau to the British empiricist program. Moreover, I think the early Quine simply misread Carnap, confusing his philosophy with that of C. I. Lewis. More specifically, Quine read Carnap as engaged in an epistemological project of explaining the truth of mathematics and logic, appealing either to truth by convention or to analyticity in order to do so. This is true of Lewis, but it is importantly not true of Carnap, who had by then abandoned the old epistemological project and was more interested in a conceptual engineering project of devising new tools for the development of science. Unfortunately, Quine's conflation (particularly acute in "Two Dogmas") was widely circulated, and still today you see people complaining that Carnap's distinction cannot carry the "epistemological" burden he supposedly imposed on it, when the truth is that Carnap was simply not interested in epistemology anymore (part of the problem here was that Quine's German apparently wasn't all that good when he read Carnap; moreover, as he often said, he read the Logical Syntax as it "came out of Ina's [Carnap's wife] typewriter", which means that he most likely read the first version of LSL, one that did not contain Carnap's Principle of Tolerance).

    Anyway, the net result is that most people read Carnap as engaged first in a reductionist project along the lines of British empiricism, and then as engaged in an epistemological project to certify the credentials of mathematics and logic. In both cases, we have a picture of Carnap as engaged in a broadly foundationalist project which tries, first, to draw a clear line between science and metaphysics, and, second, to show that this line does not exclude mathematics and logic. The verificationist criterion then emerges as a natural solution to both problems. Statements are divided into analytic and synthetic. The analytic ones are true by convention or definition, whereas the synthetic statements are those which have empirical consequences. This provides the demarcation line---metaphysical statements are neither true (nor false) by convention nor possessed of empirical consequences, and are therefore meaningless---and also solves the problem of mathematical knowledge (it is analytic). Again, this may be a fair depiction of Ayer's (and perhaps C. I. Lewis's) philosophy, but not of Carnap's.

    In short, although much more needs to be said about this, I definitely think that the reception of logical positivism was influenced by Ayer and Quine, and that this had the effect of obscuring the main contributions of the movement.



    There are many reasons for that, but the main one seems to be that no adequate criterion has been forthcoming. Moreover, much of philosophy of science has turned to more concrete matters, being more interested in how science is actually developed and justified than in a priori pronouncements about what is legitimate or not. In other words, that particular line of research did not prove very fruitful, I think.



    Yes, it is somewhat fashionable nowadays to associate scientism with (covertly) right-wing ideologies, but, historically at least, that was simply not the case. Carnap and Neurath were firmly on the left, and even the more conservative members of the Vienna Circle were mostly progressives (certainly by today's standards). In fact, one interesting line of research today is whether Horkheimer, and the early Frankfurt School more generally, could be considered an ally of the logical positivists against, e.g., Heidegger.
  • Why did logical positivism fade away?
    First, I would like to dispute that "fallibilism" is any better a criterion of significance than verificationism, or even that it is mainstream today. It is true that most popular accounts of the scientific method mention Popper in this regard, but these accounts do not reflect mainstream thinking in the philosophy of science. If anything, mainstream philosophy of science today has largely abandoned the search for criteria of demarcation, being more interested either in specific questions regarding specific sciences (e.g. what is the correct interpretation of QM), in what makes a scientific research program fruitful (following Lakatos), or else in general questions about what constitutes a good scientific explanation (cf. the work of Nancy Cartwright, Wesley Salmon, and others).

    As for logical positivism and its twilight, three historical remarks:

    1) It's important to note that the movement was born in the very specific European context of the inter-war period, and that, in the hands of Carnap and Neurath, it had a very specific political dimension. Carnap's major work of the period was called Der logische Aufbau der Welt, which better translates to The Logical Reconstruction of the World. This is relevant, since this title alludes not only to Carnap's rational reconstruction procedure in the book (i.e. reconstructing the world of experience out of a slim conceptual basis), but also, and more importantly, to the rational reconstruction of a society that had fallen apart during the First World War. In other words, this title was carefully chosen by Carnap to signal also his alliance with a broader political movement that aimed at bringing about a more rational and just society (which for Carnap meant some form of socialism). As he himself puts it in the preface to the work:

    We feel that there is an inner kinship between the attitude on which our philosophical work is founded and the intellectual attitude which presently manifests itself in entirely different walks of life; we feel the orientation in artistic movements, especially in architecture, and in movements which strive for meaningful forms of personal and collective life, of education, and of external organization in general. We feel all around us the same basic orientation, the same style of thinking and doing. It is an orientation which demands clarity everywhere, but which realizes that the fabric of life can never quite be comprehended. It makes us pay careful attention to detail and at the same time recognizes the great lines that run through the whole. It is an orientation which acknowledges the bonds that tie men together, but at the same time strives for the free development of the individual. Our work is carried by the faith this attitude will win the future. (Carnap, Preface, p. xviii)

    Note the reference to an "intellectual attitude which presently manifests itself in entirely different walks of life", in particular the mention of architecture. Carnap is here referring, among other things, to the Bauhaus movement, which had close ties to the logical positivists (for more on this connection, cf. Peter Galison's work). This makes clear that Carnap and Neurath did not think of their work as just some narrowly technical philosophy of science, but rather as a contribution to a whole new way of life. It also makes clear, e.g., the nature of his opposition to Heidegger: more than a philosophical opposition, it was a political one. As he puts it at the beginning of the paragraph quoted above:

    We do not deceive ourselves about the fact that movements in metaphysical philosophy and religion which are critical of such an orientation have again become very influential of late. Whence then our confidence that our call for clarity, for a science that is free from metaphysics, will be heard? It stems from the knowledge, or to put it somewhat more carefully, from the belief that these opposing powers belong to the past. (ibid.)

    That is, Carnap saw Heidegger as a reactionary, right-wing philosopher who still clung to the old world order, and saw his own participation in the Vienna Circle as heralding a new way of life. Of course, we all know how that turned out. Still, the important point is that logical positivism began as a vibrant movement that had many ties to the political and artistic context of Europe. In that context, it was revolutionary, and had revolutionary ambitions. Thus, after the rise of Nazism and the emigration of its leading exponents to the USA, the movement lost touch with its revolutionary roots (the Cold War context was also important: once they arrived in the USA, they were kept under surveillance by the FBI---cf. George Reisch's work). That is not to say that they lost all political engagement. Carnap, for instance, continued to sponsor leftist causes, being apparently cited several times in the socialist newspaper The Daily Worker and being very explicit in his "Autobiography" for the Library of Living Philosophers volume on him that even by 1963 he still considered himself a socialist of some form (cf. pp. 82-83, which I think are very enlightening in this regard). And scholars such as André Carus have been at pains to argue that Carnap's broad philosophical outlook, with its emphasis on conceptual engineering and explication, is best viewed as still part of a program for the rational reconstruction of our way of life (cf. his excellent Carnap and Twentieth-Century Thought: Explication as Enlightenment). But it is to say that these political efforts were no longer part of a larger movement, with connections to all spheres of life, as they had been in the European context.

    Thus, once transplanted to the USA, logical positivism lost much of its vitality; it eventually lost its character as a movement and became completely integrated into academic life (and even then its members were still under the scrutiny of Hoover's FBI!).

    2) Once they became a rather academic movement, however, they still retained much of their importance, only this importance was now relative to academic debates, and not to larger political movements. Thus, for instance, Hempel's deductive-nomological model of scientific explanation (cf., for instance, his "Studies in the Logic of Explanation", reprinted in Aspects of Scientific Explanation) is still considered a landmark in the field: most accounts of scientific explanation still begin by reference to this model (even if ultimately to reject it). Similarly, Carnap's Meaning and Necessity was extremely important for the development of formal semantics, especially after the overall framework was refined by Kaplan (who was a student of Carnap's), Lewis, and Montague, and integrated with linguistics by Barbara Partee. Carnap also had a hand in rational decision theory (especially through his studies in the logic of probability, for example in his partnership with Richard Jeffrey) and was an early scientific structuralist who resurrected the Ramsey-sentence approach to scientific theories (cf. the work of Stathis Psillos in this regard).

    This is all to say that, once they became integrated into academic life, their impetus and technical innovations still animated much of the debate. Indeed, I would say that, in this sense, logical positivism is still alive, as their specific research programs (in the logic of explanation, in formal semantics, in rational decision theory) are still alive and well. Of course, their particular proposals have been superseded, but that was only to be expected, and, indeed, encouraged by the logical positivists themselves. Going back to the Preface to the Aufbau, Carnap there says:

    The basic orientation and the line of thought of this book are not property and achievement of the author alone but belong to a certain scientific atmosphere which is neither created nor maintained by any single individual. The thoughts which I have written down here are supported by a group of active or receptive collaborators. This group has in common especially a certain basic scientific orientation. (...) This new attitude not only changes the style of thinking but also the type of problem that is posed. The individual no longer undertakes to erect in one bold stroke an entire system of philosophy. Rather, each works at his special place within the one unified science. (...) If we allot to the individual in philosophical work as in the special sciences only a partial task, then we can look with more confidence into the future: in slow careful construction insight after insight will be won. Each collaborator contributes only what he can endorse and justify before the whole body of his co-workers. Thus stone will be carefully added to stone and a safe building will be erected at which the following generation can continue to work.

    This spirit certainly animates much of current philosophy, especially the parts of it that work on problems first set by the logical positivists. So, again, I think that in this sense logical positivism has not faded away, and is still with us.

    3) Finally, a word about the so-called verifiability criterion. Carnap did not put forward this criterion as an empirical observation. Rather, he put it forward as a proposal about how best to conduct scientific investigations. It is, in his sense, analytic, and therefore it does not apply to itself, since it only concerns synthetic statements. Note that for a statement to be analytic, for Carnap, is not for it to capture some pre-existing meaning. Instead, a statement is analytic if it is part of the setup of a (formal or semi-formal) linguistic framework. Linguistic frameworks, and therefore analytic statements, are in their turn not to be judged by criteria of empirical adequacy (indeed, for Carnap, it is linguistic frameworks that fix the criteria of empirical adequacy), but rather by their usefulness in the advancement of science (this is very clearly stated in "Empiricism, Semantics, and Ontology", but was already clear in the early 30s in his The Logical Syntax of Language, as encoded in his Principle of Tolerance, and also in "Testability and Meaning", which is very relevant for the discussion here).

    So the idea that the whole movement foundered because of an obvious logical inconsistency is just bizarre (and even more bizarre when one considers that its members were all logically proficient).
  • Is Truth an Inconsistent Concept?


    Incidentally, I don't think boundary policing ("Is philosophy a science?") is very helpful. Philosophy is whatever is practiced at philosophy departments. In many cases, this involves a lot of interdisciplinary work (with cognitive scientists, linguists, physicists, medical doctors, etc.), so the boundaries are not very sharp. In other cases, it is more abstract, perhaps more reflective, and so clearly more distant from whatever it is we consider to be science. But good philosophy is good philosophy, and I don't see much value in pushing one conception of what philosophy should be over others.
  • Is Truth an Inconsistent Concept?


    I'm happy that my posts have been helpful, though I'm not too sure if I'm representative of academic philosophy---I'm finishing my PhD in a third-world university, after all, with little or no contact with the big players.

    As for your assessment of Williamson's claim, I personally think Scharp's book is exemplary of the trend he was discussing. Knowledge about a concept can include knowledge that the concept is inconsistent, after all! And Scharp's discussion is thoroughly informed by the relevant technical literature, so much so that Ripley can actually point out that, by Scharp's own lights, perhaps it is not the concept of truth that is inconsistent, but the concept of derivability or validity. Notice that Ripley's position, according to which the problem is not with the concept of truth but with our reasoning practices, would be difficult even to formulate, let alone emerge as a serious contender in this debate, were it not for the formalism of the sequent calculus. So the whole debate surrounding Scharp's book can be considered the proof of the pudding, if you will.

    If that's not enough, I think there are at least two more interesting formal results that should be considered in this debate (and that, given Williamson's reference to Halbach, are plausibly what he had in mind). After Tarski formulated his T-schema (itself a formal achievement!), radical deflationism appeared to be almost inevitable. For suppose that there is something substantive about the predicate "... is true". Then, presumably, it contributes something to the truth-conditions of the sentences in which it appears. But, by Tarski's T-schema, for any sentence S, "S" is true iff S, whence the truth conditions of "'S' is true" are the same as those of S, so the predicate can't contribute anything to the truth conditions of the sentences in which it appears. By modus tollens, there is nothing substantive about truth.

    That would appear to be the end of the story, but Tarski and Gödel proved further that it is impossible to add to a (consistent, sufficiently strong, classical) theory a truth predicate satisfying the full T-schema for its own language while preserving consistency. This seems weird: if truth is not substantive, how can the addition of a truth predicate generate a contradiction? Anyway, perhaps there is something funny about the interaction of the truth-predicate with other formal devices, so here is a proposal: just take as many instances of the T-schema as can consistently be held together, and this will fix at least the extension of the predicate. This proposal was one of the first versions of minimalism: the truth predicate is entirely exhausted by a maximally consistent set of instances of the T-schema. Unfortunately, Vann McGee showed that there are many maximally consistent sets of instances of the T-schema, so this procedure will not uniquely pin down the truth predicate.

    At the same time, there was a growing suspicion that perhaps there is something substantive, after all, to the truth predicate. This suspicion is buttressed by the following formal result: adding to a theory even a truth predicate that obeys very minimal compositional principles (provided the induction schema is extended to formulas containing the new predicate) is sufficient to obtain a stronger theory. That is, if the truth predicate were indeed non-substantial, we would expect that its addition to a theory would not result in new theorems being proved. But new theorems do become provable: in particular, the consistency of the old theory can be proved in the new theory. So it does seem that there is something to truth, after all. Part of the problem seems to be that the truth predicate is not exhausted by the T-schema, but can also function as a device for generalization (i.e. "Everything she said is true"), which can only be eliminated through infinitary resources (an infinite conjunction "If she said P, then P; and if she said Q, then Q; etc."---though do note that it's not entirely clear that even this conjunction exhausts the truth-predicate in its generalization function).
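
    For reference, here is roughly the shape of such a compositional truth theory, often called CT in the literature on axiomatic truth (e.g. in Halbach's work). This is only a sketch: the quantifiers range over codes of arithmetical sentences and closed terms, val is the evaluation function for closed terms, and the exact formulation varies from presentation to presentation.

    \begin{align*}
    & T(\ulcorner s = t \urcorner) \leftrightarrow \mathrm{val}(s) = \mathrm{val}(t) \\
    & T(\ulcorner \neg\varphi \urcorner) \leftrightarrow \neg T(\ulcorner \varphi \urcorner) \\
    & T(\ulcorner \varphi \wedge \psi \urcorner) \leftrightarrow T(\ulcorner \varphi \urcorner) \wedge T(\ulcorner \psi \urcorner) \\
    & T(\ulcorner \forall x\, \varphi(x) \urcorner) \leftrightarrow \forall n\, T(\ulcorner \varphi(\bar{n}) \urcorner)
    \end{align*}

    Together with the induction schema extended to formulas containing T, clauses of this kind suffice to prove Con(PA); without that extension, they are conservative over PA (the Kotlarski-Krajewski-Lachlan theorem), which is why the caveat about induction matters.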

    So at least two prima facie plausible positions (minimalism and a certain naive deflationism) have been refuted by formal considerations. As a result, we have gained a deeper understanding of the truth-predicate and how to handle it. We know now that it is not just an innocuous predicate and that it is tangled up with all sorts of logical considerations. Of course, that is not to say that all questions have been settled, very far from it (there are still minimalists and deflationists around, after all). But it is to say that we have a deeper understanding of what is in question when discussing truth.
  • Is Truth an Inconsistent Concept?


    I take it that there are two ways of interpreting your objection to the use of formal systems. One is a ban on the significance of formal systems tout court---they are merely a game. The other is a ban on formal systems as tools for interpreting ordinary practices. That is, perhaps formal systems have their uses in sharpening our concepts or in helping to predict a given phenomenon, or whatever, but not in understanding ordinary practices. I will consider first the stronger reading, which is I think more easily disposed of, and then I will offer some remarks as to why I think you're mistaken even on the second reading.

    The basic thrust of my argument in favor of formal models is this. The world is complex, and to understand its structure in its entirety is a hopeless endeavor. Fortunately, science has provided us with a very useful paradigm for making progress, namely to understand one bit at a time. Think of Galileo's inclined plane, here. By abstracting away from complexities such as friction, etc., he was able to isolate the effects of gravity on the fall of objects. Similarly, by abstracting away from, say, the pragmatic aspect of communication, we may usefully isolate important aspects of a given concept. Now, one may say that this is only possible in physics because physical phenomena are much simpler. I think this is a misconception produced by the tremendous success of idealization in physics; in truth, physical phenomena are extremely messy, and it is only our idealization practices that introduce some order into this chaos (cf. the work of Nancy Cartwright in this regard).

    Hence, formal models can be extremely useful in understanding a concept, if only because their simplicity provides a good testing field for our hypotheses. As Timothy Williamson says:

    Philosophy can never be reduced to mathematics. But we can often produce mathematical models of fragments of philosophy and, when we can, we should. No doubt the models usually involve wild idealizations. It is still progress if we can agree what consequences an idea has in one very simple case. Many ideas in philosophy do not withstand even that very elementary scrutiny, because the attempt to construct a non-trivial model reveals a hidden structural incoherence in the idea itself. By the same token, an idea that does not collapse in a toy model has at least something going for it. Once we have an unrealistic model, we can start worrying how to construct less unrealistic models. ("Must Do Better")

    The case of the concept of truth is one of the examples adduced by Williamson to illustrate this claim. As he puts it earlier in the essay:

    Another example: Far more is known in 2007 about truth than was known in 1957, as a result of technical work by philosophical and mathematical logicians such as Saul Kripke, Solomon Feferman, Anil Gupta, Vann McGee, Volker Halbach, and many others on how close a predicate in a language can come to satisfying a full disquotational schema for that very language without incurring semantic paradoxes. Their results have significant and complex implications, not yet fully absorbed, for current debates concerning deflationism and minimalism about truth (see Halbach (2001) for a recent example). One clear lesson is that claims about truth need to be formulated with extreme precision, not out of knee-jerk pedantry but because in practice correct general claims about truth often turn out to differ so subtly from provably incorrect claims that arguing in impressionistic terms is a hopelessly unreliable method. Unfortunately, much philosophical discussion of truth is still conducted in a programmatic, vague, and technically uninformed spirit whose products inspire little confidence. (ibid.)

    So I hope it is clear that formal methods have their place in understanding the concept of truth. Let us then turn to the question of whether formal methods have their place in understanding our ordinary practices. Again, I think the answer is yes. Specifically, I think it's highly plausible that ordinary reasoning conforms to the Cut and Contraction rules. Obviously, this does not mean that, when people reason, they consciously employ the formalism of the sequent calculus! But it does mean that this formalism aptly describes their practices.

    In order to understand how this can be so, it is useful to recall here Sellars's distinction between pattern-governed behavior and rule-obeying behavior. Both types of behavior occur because of rules, but only the latter occurs because the agent has a conscious representation of the rule. Here is one of Sellars's examples of pattern-governed behavior that is not rule-obeying behavior: the dance of bees. In order to indicate the position of a given object of interest, bees developed a complicated dance that codifies this direction for the other bees. This gives rise to a norm of correctness for the dance: the dance is correct if it indeed points in the direction of the object. If the bee performs the dance, and the dance leads nowhere, clearly something has gone wrong. In Millikan's helpful terminology, that is because it is the proper function of the dance to indicate the object, so that, if it is not so indicating, it is failing its purpose. Notice that this does not require anything spooky, just natural selection, and notice also that although we can clearly describe the dance in normative terms by employing a normative vocabulary, obviously the bees can do no such thing (the question of whether or not the bees must have a conceptual representation of space in order to perform such a dance is a separate and more difficult question. For a surprisingly good case for answering it in the affirmative, cf. Carruthers, "Invertebrate concepts confront the generality constraint (and win)").

    So my claim is that our ordinary reasoning practices are, in this respect at least, much like the bee dance. They are pattern-governed behavior, that is, a behavior that happens because it has been selectively reinforced (either through natural selection, if such reasoning is innately specified, or through socialization; here, game theory can provide some nice formal models of how this can happen without a conscious effort by the agents), but not because the agents are aware of the rules governing their behavior. These rules are, however, implicit in our practices, and the role of logical vocabulary is (among other things) to make them explicit, since it is only by making them explicit that they become subject to rational evaluation.
  • Is Truth an Inconsistent Concept?


    You said that the sentence "This sentence is not false" was meaningless. I then asked, supposing it meaningless, why it is meaningless. Notice that there is an important respect in which it differs from your other examples of meaningless sentences: whereas your other examples all display violations of thematic relations (and, if you like generative grammar, theta roles), "This sentence is not true" does not display such violation. So there must be some other reason why it is meaningless. You then say that it does not describe a state of affairs. Well, the sentence "the cat is on the mat and the cat is not on the mat" also does not describe a possible state of affairs, yet it is clearly meaningful---in fact, it is false. So what is the difference between that sentence and the liar?
  • Is Truth an Inconsistent Concept?


    A bit of both, I suppose. I'm following Scharp (and Ripley's account of Scharp) here (this is not meant as an endorsement of his position; I'm just trying to explore it). If I got the gist of his position right, he argues that the constitutive principles of truth, as present in our ordinary practice, are inconsistent. Ripley then presents the following suggestion: perhaps it is not our practices regarding truth that are inconsistent, but rather our ordinary practices regarding validity, or perhaps derivability. Now, once we identify a certain conception as inconsistent, either we accept the inconsistency and attempt to live with it (a dialetheist would perhaps adopt this view), or else we may try to reform it (this is Scharp's position). One way of reforming it is by jettisoning the principles that got us into a pickle in the first place. If you accept Ripley's diagnosis, then that means creating a new formal system that gets rid of Cut (I think you can keep Contraction if you get rid of Cut).

    So it is both "If only Epimenides (and everyone since) hadn't used Cut and Contraction!" (the diagnosis part) and "At least we don't have to worry about that in our new and cleaner system!" (the prognosis).

    The position is very reminiscent of Carnap's ideal of explication. According to Carnap, it is common for an ordinary concept to be vague and imprecise. This is not troubling---indeed, it may be an advantage---in our day-to-day dealings, but it may hamper scientific progress. Thus, one of the main tasks for the philosopher is to create precise (typically formalized) analogues of those ordinary concepts which are important for science. These analogues need not capture every feature of their ordinary counterparts---if they did, they would not be as precise as we need! But they should capture enough for the scientific purpose at hand. This activity of engineering precise concepts for the needs of science is called by Carnap explication. My point is that, as far as I can see, both Scharp and Ripley are engaged in explication.
  • Is Truth an Inconsistent Concept?


    I don't have much to add besides what I have already mentioned in my first reply to you. Take arithmetic, for instance. It is not difficult (though it is laborious) to show that it can code any syntactic notion. In particular, given any reasonable alphabet and vocabulary, it is possible to code it using numbers (this should be obvious in this digital age). Less trivially, it is possible to code any syntactic operation using predicates about numbers (this is called arithmetization of syntax). Importantly, the operation of replacing a variable x in a formula P(x) by another term, say n, is also expressible in the language of arithmetic. Using these ingredients, we can build, for any formula P(x), a sentence S such that S is equivalent to P("S"), where "S" is the code for S. So S effectively says of itself that it has P. This is called the fixed-point or diagonal lemma in most treatments (for more details, cf. Peter Smith's Gödel book, esp. chaps. 19-20).

    Since this construction uses only typical arithmetical resources (addition, multiplication, numerals), it follows that any sufficiently strong arithmetical theory will contain self-referential sentences. Hence, self-reference, by itself, is not to blame for the problems resulting from the liar. You now say that the problem is with self-reference coupled with truth. That's not exactly right, since we can construct paradoxes without the use of self-referentiality (see Yablo's Paradox), but let us suppose you are right for the sake of the argument. The question then arises about what it is about truth that, when coupled with self-reference, generates the paradox. That this is a problem specific to truth is clear from the fact that other notions, when coupled with self-reference, do not generate any paradox (e.g. provability). And here we are back to square one.
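
    To make the coding step concrete, here is a toy Python sketch of one way to code strings of symbols as numbers and decode them again. This is only an illustration under made-up assumptions (the alphabet below is an arbitrary stand-in for the vocabulary of arithmetic), not the coding Gödel actually used:

    # Toy illustration: formulas, viewed as lists of symbols drawn from a fixed,
    # finite alphabet, can be coded as natural numbers without loss of information.
    ALPHABET = ["0", "S", "+", "*", "=", "(", ")", "x", "~", "&", "v", "->", "A", "E"]
    BASE = len(ALPHABET) + 1   # digit 0 is never used, so no information is lost

    def encode(symbols):
        """Map a list of symbols to a single natural number (a base-BASE numeral)."""
        n = 0
        for s in reversed(symbols):
            n = n * BASE + (ALPHABET.index(s) + 1)
        return n

    def decode(n):
        """Recover the list of symbols from its code."""
        symbols = []
        while n > 0:
            n, digit = divmod(n, BASE)
            symbols.append(ALPHABET[digit - 1])
        return symbols

    formula = ["~", "(", "S", "0", "=", "0", ")"]   # i.e. "it is not the case that 1 = 0"
    assert decode(encode(formula)) == formula

    Once formulas (and sequences of formulas) are numbers, syntactic operations on them become arithmetical operations on their codes, which is the first step toward the fixed-point construction described above.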
  • Is Truth an Inconsistent Concept?


    Let us suppose you are right and the Liar is meaningless. This raises the question: why is it meaningless? Let us suppose, for definiteness, that the liar is "This sentence is not true". It is composed of meaningful parts meaningfully put together. That is, "This sentence", "is not" and "true" are each meaningful expressions and the sentence is grammatical. So why does it fail to be meaningful?

    One possible answer is this: the problem is with self-reference; self-reference is a meaningless construction. This is tempting, but, as I have pointed out in my first post in this thread, self-reference is built into our best syntactic and arithmetical theories. So unless we are also willing to throw out arithmetic, self-reference must be considered unimpeachable. But if it is not self-reference, then what is the culprit?
  • Is Truth an Inconsistent Concept?


    In his book (p. 81), Scharp mentions that the following triad is inconsistent for a logic L:

    (i) L accepts modus ponens and conditional proof;
    (ii) L accepts the standard structural rules for derivability (in particular, it accepts cut and contraction);
    (iii) The theory consisting of capture (from S infer T("S")) and release (from T("S") infer S) is non-trivial in L.

    Scharp argues that the culprit is (iii). But, as Ripley argues in his review of Scharp's book, it may be that the culprit is (ii). In order to understand what is going on, it helps to recast your derivation in terms of the sequent calculus. For those who don't know, the sequent calculus is a calculus that operates not with sentences but with sequents, i.e. pairs of sets of sentences. The basic idea is this: we interpret a sequent S : R as saying that the disjunction of the sentences in R is derivable from the conjunction of the sentences in S. So it allows us to study the structural properties of the derivability relation. Here are a couple of important rules (I'll use S, R as variables for sets of sentences and A, B, C for sentences):

    Structural Rules

    Weakening: From S : R, infer S, A : R; from S : R, infer S : A, R (i.e. if the disjunction of a set of sentences is derivable from S, it is also derivable from S together with A; and if it is derivable from S, adding a further disjunct preserves derivability);

    Cut: From S : A, R and S', A : R', infer S, S' : R, R' (i.e. if A implies B and B implies C, then A implies C);

    Contraction: From S, A, A: R, infer S, A : R; from S : A, A, R, infer S : A, R (i.e. we can reuse premises during a derivation).

    Identity: A : A can always be inferred.

    Rules for negation:

    ~L: From S : A, R, infer S, ~A : R (if A v B and ~A, then B);

    ~R: From S, A : R, infer S : ~A, R (if A & B implies C, then B implies ~A or C)

    Rules for truth:

    Capture: From S, A : R, infer S, T("A") : R;

    Release: From S : A, R, infer S : T("A"), R.

    Moreover, a contradiction is symbolized in this system by the empty sequent, : .

    Using these rules, we can show that the liar implies a contradiction as follows:

    L : L (Identity)
    T("L") : L (Capture)
    : ~T("L"), L (~R)
    : L, L (Definition of L)
    : L (Contraction)

    L : L (Identity)
    L : T("L") (Release)
    L, ~T("L") : (~L)
    L, L : (Definition of L)
    L : (Contraction)

    And from : L and L : , we may derive, by cut, : .

    This derivation is obviously more complicated, but, on the other hand, it makes clear which structural principles are involved in Scharp's (ii): Cut, Identity, and Contraction (it also has the advantage of making clear that conjunction is not involved, so that the only logical connective involved is negation). Now, Identity is unimpeachable. What about Cut and Contraction? It is well known that Cut and Contraction are strange rules. In particular, they are the only rules whose premisses are more complicated than the conclusion; more precisely, they are the only rules that allow a formula to "disappear" from the conclusion (that the interaction of Cut, Contraction, and quantification produces anomalies is well known; cf., among others, the comments from Jean-Yves Girard on Contraction in his The Blind Spot and an interesting paper by Carbone and Semmes). So we know from logical investigations alone that there are problems with Cut and Contraction.

    But there is more. Cut and Contraction are not just responsible for the Liar. As Ripley notes in his review of Scharp (linked above), they are also responsible for a whole host of paradoxes. So, if we get rid of those, we get rid not only of the liar, but also of those other beasts as well (Ripley elaborates a bit in this paper). So why is Scharp so sure that the inconsistent concept is truth? Maybe the inconsistent concept is validity or derivability, if we think of Cut and Contraction as constitutive of those...
  • Is Truth an Inconsistent Concept?


    Self-referentiality may seem like a problematic concept, but Gödel, Tarski, and Carnap have shown that it is possible to construct a self-referential sentence (or something close enough) inside arithmetic. That is, given a sufficiently strong arithmetic theory T and a coding schema "...", we have the following:

    Fixed-point or diagonal theorem: If P(x) is a formula of the language of T with only "x" as its free variable, then there is a sentence F in the same language such that T proves that F is equivalent to P("F").

    In other words, for any property P, if T is a sufficiently strong arithmetical or syntactic theory, then there are sentences of the language which ascribe P to themselves (or close enough). Hence, unless syntax or arithmetic is an incoherent enterprise, self-referentiality can't be the problem.
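
    In symbols, writing ⌜F⌝ for (the numeral of) the code of F under the chosen coding, the theorem says:

    \[
    T \vdash F \leftrightarrow P(\ulcorner F \urcorner)
    \]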
  • Godel's Incompleteness Theorems vs Justified True Belief


    If a theory is such that: (i) it has a reasonable proof system (i.e. one can check by an algorithm whether or not a sequence of formulas is a proof) and (ii) is recursively axiomatized (i.e. there is an algorithm which tells whether or not something is an axiom of the system), then that theory will have a computably enumerable set of theorems.

    And yes, this applies also to classical mathematics such as classical analysis, to the extent that it can be formalized in a reasonable proof system (be it by formalizing in second-order arithmetic or by formalizing it in first-order ZFC or something similar).
  • Godel's Incompleteness Theorems vs Justified True Belief


    If by G you mean the Gödel sentence, then, yes, the algorithm will miss it. But that's because the algorithm lists all the theorems of PA, and the Gödel sentence is not a theorem of PA!
  • Godel's Incompleteness Theorems vs Justified True Belief


    Yes, there is a proof of the consistency of PA, though whether or not it is finitistically acceptable is debatable. Gentzen proved that the consistency of PA can be proved in PRA + Epsilon_0 induction, i.e. primitive recursive arithmetic augmented by (quantifier-free) transfinite induction up to the ordinal epsilon_0 (it should be noted that PRA + Epsilon_0 induction and PA are incomparable in strength, so the result is not a triviality). See this very nice article by Timothy Chow for more on the topic.
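
    In symbols, writing TI(ε_0) for quantifier-free transfinite induction up to ε_0, Gentzen's result is:

    \[
    \mathrm{PRA} + \mathrm{TI}(\varepsilon_0) \vdash \mathrm{Con}(\mathrm{PA})
    \]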
  • Godel's Incompleteness Theorems vs Justified True Belief


    I'll be very explicit, then: there is, in fact, an algorithm that lists all and only the theorems of PA. This algorithm therefore provides an exhaustive and infallible enumeration of theoremhood in PA. It is exhaustive, i.e. every theorem appears in the list. And it is infallible, i.e. every formula in the list is in fact a theorem. Since for some reason you apparently missed it from my last post, I will reproduce it here again. Choose your favorite Gödel numbering for formulas and sequences. Given this Gödel numbering, there will be an algorithm, call it DecodeS, which, given a number m, first decides whether or not m is the Gödel number of a sequence of formulas and, if it is, returns the sequence of formulas for which m is a Gödel number. We also have an algorithm, Check Proof, which, given a sequence of formulas, decides whether or not the sequence of formulas is a proof in PA. Given these, the algorithm is as follows:

    Step 1: Input n (starting with 0). Use DecodeS to check if n is the Gödel number of a sequence of formulas. If YES, go to the next step; otherwise, start again with n+1.

    Step 2: Use DecodeS to print the sequence of formulas coded by n. Apply Check Proof to this sequence. If the result is YES, go to the next step. Otherwise, go back to step 1 with input n+1.

    Step 3: Erase all the formulas in the sequence except the last. Go back to step 1 with n+1.

    Call the above three-step algorithm Theorem List. I claim that Theorem List is an exhaustive and infallible enumeration of theoremhood in PA. It is obviously infallible, since Check Proof is infallible. It is also exhaustive, since the algorithm will basically go through every sequence of formulas of PA, so, if P is a theorem of PA, it is bound to find a proof for it eventually.
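
    For the programmatically inclined, here is a minimal Python sketch of the same procedure. The names decode_sequence and check_proof are hypothetical stand-ins for DecodeS and Check Proof; their real implementations would depend on the chosen Gödel numbering and proof system.

    from itertools import count

    # Hypothetical stand-ins: a real implementation would fix a Goedel numbering
    # and a proof system for PA and then implement these two decidable checks.
    def decode_sequence(n):
        """Return the sequence of formulas coded by n, or None if n codes no sequence."""
        raise NotImplementedError

    def check_proof(formulas):
        """Return True iff the given sequence of formulas is a correct proof in PA."""
        raise NotImplementedError

    def theorem_list():
        """Generate all and only the theorems of PA (the generator never terminates)."""
        for n in count(0):                    # Step 1: run through every natural number
            formulas = decode_sequence(n)
            if formulas is None:
                continue                      # n does not code a sequence of formulas
            if check_proof(formulas):         # Step 2: is the decoded sequence a proof?
                yield formulas[-1]            # Step 3: keep only the last formula, the theorem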

    In other words, the set of theorems of PA is what we call computably enumerable. This is a well-known fact (and the proof is clearly constructive---it is not like the concept of r. e. sets is somehow constructively suspect), so I'm surprised that you are still insisting that it is somehow impossible to generate an exhaustive and infallible list when the above demonstrates that it is not only possible but actual (and it also exhibits the algorithm in question!). Now, you claimed that this alleged impossibility somehow followed from the fact that PA |- Prov('G') --> ~G, but I don't see the relevance of this for listing all and only the theorems.
  • Godel's Incompleteness Theorems vs Justified True Belief


    I think you are focusing too much on the fact that theoremhood is not strongly representable in PA, with the consequence that you are ignoring the fact that it is weakly representable in PA. Indeed, while theoremhood is not computable, it is computably enumerable. In other words, there is an algorithm which lists all and only theorems of PA. It exploits the fact that, given your favorite proof system, whether or not a sequence of formulas is a proof of a sentence of PA is decidable. Call the algorithm which decides that "Check Proof". Here's an algorithm which lists all the theorems of PA, relative, of course, to some Gödel coding:

    Step 1: Check whether n is the Gödel number of a sequence of formulas of PA (starting with 0). If YES, go to the next step. Otherwise, go to the next number (i.e. n+1).

    Step 2: Decode the sequence of formulas and use Check Proof to see if it is a proof. If YES, go to the next step. Otherwise, go back to Step 1 using as input n+1.

    Step 3: Erase all the formulas in the sequence except the last. Go back to Step 1, using as input n+1.

    This (horrible) algorithm lists all the theorems, i.e. if S is a theorem of PA, it will eventually appear in this list. Obviously, this cannot be used to decide whether or not a given formula is a theorem, since, if it is not a theorem, then we will never know it isn't, since the list is endless. But, again, it can be used to list all the theorems. My point is that there is nothing comparable for the truths, i.e. there is no algorithm that lists all the truths. In fact, by Tarski's theorem, there can be no such algorithm. So, again, the two lists (the list of all the theorems, the list of all the truths) are not the same, whence the concepts are different.

    The upshot of all this is that, in my opinion, constructivists should resist the temptation of reducing truth to provability. Instead, they should follow Dummett and Heyting (in some of their most sober moments, anyway) and declare truth to be a meaningless notion. If truth were reducible to provability, then it would be a constructively respectable notion. But it isn't so reducible (because of the above considerations). So the constructivist should reject it. (Unsurprisingly, most constructivists who tried to explicate truth in terms of provability invariably ended up in a conceptual mess---cf. Raatikainen's article "Conceptions of truth in intuitionism" for an analysis that corroborates this point.)
  • Godel's Incompleteness Theorems vs Justified True Belief


    A couple of observations:

    (1) First, note that the theorem, in the form I stated it, is a bit more general, since it does not rely on any specific unprovable sentence. Thus, even if we know by construction (or by other means, such as Kirby-Paris) that a given sentence is unprovable but true, and then add it to the theory, we can't be sure that we have exhausted all the truths---there could be many more true but unprovable sentences of which we are unaware, so merely adding one to the theory will not make it complete. Of course, we could simply take all the true sentences and add them to the theory---but then we wouldn't know what the resulting theory is! (In other words, there would be no algorithm to tell me whether or not a given sentence belongs to the theory.)

    (2) This failure to identify the truths to be added could be circumvented if we could isolate an easily identifiable set of sentences and prove that adding just this set of sentences is enough to add all the truths. For instance, a natural candidate for such a set is the set of sentences which express consistency statements. That is, we could begin with Con(PA) (a statement saying that PA is consistent), add it to PA to obtain the theory PA + Con(PA), then consider the consistency statement for this theory and add it to obtain PA + Con(PA) + Con(PA + Con(PA)), and so on and so forth. By Gödel's second incompleteness theorem, we know that each resulting theory is stronger than its predecessor, so there is at least some hope that this would bear some fruit.

    In fact, we can obtain a more interesting result if, instead of working with consistency statements, we work with reflection principles of the sort exemplified by Löb's theorem. That is, define Ref(T) to be the schema "For every sentence S of T, ProvT('S') -> S", where "ProvT" is the provability predicate for T. We can then consider progressions of the form PA + Ref(PA), PA + Ref(PA) + Ref(PA + Ref(PA)), etc. Now, an immediate issue appears here: for each natural number n, we can define Tn to be Tn-1 + Ref(Tn-1), and then we define T(omega) to be the union of all such Tn's. But what happens then? We can obviously continue the procedure, i.e. consider T(omega) + Ref(T(omega)). Here, however, we will need a way to codify ordinals greater than omega inside PA. This is done by what are called ordinal notations.
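
    Schematically, and glossing over exactly how the theories are to be indexed (which is precisely where ordinal notations come in), the progression looks like this:

    \begin{align*}
    T_0 &= \mathrm{PA} \\
    T_{\alpha+1} &= T_\alpha + \mathrm{Ref}(T_\alpha) \\
    T_\lambda &= \bigcup_{\alpha < \lambda} T_\alpha \qquad \text{for limit ordinals } \lambda
    \end{align*}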

    So how far can we go? Will the process eventually stop somewhere? It is a remarkable theorem by Feferman (cf. "Transfinite Recursive Progressions of Axiomatic Theories") that, in the case of PA, it does stop somewhere, and he gave a precise stopping point for this (if you must know, the stopping point is at ). That is, Feferman found that there is a way to code ordinals such that, according to this code, there is a certain iteration of the addition of reflection principles that proves every true arithmetical sentence! This is often referred to as Feferman's completeness theorem, and it is remarkable indeed.

    Unfortunately, there is a catch (there's a Brazilian song that says that a ripe fruit hanging near a well-trodden path must be either rotten or its tree full of wasps...). There is more than one way to code ordinals, and Feferman's proof depends heavily on the choice of code. Indeed, he also showed (together with Clifford Spector) that there are infinitely many ways of coding ordinals for which this result does not hold. Moreover, discovering the right coding is as difficult as discovering all the true sentences of PA (this is basically because the proof employs a rather bizarre code which ensures that at every iteration of the construction we "sneak in" a true formula). Hence, there is no real algorithm for extending PA in such a way as to obtain a complete extension using reflection principles. The obstacle is the same as before: at some point we no longer know what theory we are obtaining by this procedure. (For those curious about this, the book by Torkel Franzén, Inexhaustibility: A Non-Exhaustive Treatment, is still the best, though be warned that it is technically demanding.)

    (3) So what is going on? Again, the point is that provability (in the sense of obtaining a proof that we can recognize as such) is different from truth. Note that, in spite of my talk of models in a previous post, this need not be cashed out in strictly platonist terms. One can think about this in terms of the principles we are committed to when reasoning about arithmetic (where the vocabulary of principles and commitment can be taken in a broadly Sellarsian fashion). These principles can be taken to be encoded in the axioms for second-order arithmetic, and here the problem becomes even more evident: second-order logic does not have a complete proof procedure. So, again, the semantic content of our principles invariably outstrips our capacity to prove things from them.

    (4) Lastly, about your question: it is ambiguous. "To distinguish" can be taken to mean "how do we establish that something is true?", in which case you are right that the answer is "by proving it". But it can also be taken to mean "how do we characterize a sentence as true?", in which case the answer is not "by proving it", but is rather given by Tarski's definition of satisfaction (or, if you will, you can say that a sentence of arithmetic is true if it follows semantically from the second-order axioms of PA).
  • Godel's Incompleteness Theorems vs Justified True Belief


    Don't mention it. I'm glad this has been useful to someone...
  • Godel's Incompleteness Theorems vs Justified True Belief


    Well, proof is relative to a system of axioms. That is, we usually define proof, relative to a theory T, as follows: a sequence of statements A1, ..., An is a proof of An in T iff, for every i < n+1, either Ai is an axiom of logic, or an axiom of T, or it follows from the Aj's (j < i) by the rules of logic. So it doesn't make much sense to ask of a sentence whether it is provable or unprovable tout court, whence the relativization to a given axiomatic system (unless you're asking whether a formula is a truth of logic, in which case you don't need the relativization to a theory, though you will obviously relativize to the background logic).

    As for your example, in the case of some particular theories (such as PA or real analysis), we are interested in the theory primarily as a description of a particular object. For example, when studying PA, we are generally not interested in just any discretely ordered ring; rather, we are interested in the natural numbers. So when we are asking about the truths of PA, we are asking about truth in the natural numbers, not truth in an arbitrary model. Similarly, when we are asking about the truth of a sentence about the reals, we are generally interested in, well, the reals, not the hyperreals. That's not to say that these other objects are not useful or interesting in their own right, just that in the case of those theories, we have an intended model in mind.

    Here's a comparison that I find useful. Consider the theory of groups. It consists basically of three axioms (I'll use the additive notation because it is easier to type): x+(y+z)=(x+y)+z, x+e=x, and x+(-x)=e. Now, since there are commutative and non-commutative groups, this theory does not decide the sentence x+y=y+x, and is therefore incomplete. But this is not surprising in the least, because when studying groups, we are not studying a privileged group, so we don't expect the axioms to completely characterize this object. Indeed, the axioms were developed to be as general as possible while still characterizing an interesting class of objects (namely, groups), so it is a virtue of the theory that it is incomplete.

    On the other hand, in the case of PA, the theory was crafted to completely characterize a particular object, namely the natural numbers (object here taken in a broad sense, to include structures). Moreover, in the case of second-order PA, Dedekind proved that it does completely characterize its intended object, in a sense: the theory is categorical, which means that every model of the theory is isomorphic to the standard model. Hence it completely captures the structure of the standard model. So it is surprising that there are truths about this structure that the theory is unable to prove.
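    For reference, the axiom doing the work in Dedekind's categoricity proof is full second-order induction; this is just its standard formulation, nothing specific to my argument:

    ```latex
    % Full second-order induction (X ranges over all subsets of the domain):
    \forall X \,\bigl[\, X(0) \,\land\, \forall n\,\bigl(X(n) \rightarrow X(n+1)\bigr) \;\rightarrow\; \forall n\, X(n) \,\bigr]
    % Categoricity: any two full models of second-order PA are isomorphic, so the
    % axioms pin down the natural-number structure up to isomorphism.
    ```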
  • Godel's Incompleteness Theorems vs Justified True Belief


    There would only be a contradiction if Gödel claimed that his own theorem was unprovable. Fortunately, he was not an idiot, and therefore did not claim that. What he did claim was that some truths are unprovable (his own theorem not being among those). And the reason why some truths are unprovable was sketched in my first post in this thread: the theorems can all be listed by an algorithm, whereas no algorithm can list all the truths. Hence, there are truths that will not appear in the list of all theorems, and hence not every truth is a theorem. Again, note that, in this version, no mention is made of a specific unprovable statement, so your reasoning about the "meta-cognitive" level does not apply.
  • Godel's Incompleteness Theorems vs Justified True Belief


    There is no contradiction. One can hold that proof is necessary to establish truth, yet hold that it is not necessary for truth (cf. my point about unknowable truths). And, in the form I have presented, his theorem does not require him to establish the truth of any unknowable proposition.
  • Godel's Incompleteness Theorems vs Justified True Belief


    You seem to be confusing knowledge with truth. Obviously, to establish a proposition as true, I need to, well, establish it as such. And, in mathematics, to establish a proposition as true---i.e. to know it---is precisely to prove it. But there can be true propositions that are unknowable, and hence that we cannot establish, and hence that we cannot prove. That there are such propositions is established by Gödel's theorems. Again, note that, in the form I have presented, the theorem does not rely on pointing to one such proposition and saying "This proposition is true, but we can't know it as true". Rather, it proceeds from general properties of truth and provability to show that these concepts must come apart.
  • Godel's Incompleteness Theorems vs Justified True Belief


    (1) Gödel (well, Gödel, Church, Tarski, Rosser, etc.) showed a bit more than what you are implying. He showed that, for any consistent system containing enough arithmetic (i.e. extending Robinson's arithmetic), the set of theorems of that system is not identical to the set of truths expressible in its language. So, if we just move one level up and claim that truth is provability in a higher system, this won't do, because the theorem will reapply at the higher level, separating these sets again.

    (2) As for why truth is not trivial, the surprising result here is that any minimally adequate theory of truth (i.e. one that respects the compositional nature of truth) is non-conservative over PA. That is, given a truth predicate, one is able to prove more things about the original system than one could before the introduction of such a predicate. In particular, one is able to prove the consistency of the original system. So adding a truth predicate is not a triviality. If you think about it, this is not that surprising, since the truth predicate can be used to express generalizations that wouldn't be expressible without it except by the use of infinitary resources, such as infinite disjunctions or infinite conjunctions, e.g. "Jones always tells the truth".
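    For concreteness, here is a rough rendering (mine) of the compositional clauses for a truth predicate T over codes of arithmetic sentences, plus the finite rendering of the Jones generalization; Sent and Says_Jones are assumed predicates for sentencehood and for what Jones asserts, introduced only for illustration:

    ```latex
    \begin{align*}
    & T(\ulcorner \lnot A \urcorner) \;\leftrightarrow\; \lnot T(\ulcorner A \urcorner)\\
    & T(\ulcorner A \land B \urcorner) \;\leftrightarrow\; T(\ulcorner A \urcorner) \land T(\ulcorner B \urcorner)\\
    & T(\ulcorner \forall x\, A(x) \urcorner) \;\leftrightarrow\; \forall n\; T(\ulcorner A(\overline{n}) \urcorner)\\
    & \forall x \,\bigl(\mathrm{Sent}(x) \land \mathrm{Says}_{\mathrm{Jones}}(x) \rightarrow T(x)\bigr)
      && \text{``Jones always tells the truth''}
    \end{align*}
    ```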

    (3) With regards to Fair's article, I've just read it and didn't find it very convincing. He basically argues that truth must be relative to a theory (instead of a model), introducing a family of operators "In T" (i.e. "In PA", "In the theory of algebraically closed fields of characteristic 0", "In ZFC", etc.) and claiming that truth ascriptions are always prefixed by such an operator, i.e. the hidden logical form of "2+2=4" is actually "In PA, 2+2=4". I personally think this is an instance of a bad (and unfortunately widespread) philosophical habit of postulating hidden linguistic forms without appealing to linguistic evidence (and thus being ad hoc), but let us leave that to the side. In order to deal with incompleteness, Fair is then led to work with an analogy from fiction: just as the Sherlock Holmes stories don't settle how many hairs he has on his body, so PA doesn't settle G or other undecidable sentences.

    In order to support this, he basically assimilates mathematics to fiction, saying that mathematical objects are mind-dependent, etc. In particular, he argues that mathematical objects "lack the 'open texture' we would expect of mind-independent physical objects" (p. 368), by which he means (presumably) that the properties of mathematical objects are fully determined by our beliefs about them (at least, that's what I gathered from the preceding discussion) and that we could not "fail to notice" some of their important properties. I think this is mistaken. Two historical examples: continuity and computability. In both cases, mathematicians had been working with objects that had those properties, but they didn't fully realize the nature of the properties themselves. It was only centuries after working informally with these objects that we began to understand their nature---and even today there are still disputes about them (e.g. is the continuum formed by points, such as Dedekind cuts, or is it formed by regions, as the intuitionists claim?). Moreover, greater clarity about these properties revealed surprising consequences (the existence of continuous functions that are nowhere differentiable, or the fact that any local and atomistic process can be simulated by a Turing Machine).

    So, in the end, I think there is a great disanalogy between mathematical truth and fiction. In the latter case, there is almost no friction, and our conceptions can be given free rein. With the former, however, our conceptions are tightly constrained. That is why truth in mathematics is not as problematic as truth in fiction, I gather.
  • Godel's Incompleteness Theorems vs Justified True Belief


    I don't think Löb's theorem supports the constructivist position. That's because truth is generally taken, prima facie, to obey the capture and release principles: if T('S'), then S (release), and, if S, then T('S') (capture). But what Löb's theorem shows is that proof does not obey the release principle. So there is at least something suspicious going on here.

    Moreover, one can show that the addition of a minimally adequate truth-predicate to PA (one that respects the compositional nature of truth) is not conservative over PA. Call this theory CT (for compositional truth). Then CT proves the global reflection principle for PA, i.e. that every theorem of PA is true: ∀x (Prov_PA(x) → T(x)), where "T" is the truth predicate. As a corollary, CT proves the consistency of PA. So truth, unlike provability, is not conservative over PA.

    Finally, you have yet to reply to my argument regarding the computability properties of the two predicates, namely that one does have an algorithm for listing all the theorems of PA, whereas one does not have an algorithm for listing all the truths of PA. So the two cannot be identical.
  • Godel's Incompleteness Theorems vs Justified True Belief


    In my (philosophy) department (here in Brazil), undergrads are generally introduced to the basics of logic or formal reasoning, say through Priest's A Very Short Introduction or Steinhart's More Precisely. That is the only required logic course for philosophy undergrads. There are a couple more optional courses they can take, but these vary wildly in content. At the graduate level, students are not required to take any logic course, though there is generally at least one such course every term, again on a wide variety of subjects. I myself (a grad student) have (unofficially) taught a course loosely based on the first part of Boolos, Jeffrey & Burgess's Computability and Logic, and last term we also had a course based on the first four chapters of Shoenfield's Mathematical Logic.

    As for the math department (again, at my university, here in Brazil), students are not required to take logic courses, though there are optional classes on logic and set theory. The logic class generally goes through the completeness, compactness, and Löwenheim-Skolem theorems, sometimes also getting into incompleteness. The set theory classes generally go through the usual cardinal and ordinal stuff, sometimes going into a bit more detail on combinatorics, other times scratching the surface of the constructible universe, and one or two crazy professors go as far as forcing.
  • Godel's Incompleteness Theorems vs Justified True Belief


    Refer to my earlier reply for a clearer sketch of why truth is not provability, one which has nothing to do with G or whatever. And, again, you're evading my point that, if you want to reduce A to B, the mere fact that B is sufficient for A is not enough: you must also show that B is necessary for A. But this is just what Gödel's theorems deny.

    Your juridical reasoning may be convincing at the intuitive level, but, as Williamson has said, in these matters it is not enough to argue impressionistically. As counter-intuitive as it might be, it is demonstrable that truth (on most reasonable axiomatizations of the notion, anyway) is not redundant and is not the same as provability.
  • Godel's Incompleteness Theorems vs Justified True Belief


    Let me put it this way. Suppose truth were equal to provability (or even just extensionally equivalent to it). Then any algorithm for enumerating all the theorems would ipso facto enumerate all the truths. But, in fact, we have an algorithm for enumerating all the theorems (they are computably enumerable) and none for enumerating all the truths (truth is not even arithmetically definable, let alone definable by a Sigma_1 formula). Therefore, truth is not equal to provability.

    So no, I'm not merely claiming that theoremhood is not fully capturable in PA. I'm claiming that whereas theoremhood is weakly capturable, truth is not capturable at all, and therefore these are different concepts.

    As for the supposed redundancy of truth, again, I'm not referring just to the fact that theoremhood is only weakly capturable in PA. Rather, I'm referring to the fact that (i) if truth were redundant, the addition of a truth predicate to PA would result in a conservative extension of PA, and (ii) the addition of a truth predicate to PA does not result in a conservative extension of PA. Hence, truth is not redundant.

    As for your ii.c), you're saying that PA proves a formula of the type ~P <--> P. But then we have: ~P <--> P is equivalent to (P -> ~P) & (~P -> P), which is equivalent to (~P v ~P) & (~~P v P), which is equivalent to ~P & P. Hence, if PA proves ~Prov('G') <--> Prov('G'), PA proves a contradiction. Here's another way of seeing the matter. PA is built on classical logic, whence it proves every instance of P v ~P. So suppose PA proves ~Prov('G') <--> Prov('G'). Suppose first ~Prov('G'). Then, by this equivalence, Prov('G'), whence by conjunction introduction ~Prov('G') & Prov('G'). On the other hand, suppose Prov('G'). Then, again by the equivalence, ~Prov('G'), so ~Prov('G') & Prov('G'). Thus, ~Prov('G') v Prov('G') -> (~Prov('G') & Prov('G')). But the antecedent is an instance of the excluded middle, so we can detach the consequent (again, remember that PA is classical) and obtain ~Prov('G') & Prov('G'), which is a contradiction.
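    If you want a purely mechanical double-check of the propositional point (my own trivial example, nothing deep), a two-line truth table already shows that ~P <--> P has no satisfying assignment, i.e. it behaves exactly like the contradiction ~P & P reached above:

    ```python
    # Truth-table check: the biconditional (~P <-> P) and the conjunction (~P & P)
    # are both False on every row, so a theory proving either proves a contradiction.
    for P in (True, False):
        print(P, (not P) == P, ((not P) and P))
    ```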

    As I said, the problem is that you're passing from "PA |- S iff PA |- Prov('S')" to "PA |- S <--> Prov('S')". The former is ok, but the latter is not: what we have is the rule form, namely that if PA |- S, then PA |- Prov('S') (that's the first derivability condition), not the object-level biconditional; and Prov('S') --> S is a reflection principle (soundness), which is not in general provable in PA. In fact, by Löb's theorem, we have that, for any S, PA |- Prov('S') --> S implies PA |- S (cf. section 4.1 of this paper for a sketch of the proof).
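    For reference, here are the standard Hilbert-Bernays-Löb derivability conditions and Löb's theorem as usually stated (just the textbook formulations, with Prov abbreviating PA's provability predicate):

    ```latex
    \begin{align*}
    \text{(D1)}\quad & \text{if } \mathrm{PA} \vdash A, \text{ then } \mathrm{PA} \vdash \mathrm{Prov}(\ulcorner A \urcorner)\\
    \text{(D2)}\quad & \mathrm{PA} \vdash \mathrm{Prov}(\ulcorner A \to B \urcorner) \to \bigl(\mathrm{Prov}(\ulcorner A \urcorner) \to \mathrm{Prov}(\ulcorner B \urcorner)\bigr)\\
    \text{(D3)}\quad & \mathrm{PA} \vdash \mathrm{Prov}(\ulcorner A \urcorner) \to \mathrm{Prov}(\ulcorner \mathrm{Prov}(\ulcorner A \urcorner) \urcorner)\\
    \text{(L\"ob)}\quad & \text{if } \mathrm{PA} \vdash \mathrm{Prov}(\ulcorner A \urcorner) \to A, \text{ then } \mathrm{PA} \vdash A
    \end{align*}
    ```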

    (Interestingly, Boolos comments on this theorem on pp. 54ff of The Logic of Provability as follows: "Löb's theorem is utterly astonishing (...). In the first place, it is often hard to understand how vast the mathematical gap is between truth and provability. And to one who lacks that understanding and does not distinguish between truth and provability, [Prov('S') --> S], which the hypothesis of Löb's theorem asserts to be provable, might appear to be trivially true in all cases, whether S is true or false, provable or unprovable.")
  • Godel's Incompleteness Theorems vs Justified True Belief


    Let us suppose that everything you say is true. This still does nothing to address two facts: (1) the set of true formulas is not arithmetically definable, but the set of provable formulas is, whence the two must be distinct; (2) truth is not conservative over PA, whence it can't be redundant. I sketched that argument in my first post here precisely so we did not get entangled in fruitless discussions about how we can know that G is true or about the Kirby-Paris theorem.

    Obviously, that particular argument assumes the soundness of PA, which you have disputed (this is a minority position, but one that I respect, if only because in the case of Nelson it generated some interesting mathematics). But this is not necessary for the argument to go through: one can start with Q and argue that any recursively axiomatized theory that extends Q will fall into the same problem, namely truth will be arithmetically undefinable and theoremhood will be arithmetically definable. Since no one that I know of doubts the soundness of Q (not even Nelson), the argument should go through.

    By the way, if your ii.c) is correct, then PA is inconsistent. In any case, that is not a valid substitution instance of ii.a): ii.a) says merely that (assuming soundness) PA |- S iff PA |- Prov('S'), not that PA |- S <-> Prov('S') (the latter involves a reflection principle and is in general not provable in PA).
  • Godel's Incompleteness Theorems vs Justified True Belief


    Unfortunately, from the fact that provability is sufficient for truth, it does not follow that it is necessary for truth (in general, being a sufficient condition is not, well, sufficient for being a necessary condition). But truth would be redundant only if provability were sufficient and necessary for truth. Gödel's theorem, in the particular form I mentioned in my previous post, shows, however, that provability is not necessary for truth. So you must revise your position (note that the form I mentioned in my previous post does not exhibit any particular sentence, and therefore avoids your roller-coaster dilemma).

    Incidentally, the idea that truth is simply redundant, at least in its naive form, is untenable. We already know that simply adding a predicate T to PA governed by the naive schema T('S') iff S, for every sentence S, results in inconsistency, which is already surprising to a redundancy theorist, since it shows that adding a truth predicate with this simple property already goes much beyond the original theory (indeed, the simple property is so powerful that it results in inconsistency! In a sense, it is too powerful). But even more surprisingly, suppressing that schema and adding weaker axioms invariably results in non-conservative extensions of the base theories, i.e. theories that are more powerful than the original (both Friedman-Sheard and Kripke-Feferman have this property, for instance). So truth cannot be redundant.
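    Here is the standard sketch of why the naive schema blows up, as I understand it (nothing original; L is the usual "liar" sentence obtained by diagonalization):

    ```latex
    \begin{align*}
    & \mathrm{PA} \vdash L \leftrightarrow \lnot T(\ulcorner L \urcorner) && \text{(diagonal lemma)}\\
    & \mathrm{PA} + \text{naive schema} \vdash T(\ulcorner L \urcorner) \leftrightarrow L && \text{(schema instance)}\\
    & \mathrm{PA} + \text{naive schema} \vdash T(\ulcorner L \urcorner) \leftrightarrow \lnot T(\ulcorner L \urcorner) && \text{(chaining the two: a contradiction)}
    \end{align*}
    ```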

    The lesson here is that truth is an exceedingly difficult subject. As Timothy Williamson puts it in "Must Do Better":

    One clear lesson is that claims about truth need to be formulated with extreme precision, not out of knee-jerk pedantry but because in practice correct general claims about truth often turn out to differ so subtly from provably incorrect claims that arguing in impressionistic terms is a hopelessly unreliable method. Unfortunately, much philosophical discussion of truth is still conducted in a programmatic, vague, and technically uninformed spirit whose products inspire little confidence.

    Hence why we need formalization, as Williamson himself defends later in the essay:

    Philosophy can never be reduced to mathematics. But we can often produce mathematical models of fragments of philosophy and, when we can, we should. No doubt the models usually involve wild idealizations. It is still progress if we can agree what consequences an idea has in one very simple case. Many ideas in philosophy do not withstand even that very elementary scrutiny, because the attempt to construct a non-trivial model reveals a hidden structural incoherence in the idea itself. By the same token, an idea that does not collapse in a toy model has at least something going for it. Once we have an unrealistic model, we can start worrying how to construct less unrealistic models.

    I think the ideas that truth is provability or that truth is redundant, at least in the naive forms presented here, are precisely of the type that "differ subtly from provably incorrect claims" and "do not withstand even that very elementary scrutiny, because the attempt to construct a non-trivial model reveals a hidden structural incoherence in the idea itself".
  • Godel's Incompleteness Theorems vs Justified True Belief
    There is a better proof of Gödel's theorems that considerably clarifies the situation. It relies on two other theorems: (1) Gödel's proof that the set of theorems of a recursively axiomatized theory is computably enumerable; (2) Tarski's theorem that truth is not arithmetically definable. Let us tackle each of these in turn.

    (1) We say that a theory is recursively axiomatized if there is an algorithm which tells us whether or not a given formula is an axiom of the theory, i.e. you input a formula and the algorithm prints YES or NO according to whether or not the formula is an axiom. By a long, tedious process of coding, Gödel was able to show that, in the case of such theories, whether or not a sequence of formulas is a proof is also checkable by an algorithm. This implies that there is another algorithm which prints all the theorems of the theory in a sequence, i.e. there is an algorithm that enumerates the theorems of the theory (the algorithm in question is horrid, by the way). In other words, the theorems of such a theory are computably enumerable. Now, there is another theorem which shows, roughly, that computably enumerable predicates are definable in PA by a formula with just one (unbounded) existential quantifier. So, in particular, the predicate "x is a theorem of T" is definable in PA for any recursively axiomatized theory T.
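    The real enumeration algorithm is horrid, as I said, but the idea is just dovetailing: generate candidate proofs in some order, keep the ones that check out, and output their last lines. Here is a toy Python illustration of that idea (my own stand-in axioms and modus-ponens-only rule, not a real proof system):

    ```python
    # Toy illustration of why recursively axiomatized theories have computably
    # enumerable theorems: run through candidate proofs by length and print last lines.
    from itertools import product

    AXIOMS = {"p", ("->", "p", "q"), ("->", "q", "r")}                 # toy decidable axiom set
    FORMULAS = ["p", "q", "r", ("->", "p", "q"), ("->", "q", "r")]     # toy stock of formulas

    def is_proof(seq):
        for i, step in enumerate(seq):
            if step in AXIOMS:
                continue
            earlier = seq[:i]
            if any(("->", a, step) in earlier for a in earlier):       # modus ponens
                continue
            return False
        return True

    def enumerate_theorems(max_length=5):
        seen = set()
        for n in range(1, max_length + 1):                 # dovetail over proof lengths...
            for seq in product(FORMULAS, repeat=n):        # ...and over sequences of that length
                if is_proof(list(seq)) and seq[-1] not in seen:
                    seen.add(seq[-1])
                    print("theorem:", seq[-1])

    enumerate_theorems()  # prints p and both implication axioms, then q (length 3), then r (length 5)
    ```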

    (2) But Tarski also showed that being a true formula of PA is not definable in PA by any formula, let alone one with just one existential quantifier. The idea is very roughly as follows: if truth were definable in PA, then, since PA allows for self-referential sentences (via the diagonal lemma), a version of the liar paradox could be replicated inside PA. But then PA would be inconsistent. So truth is not so definable.

    Putting these together, we obtain the following: (1) Being provable is definable in PA, but (2) being true is not definable in PA. On the other hand, we have (3) if something is provable, then it is true. Hence, since, by (1) and (2), being provable and being true are different, and, by (3), everything that is provable is true, we have that there must be something that is true without being provable. In other words, PA is incomplete.

    This version is better than the one originally presented by Gödel because it deals directly with intrinsic properties of the predicates of being a theorem and being a true formula, i.e. properties that are not relative to any system (this stems from the rather astonishing fact that the notion of algorithm is system-independent). So it avoids retorts of the sort "but we just prove [whatever] in a higher system".
  • The rational actor


    (1) I'll give two examples that I think can be illuminated by considering economic models. Consider what is generally taken to be Smith's doctrine of the invisible hand (I'm sidestepping here issues of attribution). According to this idea, by pursuing their own interests, consumers and producers interact in a way that settles prices and the distribution of products so as to maximize the benefits for everyone. Anyway, there are obviously a lot of qualifications to be made, but the point is that one can use game theory to show that this happens only under conditions of perfect competition. Since these almost never obtain, we can therefore predict that the current distribution is not maximizing the benefits for everyone, but is rather skewed in one direction (I'll leave you to guess which direction).

    The second example is related to Gary Becker's famous argument that neoliberal economies would eventually lead to the disappearance of discrimination. The basic idea is this: discrimination is irrational, since it is not supported by objective differences between the discriminated groups; therefore, a rational employer (for instance) can attract the discriminated groups by (say) offering equal wages, and so obtain an advantage over those employers who discriminate in their hiring practices, eventually driving them out of business. Now, game theory can be employed to show that Becker's reasoning is wrong. Again, there are a lot of qualifications that could be raised here, but the gist of it is that game theory shows that once a conventional behavior has been established, it can be very difficult to unsettle it, because the agents benefit from acting according to the convention and lose from not so acting. So, if discrimination becomes entrenched, it can be very difficult to eliminate it, because the agents will have incentives to discriminate.
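    To give a feel for the mechanism, here is a toy coordination game in Python (entirely my own illustration, not Becker's or anyone's actual model): each employer either follows the entrenched convention ("D") or breaks it ("N"), and payoffs reward matching what the other does. The point is that the entrenched convention is a Nash equilibrium, so a lone deviator loses out even though everyone breaking the convention together would be better.

    ```python
    # Payoffs as (row player, column player); numbers chosen only to make the point.
    PAYOFFS = {
        ("D", "D"): (2, 2),
        ("D", "N"): (1, 0),
        ("N", "D"): (0, 1),
        ("N", "N"): (3, 3),
    }

    def is_nash(row, col):
        """No player can gain by unilaterally switching strategies."""
        r, c = PAYOFFS[(row, col)]
        best_row = all(r >= PAYOFFS[(alt, col)][0] for alt in ("D", "N"))
        best_col = all(c >= PAYOFFS[(row, alt)][1] for alt in ("D", "N"))
        return best_row and best_col

    print(is_nash("D", "D"))  # True: the entrenched convention is self-sustaining
    print(is_nash("N", "N"))  # True too, but reaching it requires coordinated change
    print(is_nash("N", "D"))  # False: the lone deviator does worse and reverts
    ```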

    See also this very nice link for more on game theory: https://ncase.me/trust/

    (2) I think you're putting too much weight on the biological dimension, but, regardless, I do think you are right that it is not easy to implement changes. This is discussed by Bernard Williams in his famous paper "Internal and External Reasons", in which he argued basically as follows: for an agent to change his basic motivations, he needs to be motivated to so change, so this motivation has to be part of his basic motivations in the first place. So there are no real motivational changes (this is a bit crude, but the crudeness will not affect my main point---incidentally, Philippa Foot also develops similar ideas in her "Morality as a System of Hypothetical Imperatives"). Now, I agree with Williams's reasoning, but I think his conclusion is mistaken. The idea is that people can be motivated to improve themselves, and this motivation can be used to restructure their basic motivations. Of course, this assumes that people can have the motivation to improve in the first place, but I believe this can be achieved by the right upbringing (i.e. one that instills self-criticism as an important virtue in the person).
  • Kaplan's Logic of Demonstratives and the Symbol Table
    For Kaplan, the two utterances of "Bob" that you described are not occurrences of the same word, but of different words that just happen to share the same "acoustic image", so to speak. In particular, a proper name attaches to just one person, which explains why it has non-variable content (i.e. its character is a constant function). On the other hand, he considers that different utterances of "this" are, if utterances of a demonstrative (and not, say, of an anaphoric pronoun), utterances of the same word.

    Of course, this just invites the question about how to individuate words. Kaplan develops his account of words in his very creatively titled paper, "Words". There, he argues against seeing words as types, preferring to see them as what he calls continuants. Part of his point (and this is constant throughout his career) is that words are individuated by the intentions that animate them. Since he accepts Kripke's story about names (roughly, that we use names primarily with the intention to refer to their bearer), it is clear that the intentions governing the uses of "Bob" on the two occasions are different (in one case, you intend to refer to your brother, whereas, in the other, you intend to refer to your friend), and therefore the words are different. Anyway, this is a simplification, but I think it does get the gist of his position.
  • The rational actor


    (1) On idealization: yes, I do think it is a successful strategy in most, if not all, sciences. Note that idealization is not used (just) to isolate and formulate fundamental laws; rather, we use idealizations primarily to understand causal chains, where these need not be governed by strict laws. I do not think every science has "fundamental laws", but I do think that science is mostly in the business of uncovering causal chains.

    (2) On biology: supposing that you are right about the biology, it does not follow (at least, not without some highly contentious premises) that you are right about our needs and desires, because these can change without a corresponding change in our biology. So, for example, standards of attractiveness have varied wildly across ages and cultures. Or, to give a more personal example, I've been a vegan for a couple of years now and have had no need or desire for meat in quite a while. The point is, I think it is undeniable that people can shape at least some of their needs and desires rationally. If that is so, I think it is reasonable to ask whether our institutions could reflect this.
  • Dialetheism vs. Law of Non-Contradicton
    A couple of points:

    (1) First, I'd just like to second the claim above that Priest (and most dialetheists I know) does not claim that all contradictions are true, just that some of them are. One compelling example of an alleged true contradiction is, of course, the Liar sentence. It is surprisingly difficult to develop a classical account of the Liar that satisfies everyone and that is not prey to revenge paradoxes. Dialetheism provides a very straightforward solution to this and related paradoxes.

    (2) Second, paraconsistent logics in general are concerned with controlling the trivialization that follows from the principle of explosion. That is, such logics provide a workaround for when we find contradictions in our belief set or in our model. Now, you may say, why would we want such a workaround? Shouldn't we just jettison the contradiction and be done with it? Well, yes, but the problem is, how do we do this? Suppose I have beliefs B1, ..., Bn, and from these beliefs I eventually derive a contradiction, say A & ~A. This means that I should give up one of the Bi's, but which one? There may be no obvious way of selecting such a Bi, since there may be equal evidence for each of them. In that case, a reasonable course of action would be to investigate further into the source of the contradiction so that I can eventually revise my beliefs. In the meantime, however, do I need to act irrationally, as if I believed everything (which would follow by explosion)? Of course not. But this means that I will need to employ a paraconsistent logic, since I will need to reason in a way that blocks explosion. So paraconsistency may be a useful tool in "controlling" a contradiction during belief revision.
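    For a concrete sense of how explosion can fail, here is a minimal sketch using Priest's three-valued Logic of Paradox (LP), one standard paraconsistent system (my choice of illustration, not necessarily what anyone in this thread has in mind). Values are 0 (false), 0.5 (both), 1 (true); a sentence is "designated" if its value is at least 0.5:

    ```python
    # LP entailment check: A & ~A does not entail an arbitrary B, so explosion fails,
    # while ordinary inferences like A & B |= A still go through.
    from itertools import product

    VALUES = (0, 0.5, 1)
    designated = lambda v: v >= 0.5
    neg = lambda v: 1 - v
    conj = min

    def entails(premise, conclusion):
        """LP entailment over valuations of (A, B): designated premise forces designated conclusion."""
        for a, b in product(VALUES, repeat=2):
            if designated(premise(a, b)) and not designated(conclusion(a, b)):
                return False
        return True

    contradiction = lambda a, b: conj(a, neg(a))   # A & ~A
    arbitrary_b = lambda a, b: b                   # B

    print(entails(contradiction, arbitrary_b))                # False: no explosion in LP
    print(entails(lambda a, b: conj(a, b), lambda a, b: a))   # True: A & B still entails A
    ```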
  • The rational actor


    I think the two discussions (about economics, about punishment) are a bit different, perhaps in the direction gestured at above. In the case of rational decision theory, game theory, and other economic models, what is being constructed are, well, models, that is, deliberate falsifications of reality for the purpose of simplifying a complex causal network to aid our understanding. Briefly, when phenomena get too complex, it is very difficult to get a hold of them, so we idealize the complexity away (think of Galileo's inclined plane, which ignores things like friction, etc.). Obviously, all sorts of things can go wrong, especially if we forget that we are dealing with idealizations, but the general strategy is sound. So I think those who criticize rational decision theory as being too abstract are missing the point: the point is the abstraction.

    On the other hand, you're criticizing some philosophical theories of punishment as unreasonable, i.e. the issue here is normative. Of course, the two are related, since part of the problem (according to you, if I understood correctly) is that such theories have an impoverished conception of our human needs. Here, the above strategy won't work, since it is not a question of understanding a causal network anymore, but of how best to satisfy our human needs (that is why I think your criticism is independent of how we assess rational decision theory). My question here comes then from another direction: granted that we presently have a need for retribution, should we simply give in to this need, or can we shape it in some way? That is, perhaps some of our needs are not conducive to the good life, so to speak, and therefore should (if possible) be dropped. If that is so, shouldn't our institutions be such as to help in this task?
  • What do you experts say about these definitions of abstraction?


    I don't understand. Consider two actual (in contrast to potential?) circles on the Cartesian plane, say, one described by x^2 + y^2 = 1 and the other described by (x - 3)^2 + y^2 = 4. Are they instances of each other?
  • What do you experts say about these definitions of abstraction?


    I don't personally think there is any property that is common to every pair of objects (except in a gerrymandered way), but let us leave this to the side. What about the circle defined by x^2 + y^2 = 1 on the Cartesian plane? What are its instances?
  • What do you experts say about these definitions of abstraction?


    Again, from the fact that some (perhaps all) universals, like "circleness" (the universal), are abstract, it does not follow that every abstract entity is a universal. Consider a particular circle, say the one described by the equation x^2 + y^2 = 1 in the Cartesian plane. This is obviously an abstract entity. But it is not a universal (what would be its instances?). Similarly for particular numbers, say the number 2.
  • What do you experts say about these definitions of abstraction?
    It seems that you are confusing abstractness with being a universal. Some may defend that universals are abstract (but not all do: some people defend that universals are wholly present in their instances, and are therefore concrete), but I don't think anyone defends that every abstract object is a universal. Numbers, sets, and mathematical objects in general are abstract, but they are not universals. Consider the number 2. It is abstract, but it has no instances, so it is not a universal.

    The problem of characterizing abstract objects is very difficult. If you want some pointers, I strongly recommend reading Sam Cowling's Abstract Entities. Most people define abstract entities as entities which are not located in space-time. However, this raises the problem of how to characterize types. Consider, for instance, the letter "a". It is a type which can be tokened in many ways (by ink, by pixels, etc.). It clearly has no spatial location (in contrast to its tokens), but what about a temporal location? It seems intuitive to say that it was created at some point in time, and that perhaps it could cease to exist (if every token of it, including the tokens encoded in our memories, considered as physical states of the brain, were to be destroyed). On the other hand, types seem to be abstract, since we do not seem capable of interacting with them in the same way we interact with concrete objects.

    One way to amend this would be to require only that abstract objects not be located in space, but allow them to be located in time. But now, suppose substance dualism is true and the mind is not physical. It would follow that the mind is not in space (though it is in time), and therefore that it is abstract. But this seems counter-intuitive. Of course, most people think that substance dualism is not true, but the point is conceptual. So we would like to avoid this, if possible. It is not entirely clear how to fix the definition, though (for some---to my mind, ad hoc---suggestions, cf. Bob Hale's Abstract Objects).