Comments

  • A -> not-A
    "I disagree with regards to ordinary language" I'm not quite getting it, what is the disagreement you have concerning ordinary language? You think someone would make an inference from A->not-A to therefore not-A in ordinary language?NotAristotle

    The formal meaning of negation in intuitionistic logic refers to the syntactical inconsistency of the negated sign, rather than to a purported semantic counterexample denoted by the negated sign. Classical logic inherits the same meaning of negation from intuitionistic logic, except for infinitary propositions that appeal to the Law of Excluded Middle, which have no scientific or commonsensical application. So we should stick to discussing negation in intuitionistic logic, before proceeding to other formal logics such as affine linear logic, whose concept of negation is closer to ordinary use. In such logics (A --> Not A) --> Not A is not derivable, corresponding to the fact that Not A has the same semantic status as A.

    But can we elucidate the meaning of (A --> Not A) --> Not A in the systems for which it is valid, by appealing to the mutually exclusive states of the weather? Suppose that a weather forecaster said "It is raining in Hampshire therefore it is not raining in Hampshire". Jokes about the English weather aside, wouldn't you assume that they were talking about anything other than the weather in Hampshire? In which case, your abstaining from assigning a meaning to their words would resonate with the formal meaning of negation in intuitionistic and classical logic.

    As for formalities,

    (A --> ~A) --> ~A is little more than the obvious identity relation ~A --> ~A, due to the fact that ~A is definitionally equal to A --> f, where f denotes absurdity. So we at least have

    (A --> f) --> ~A

    But the only means of obtaining f from A is via the principle of explosion (A And ~A) --> f. And so it is sufficient that A implies ~A.

    (A --> ~A) --> ~A

    And since the converse direction is immediately true, we could in fact define the negation of A to be the fixed point of the expression X => (A --> X) that Haskell programmers call a Reader Monad.

    ~A = A --> ~A
    ~A = (A --> (A --> (A --> ..... ) ))

    which serves to highlight the meaning of Negation As Failure (NAF); a proof of ~A amounts to a finite proof that the right hand side doesn't converge, which represents an infinite failure to prove A by random search. But if we haven't managed to prove either A or ~A using our available time and resources, then we are at liberty to declare ~A by decree and reason accordingly, in which case ~A serves to nullify any hypothesized A by turning it into ~A, so as to ensure consistency with our failure to decide the issue, at least for the time being...
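    To make the formal reading concrete, here is a minimal Haskell sketch (the names are mine; the absurdity f is modelled by the uninhabited type Void from Data.Void):

    import Data.Void (Void)

    -- Intuitionistic negation: ~A is definitionally A --> f, with f modelled by Void.
    type Not a = a -> Void

    -- (A --> ~A) --> ~A : a hypothesis that A implies its own negation refutes A
    -- outright, by applying the hypothesis to itself.
    selfRefute :: (a -> Not a) -> Not a
    selfRefute f a = f a a

    -- The converse direction is immediate, since a function may ignore its argument.
    weaken :: Not a -> (a -> Not a)
    weaken notA _ = notA

    -- The fixed-point reading ~A = A --> ~A, i.e. A --> (A --> (A --> ...)):
    -- a value of Neg a is a strategy that keeps absorbing alleged proofs of A forever.
    newtype Neg a = Neg (a -> Neg a)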
  • A -> not-A
    So in common-sense Kripkean semantics,

    A --> NOT A says: all worlds that satisfy A also satisfy NOT A.

    But in Kripke semantics, a world satisfies NOT A if and only if it doesn't satisfy A, so no world can satisfy both. Hence the set S of worlds satisfying A is empty: there aren't any worlds satisfying A. Therefore
    NOT A is true, and A refers to nothing.
  • A -> not-A
    So you would say that a reductio ad absurdum is not an inference in the proper sense? — Leontiskos

    It is an inference in the syntactical sense of implication, but not in the semantical sense of implication as ordinarily used by scientists and legal practitioners who are in the business of inferring facts as opposed to uninterpretable sentences.

    In a consistent deductive system, if the sign "Not A" is either taken to be an axiom, or is inferred as a theorem, then it means that the sign "A" is non-referring and hence meaningless in that it fails to denote any element of any possible world among any set of possible worlds that constitutes a model of the axioms. By symmetry, the same could be said of the sign "Not A" being meaningless if A is taken as an axiom, but by model-theoretic tradition the sign A is said to not denote anything in a model if ~A is provable.

    For instance, let the sign "A" denote the proposition that the weather is wet in some possible world. If "A" is deductively assumed or proved, then A is a tautology, meaning that the logical interpretation of "A" is stronger than being a mere possibility and denotes the weather being wet in all possible worlds. On the other hand, if "~A" is provable, then no possible world is wet, in which case the sign "A" fails to refer.

    In conclusion, A and ~A can only both be meaningful if they both stand for possible but unnecessary states of affairs, in which case neither are provable. So the OP's problem isn't a problem, because the signs of the implication A --> ~A aren't simultaneously meaningful.
  • A -> not-A
    If I uttered: "If it is raining then it is not raining." ... If formal logic is "mappable" onto ordinary language, then you should be able to infer "oh okay, it's not raining." But no one speaks like that and no one would make such an inference. At least, no one would consider such an "argument" "valid." That being so, while I would prefer there not to be equivocal definitions of validity, it appears that there are, one formal, the other informal. — NotAristotle

    I disagree with regards to ordinary language, because we ordinarily reject contradictory premises for sake of avoiding contradiction; we naturally reject A whenever A implies (B And Not B) for any proposition B.

    One isn't inferring Not A in such cases, rather one is establishing a consistent set of premises for subsequent inferencing. This is reflected by the fact that the case you find to be problematic, is actually an alternative axiom used in the definition of negation in intuitionistic logic.
  • Continuum does not exist
    I believe you should review the definition of Dedekind cuts. First, they can't be open sets, since (as Tones pointed out) L and R are sets of rationals. — fishfry

    Yes, and that's what I meant. To explain myself more clearly, I meant L and R to refer to open sets of rationals together with the entire set of rationals representing +Inf and the empty set of rationals representing -Inf. I'm not sure why people might have jumped to a different conclusion.

    It is right and necessary to point out, as I think you mean to imply, that traditionally Dedekind cuts are understood as being objects derived from sets of rationals, in which the rationals are understood to be constructed, or simply to exist, prior to the creation of open sets of rationals, which are then used to define the cuts called "irrational numbers". That approach to understanding the reals is very "bottom up", and possibly in contradiction with Dedekind's own understanding of his cuts, which I suspect might have been "top down" (see the SEP for more discussion on his thoughts about the continuum in relation to actual infinity).

    In my case, I am stressing the benefits of a "top down" approach, in which one uses lattice theory to define a lattice of abstract elements that is isomorphic to the open sets of the rationals extended by end points. The open sets of the rationals are only intended to serve as a model of this lattice, which is free to not assume the existence of points and other closed sets.
  • Continuum does not exist

    I agree with you, but I probably didn't make myself clear enough. I'm saying that if L|R is a Dedekind cut consisting of two open sets (as is the case when the cut defines an irrational number that isn't already contained in R), then the union of L, L|R and R is a disjoint partition of the continuum, which is semantically problematic in being disconnected (even if not "disconnected" according to the narrow topological definition of connectedness in terms of open sets only). The closed interval [r,r] is what I meant by writing [r].
  • Continuum does not exist
    Instead of points one works with lattices of open sets. I don't see this as improving the intuitive understanding of continua. Continuity in elementary topological spaces rests upon the idea of connectedness. The topology of the reals is fairly well established, so maybe start by studying this. — jgill

    I'm tempted to think of Dedekind cuts as a mathematics joke, in the sense that when open sets L and R are used to define a Dedekind cut L|R for an irrational number r, the generated closed set [r] is disjoint from both L and R, and yet their union is equated with the continuum. As I understand it, this disjointed representation of the continuum is in semantic conflict with the continuum's connected topology, which is ultimately the cause of the continuum being empirically uninterpretable and practically useless in real life without abuse of notation.

    I think the interesting thing about the open-sets of the extended continuum (with -Inf and +Inf introduced as end points), is that they can be interpreted as representing propositions, due to the fact that they form a distributive lattice with a top element (-Inf,+Inf), whose join operation is set union representing logical disjunction, and whose meet operation is set intersection representing logical conjunction, in spite of this lattice lacking a bottom element (since the empty set isn't counted here as an open set).

    Likewise, the closed-sets of the extended continuum can be interpreted as representing negated propositions, due to the fact that the intersection of an open set (-Inf, x) with a closed-set [x,y] is the empty set representing falsity. More specifically, any point [x] represents a false proposition under this interpretation, i.e. p[x] := NOT { p(-Inf,x) OR p(x,+Inf)}, where p denotes a predicate that maps open sets to propositions of some sort. This interpretation refrains from asserting the existence of a point x for which p is true, but it doesn't deny the existence of such points either. (To deny the existence of such points is to go from a pointfree topology to a pointless topology).
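    As a rough illustration of this reading, here is a minimal Haskell sketch, under my own simplifying assumption that an "open set" is a single open interval of extended rationals:

    -- The rationals extended by the end points -Inf and +Inf.
    data Ext = NegInf | Fin Rational | PosInf deriving (Eq, Ord, Show)

    -- An open interval (a, b), assumed non-empty, read as a proposition.
    data Interval = Interval Ext Ext deriving (Eq, Show)

    -- The top element (-Inf, +Inf).
    top :: Interval
    top = Interval NegInf PosInf

    -- Meet (logical conjunction) is intersection; an empty intersection, e.g. of
    -- (-Inf, x) with anything at or to the right of x, represents falsity, which is
    -- why no bottom element is available among these open sets themselves.
    meet :: Interval -> Interval -> Maybe Interval
    meet (Interval a b) (Interval c d)
      | lo < hi   = Just (Interval lo hi)
      | otherwise = Nothing
      where
        lo = max a c
        hi = min b d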

    In short, the open sets of the extended rational numbers can represent propositions derived by coinduction with respect to a 'top' proposition that is continuous in the sense that it isn't isomorphic to any union of propositions whose domains are disjoint. This top proposition is empirically meaningful. For example, we generally don't consider a priceless Ming vase to be the same after smashing it and gluing the pieces back together. Going point-free with our continuous topology, at least initially, allows us to consider "points" as being defects that are introduced when damaging a continuum to produce a non-continuum, such as in the destructive testing of a smooth object.
  • Empiricism, potentiality, and the infinite
    Consider the operational meaning of "infinity" that refers to circular control flow that lacks a termination condition. That is what the lemniscate symbol represents. So there is at least a pictorial, operational and geometric meaning of infinity. Such flows generate infinite processes that often produce observable data on each iteration, so there is also empirical meaning with regards to the execution of an infinite process. And the controllers of the execution of the process get to decide when to terminate it once it has served its purpose, and so the ability to control such a process, as well as its forced termination by the user, gives controlled infinite processes empirical meaning.

    Karl Popper's principle of falsification refers to the case in which an observer hypothesizes that an infinite process that the observer does not control, is responsible for producing a stream of observations.

    For example, suppose that an investigator conjectures that all swans are white and that he will terminate his investigation upon observing the first non-white swan. Here, the only empirical meaning that the investigator can ascribe to his conjecture are the conditions under which his infinite process conjecture is refuted by observation of a counter-example. For Popper, such potential refutation is enough for the investigator's hypothesis to be considered as scientifically meaningful. However, suppose that the investigator considered his infinite process hypothesis to be true. As a true hypothesis, it would have no empirical implications, since if it were true then the investigator would never exhaust his stream of observations so as to know that it is true. So I interpret Popper's falsification criterion as implying that infinite process hypotheses are empirically and scientifically meaningful in the sense of the criteria that falsify them, but that such hypotheses cannot be interpreted as true hypotheses, since such interpretations are meaningless.
  • Continuum does not exist
    I am however puzzled how all the members of the natural number set are finite yet it has aleph_0 members. — MoK

    Semantic puzzlement at the fact that one cannot finitely bound the natural numbers is understandable, even when having no syntactical puzzlement with regards to a formal definition of the naturals.

    Firstly, there is presumably no semantic puzzlement about the situation in which one constructs the naturals one by one by counting upwards from zero, for in that case one assumes full control of the number generation process that one never finishes, for which at any time one has only constructed naturals that are a constructively finite distance from zero.

    So I think that semantic puzzlement is in relation to arbitrarily large natural numbers that one assumes to exist but which aren't constructively defined.

    For example, consider a guessing game between Alice and Bob, in which Alice privately thinks of a natural number which Bob then has to guess. Suppose that Bob is given an unlimited number of chances for guessing Alice's number. If Alice doesn't cheat by changing her number in response to Bob's guesses, then Bob has a winning strategy that will eventually terminate, such as guessing each natural number in turn by counting upwards from zero. But then suppose Alice does cheat to ensure that Bob will lose. How could Bob ever know it? Alice could for example secretly define her number to be one greater than Bob's largest guess. In which case her natural number isn't constructively finite in relation to Bob's strategy, in spite of being constructively finite in relation to Alice's strategy in the situation where Bob eventually gives up.
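    A small Haskell sketch of the two situations (the encoding and names are mine, purely for illustration):

    -- Bob's honest strategy: enumerate the naturals upwards from zero.
    bobGuesses :: [Integer]
    bobGuesses = [0 ..]

    -- If Alice honestly fixes a natural number n in advance, Bob wins after finitely many rounds.
    roundsUntilBobWins :: Integer -> Int
    roundsUntilBobWins n = length (takeWhile (/= n) bobGuesses) + 1

    -- A cheating Alice defines her "number" relative to Bob's guesses: after each
    -- guess it is at least one greater than the largest guess so far, so no guess
    -- of Bob's ever matches it.
    cheatingAlice :: [Integer] -> [Integer]
    cheatingAlice = scanl1 max . map (+ 1)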
  • Relativism vs. Objectivism: What is the Real Nature of Truth?
    Semantic contextualism needs to be distinguished from truth relativism. According to the former position, differences of opinion are not interpreted as reflecting differences in truth assessment with respect to the same set of facts, but as reflecting differences in the contextual meaning of what each opinion is asserting.

    Semantic contextualism, when pushed to the extreme as a dogma, interprets all assertions as being necessarily true when contextually understood, and trivialism ensues, which raises the question as to whether all of the problems of epistemology reduce to the trivialities of semantics.
  • Continuum does not exist
    Formally, the classical continuum "exists" in the sense that it is possible to axiomatically define connected and compact sets of dimensionless points, a definition that possesses a model that is unique up to isomorphism thanks to the categoricity of second order logic.

    But the definition isn't constructive and is extensionally unintelligible for some of the reasons you pointed out in the OP. Notably, Dedekind didn't believe in the reality of cuts of the continuum at irrational numbers and only in the completeness of the uninterpreted formal definition of a cut. Furthermore, Weyl, Brouwer, Poincare and Peirce all objected to discrete conceptions of the continuum that attempted to derive continuity from discreteness. For those mathematicians and philosophers, the meaning of "continuum" cannot be represented by the modern definition that is in terms of connected and compact sets of dimensionless points. E.g, Peirce thought that there shouldn't be an upper bound on the number of points that a continuum can be said to divide into, whereas for Brouwer the continuum referred not to a set of ideal points, but to a linearly ordered set of potentially infinite but empirically meaningful choice sequences that can never be finished.

    The classical continuum is unredeemable, in that weakening the definition of the reals to allow infinitesimals by removing the second-order least-upper bound principle, does not help if the underlying first-order logic remains classical, since it leads to the same paradoxes of continuity appearing at the level of infinitesimals, resulting in the need for infinitesimal infinitesimals and so on, ad infinitum.... whatever model of the axioms is chosen.
    Alternatively, allowing points to have positions that are undecidable, resolves, or rather dissolves, the problem of 'gaps' existing between dimensionless points, in that it is no longer generally the case that points are either separated or not separated, meaning that most of the constructively valid cuts of the continuum occur at imprecise locations for which meta-mathematical extensional antinomies cannot be derived.
    Nevertheless this constructively valid subset of the classical continuum remains extensionally uninterpretable, for when cut at any location with a decidable value, we still end up with a standard Dedekind Cut such as (-Inf,0) | [0,Inf), in which all and only the real numbers less than 0 belong to the left fragment, and with all and only the real numbers equal or greater than 0 belonging to the right fragment, which illustrates that a decidable cut isn't located at any real valued position on the continuum. Ultimately it is this inability of the classical continuum to represent the location of a decidable cut that is referred to when saying that a point has "Lebesgue measure zero". And so it is tempting to introduce infinitesimals so that points can have infinitesimal non-zero volume, with their associated cuts located infinitesimally close to the location of a real number.

    The cheapest way to allow new locations for cuts is to axiomatize a new infinitesimal directly, that is defined to be non-zero but smaller in magnitude than every real number and whose square equals 0, as is done in smooth infinitesimal analysis, whose resulting continuum behaves much more nicely than the classical continuum for purposes of analysis, even if the infinitesimal isn't extensionally meaningful. The resulting smooth continuum at least enforces that every function, together with its derivatives of every order, is continuous, meaning that the continuum is geometrically much better behaved than the classical continuum, which allows pathological functions on its domain that are discontinuous, as well as being geometrically better behaved than Brouwer's intuitionistic continuum, which only enforces functions to be uniformly continuous and which in any case is only supposed to be a model of temporal intuition rather than of spatial intuition.
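    For a flavour of how a nilpotent infinitesimal behaves, here is a minimal Haskell sketch using dual numbers; this is only an algebraic analogue of the ε with ε² = 0, not a model of smooth infinitesimal analysis itself:

    -- Dual numbers a + b·ε, where the ε² term is simply dropped (ε² = 0).
    data Dual = Dual { standardPart :: Double, epsilonPart :: Double } deriving Show

    instance Num Dual where
      Dual a b + Dual c d = Dual (a + c) (b + d)
      Dual a b * Dual c d = Dual (a * c) (a * d + b * c)   -- the b·d·ε² term vanishes
      negate (Dual a b)   = Dual (negate a) (negate b)
      fromInteger n       = Dual (fromInteger n) 0
      abs _    = error "not needed for this sketch"
      signum _ = error "not needed for this sketch"

    -- f(x + ε) = f(x) + f'(x)·ε, so the ε-coefficient recovers the derivative.
    derivative :: (Dual -> Dual) -> Double -> Double
    derivative f x = epsilonPart (f (Dual x 1))

    -- derivative (\y -> y * y + 1) 3  ==  6.0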

    The most straightforward way of getting an extensionally meaningful continuum such as a one dimensional line, is to define it directly in terms of a point-free topology, in an analogous manner to Dedekind's approach, but without demanding that it has enough cuts to be a model of the classical continuum. E.g, one can simply define a "line" as referring to a filter, so as to ensure that a line can never be divided an absolutely infinite number of times into lines of zero length, and conversely, one can define a collection of "points" as referring to an ideal, so as to ensure that a union of points can never be grown for an absolutely infinite amount of time into having a volume equaling that of the smallest line. This way, lines and points can be kept apart without either being definable in terms of the other, so that one never arrives at the antinomies you raised above.
  • Communism's Appeal
    Usually these sorts of discussions begin on the wrong foot by conflating communism with state capitalism under a ruling party, that is, a situation resembling modern day corporate America in many respects, which is ironically reinforced by "communist"-hating conservatives refusing to support progressive taxation.

    I'm no Marx expert, but understand that he viewed communism descriptively as an inevitable outcome of capitalism, as much as he did as a moral imperative. With modern society's inevitable transition to universal income in the coming years, the appeal of communism seems beside the point.
  • The Liar Paradox - Is it even a valid statement?
    Suppose y = sin(cos(x)). Which (sin or cos) would you say is inside, and which outside? — bongo fury

    Quine was presumably referring to the stratification of types originally proposed by Russell, which ensures that a map between type universes resides in a universe that is higher than both the input and output universes. We might recall the fact that each universe contains a subclass that is isomorphic to the previous universe, as represented by the quotation marks in the liar sentence. So if we start with the highest level universe that we say contains everything we regard to be true, and use it to build in stepwise fashion an infinitely descending chain of so-called object languages that are each the meta-language of their predecessor, the liar sentence can be interpreted as a stream of fluctuating truth values with respect to isomorphic, but non-identical terms of different types.

    By contrast, both Sin and Cos are maps of type Real --> Real, i.e maps between terms of Reals, where the type Real --> Real resides in the same universe as the type Real, as does any function of functions of ... functions of reals; for they all reside in the universe definable in terms of second order logic. Following their example, we could alternatively interpret the liar sentence as directly referring to a stream of fluctuating values, where the stream and its values all exist in the same universe as a binary approximation to those trigonometric functions.
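    The "stream of fluctuating values" reading fits in one line of Haskell (a sketch of the idea, not a claim about how Russell or Quine would formalise it):

    -- The liar read coinductively: each stage negates the previous stage.
    liar :: [Bool]
    liar = iterate not True   -- True, False, True, False, ...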
  • Evidence of Consciousness Surviving the Body
    The identification of anything is subjective and relative to convention. E.g, we don't get hung up about whether or not today's chair is said to be the same as tomorrow's chair, so why should we treat persons as having objective haecceity?

    Tibetan monks might have their politico-cultural reasons for objecting to the Chinese government choosing the next Dalai Lama, but do they really have a metaphysical leg to stand on?
  • Evidence of Consciousness Surviving the Body
    So the idea of persons as real and local spatial-temporal objects with objective physical boundaries is fundamentally incompatible with the idea that persons can be reincarnated. — sime


    Beings are not only objects, they are also subjects of experience, and the nature of subjective experience is not necessarily describable in those terms.
    Wayfarer

    Are subjects of experience observable and identifiable or not?


    Also consider the discovery of tulkus in Tibetan Buddhism. They are sought out by various means and subjected to examination and are said to be clearly discerned as incarnations of previously-existing figures. As already mentioned, Buddhist culture assumes the reality of rebirth as a matter of course, even despite the tension with the no-self principle.
    Wayfarer

    So tulkus are observable subjects of experience who are interpretable in terms of types of objects?
  • Evidence of Consciousness Surviving the Body
    In respect of the question of identity, Buddhists will respond, if you ask them, ‘are you the same person you were as a child?’ ‘No’. ‘Then are you a different person?’ Also, ‘no’. There is a continuity, but also change. I don’t think Buddhism has a difficulty with that. Overall, I find the Buddhist attitude congenial in these matters.

    So I’m not really seeing your philosophical objection at this point.
    Wayfarer

    I'm not objecting to Buddhist sentiment to the extent that they understand that identity relations are arbitrary psycho-linguistic constructs that necessitate their semantic conclusions. In the case of the no-soul rebirth paradox, if concepts related to personhood aren't part of one's fundamental ontology, for example because one considers concepts of personhood to be unreal because one considers persons to be semantically reducible to impersonal forces of nature, then rebirth follows as a tautological conclusion, since the personhood concepts of life and death are both eliminated in the final analysis of reality. In which case empirical evidence for rebirth is meaningless.

    By contrast, if one conceives of persons as being fundamentally real and local token-objects whose existence is ontologically fundamental, then permanent death without reincarnation follows as a matter of tautology, and there cannot exist evidence to the contrary - for even allegedly successful past-life regressions must be discounted as illusory if one holds one's concept of persons as tokens as sacrosanct.


    So the idea of persons as real and local spatial-temporal objects with objective physical boundaries is fundamentally incompatible with the idea that persons can be reincarnated. One of the concepts must give way to the other, and the question cannot be settled by appealing to empirical evidence, for the very meaning of "empirical evidence" lies downstream of this ontological decision.
  • Evidence of Consciousness Surviving the Body
    You may not be familiar with the research. It wasn’t based on 'past-life regression'. The cases Stevenson sought out were those where children claimed to be someone other than who they were known to be e.g. would start saying 'your not my family' or 'this is not my home, I live in (some other place)' etc. Then the researchers would look for evidence of that claimed previous identity, trying to identify death notices, locations, and other details to corroborate the infant's story. — Wayfarer

    Yes, I wasn't questioning the veracity of anecdotes such as the one you mention, rather I'm pointing out that to interpret such cases as being "evidence for reincarnation" is relative to a convention that defines personal identity in terms of memories, by which the person is said to be reincarnated. Which is why I do not take such cases with special seriousness - not because I am assuming that such reported cases cannot have happened as described, but because I consider the identity of persons to be arbitrary and decided by convention, and ultimately grounded in either psychological habits and prejudice or in the utility of adopting the chosen identity criteria.

    For example, let's assume that the account you mention is accurate and defies mundane natural explanations. Then unless one has defined personhood in terms of personal memories, one cannot conclude that the child is a reincarnation of the previous person he is said to remember. In which case all that one concludes is that the child presently has abnormal access to novel information of historical significance.

    Certainly, the child-as-token is not a previous person - by definition of "token". Compare this situation to a caterpillar-token that is said to become a butterfly-token. In that case, we don't insist that the butterfly remembers his life as a caterpillar in order for us to identify the caterpillar with the butterfly, rather we identify their tokens as being parts of a greater token on the basis of temporal continuity. Whereas in the case of the child, there is no apparent spatio-temporal continuity for us to say that the child was the becoming of the previous person, and instead we bridge their lives via a notion of "memory continuity", in spite of the fact that we rarely if ever employ such criteria in our own lives when we ordinarily identify ourselves and our loved ones over time.
  • Evidence of Consciousness Surviving the Body
    As I mentioned to Philosophim, the point about the children with past-life recall is that there is at least the possibility of validating their statements against documentary and witness accounts, something which is obviously not possible with near-death experiences, as they are first-person by definition. — Wayfarer

    Our memories are mutable. We continually create, delete and edit our memories in real time, including the memories that we interpret as being veridical. In general we don't interpret amnesia as constituting proof of personal absence during the past. So why should the possession of a veridical memory be interpreted as constituting proof of having witnessed the past? When it comes to conceptions of personal identity, why should ownership of memories be taken more seriously than ownership of a collection of disposable photographs?

    The idea that studies of past life regression can verify or refute reincarnation, is in relation to a convention that defines personal identity in terms of memory possession, together with a block-universe conception of the past that memories are considered to refer to in a manner analogous to time travel. So I don't interpret studies of past life regression as drawing deeper metaphysical conclusions, regardless of whether such conclusions are positive or negative, than our pragmatic judgements of object identification.
  • Evidence of Consciousness Surviving the Body
    That’s all well and good if your criteria of reincarnation is as slack as a good impression of that person or just imitation. Personhood has a more strict definition of what a person is as it covers what that person has experienced in life their memories made, habits personality traits and just general character. The issue boils down to personal identity and what it means to be you. — kindred

    I'm arguing that even a supposedly strict definition of personhood is slack. Slackness is an inexorable feature of identity criteria; any application of identity criteria to any problem of philosophy leads to superficial and incomplete conclusions that are products of linguistic convention. At best, one's conclusions are circular and merely reiterate the identity criteria that one employed.
  • Evidence of Consciousness Surviving the Body
    What conditions are required for the reincarnation of Elvis Presley?

    In my pragmatic view, a good Karaoke singer who does a reasonable impersonation of Elvis on stage, can be said to be of roughly the same "type" as Elvis, at least until the end of the impersonation.

    I don't consider the questions of reincarnation to run deeper than that, because identity criteria are inexorably vague, conflicting and decided by convention or psychological prejudices. So why should it be assumed that the question of reincarnation has a definite and absolute answer that transcends our conventions?
  • Do (A implies B) and (A implies notB) contradict each other?
    The main problem for me is, why can we read a→(b∧¬b) as "a implies a contradiction" but not ¬(a→(b∧¬b)) as "a does not imply a contradiction? — Lionino

    In general, the consistency of an axiomatic system isn't provable in an absolute sense due to Godel's second incompleteness theorem; the upshot being that consistency is a structural property of the entire system that isn't represented as a theorem by the system if it is sufficiently powerful.

    Suppose that the logic concerned is weaker than Peano arithmetic, such that it can prove its own consistency. Then in this case, a proof of ¬¬a metalogically implies that ¬a isn't provable, i.e that a does not imply a contradiction.

    But if the axiomatic system contains Peano arithmetic such that the second incompleteness theorem holds, then a proof of ¬¬a does not necessarily imply the absence of a proof of ¬a, since Peano arithmetic cannot prove its own consistency.
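    For reference, in the usual arithmetised notation (standard textbook notation, not anything specific to this thread), the consistency statement in question is Con(PA) := ¬Prov_PA(⌜0=1⌝), and the second incompleteness theorem states that if PA is consistent then PA ⊬ Con(PA); so a proof of ¬¬a inside such a system does not by itself rule out a proof of ¬a.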
  • Do (A implies B) and (A implies notB) contradict each other?
    Sure, but that's not really what the example is there to assert, as is clear from the rest of the paragraph. They mentioned replacing the fact about dogs 2+2 = 4 in the next line. It's "if a statement is true, then that statement is implied by any statement whatever," which is straightforwardly counter intuitive. — Count Timothy von Icarus

    That's true of classical logic, and more specifically its fragment known as intuitionistic logic, due to the fact that the respective rule of implication is essentially the logic of functions (including those that ignore their arguments to produce a constant value), rather than the logic of causality - which is described by relevance logic and linear logic.
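    In propositions-as-types terms, the counter-intuitive principle is just the type of a function that ignores its argument; a one-line Haskell sketch (my notation, not the thread's):

    -- "If B is true, then A implies B, for any A whatsoever": a proof of B yields
    -- a function from A to B that simply discards its input.
    trueImpliedByAnything :: b -> (a -> b)
    trueImpliedByAnything proofOfB _ = proofOfB   -- i.e. const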

    As for "Modern Symbolic Logic", it doesn't have a well-defined meaning since it refers to a plurality of logics that are separately described in terms of different mathematical categories.
  • Even programs have free will
    But consider the fact that the halting behaviour of two identical algorithms stands and falls together. So although there does not exist an infallible universal halting tester, there exists an infallible special-case halting tester for any given algorithm, namely a copy of that very algorithm.

    Although an epistemic limitation falls short of a metaphysical proof, I am sympathetic to the idea of free will, because in my opinion the conceptual distinction between free will and determinism rests upon a belief in absolute infinity, which I reject.

    In my view, to say that "A => B is necessarily true" in the sense of material causation, is to say that there exists a Z such that "A => Z is necessarily true" and "Z => B is necessarily true". If we reject the idea that this definition can appeal to actually infinite recursion, then the use-meaning of "A => B is necessarily true" in any given context must eventually bottom out to a finite chain of implicative reasoning, in which the meaning of "necessarily true" is left undefined.

    A simpler way of putting it, is to say that we make up the meaning of "A => B is necessarily true" as we go along. This proposition doesn't have a precise a priori meaning, and so isn't contradicted by a future discovery that A => B fails to hold; rather the proposition meant by the sentence "A => B is necessarily true" changes on discovery that A => B fails to hold.
  • How do you interpret nominalism?
    Bertrand Russell's Principia Mathematica was nominalist; he treated sets as merely a means of referring to groups of particulars, partly in response to Set theoretic paradoxes, but the approach made it impossible to describe all of mathematics.

    This demonstrates an unconscious tendency of nominalism; why do nominalists have a tendency to appeal to an ontology based on the existence of particulars, as opposed to an ontology that starts from a united whole?

    In Bertrand Russell's case, it was in the hope of making analysis tractable in piecemeal fashion, in contrast to the British Idealists who might also be described as nominalist, but who considered reality to consist of a single holistically unified entity. But this makes analysis impossible, since it implies that a local material change to reality causes the meaning and hence definitions of the rest of reality to change.
  • Do (A implies B) and (A implies notB) contradict each other?
    A related example is Godel's trick in his ontological proof of God as discussed in the other thread, which was to define a property P so as to enforce the condition

    ¬((g → P(g)) ∧ (g → ¬P(g)))

    i.e. ¬¬g, which is a classically acceptable proof of existence.
  • Probability Question
    Just because a set S is declared to be "infinite" doesn't imply that S possesses a literally non-finite number of elements; rather it only implies that a bijection between {0,1,2..n} and S isn't or cannot be specified a priori and that n cannot be bounded a priori.

    And even when a bijection between S and the set of natural numbers is specified, this only implies that S must be understood in terms of a monotonically increasing process rather than in terms of a completed basket of goods.

    In both cases, the use-meaning of "infinity" should be understood to mean "finitization is decided by circumstances that are external to the specification logic".

    So philosophical or practical questions about "infinite facts", as opposed to mathematical questions concerning the definition of mathematical infinity, should always be decided by elaborating assumptions until the facts concerned are "finitized". The presence of infinity in a non-mathematical question is only an indication that the question concerned isn't well-posed.
  • Gödel's ontological proof of God
    I don't see where that is implied in the argument.

    P(ψ)≡¬N(ψ) — sime


    If N is supposed to mean necessary existence, that is a rejection of axiom 5.
    Lionino

    N was supposed to mean the possibility modality (N standing for Negative Properties, in order to stand for the opposite of Positive Properties). The question here I was interested in, is how to give a syntactical definition of Positive Properties such that the resulting argument follows as a valid tautology in some modal logic. This was partly in order to help clarify the definitions Godel provided, even if his assumptions need to be altered slightly and the resulting argument and its conclusion aren't quite the same.

    For example, taking Positive properties to refer to what is necessarily true of all individuals in every possible world, turns Axiom A2 into the definition of a functor, which is rather tempting. It also makes the possibility of god follow as a matter of tautology.

    Also, Godel's definition of essences seems close to the definition of the Categorical Product. So why not take the essence of an individual to be the conjunction of his properties?

    One thing I overlooked was that God was defined as referring to the exact set of positive properties, which would mean that according to my definition of P, all individuals would be identical. But then suppose we weaken the definition of "Godliness" to refer to a set that contains all the positive properties and possibly some of the negative (i.e. contingent) ones?

    I think there are quite a few pedagogically useful questions here.
  • Gödel's ontological proof of God
    It seems to me you are thoroughly confused — Lionino

    about what?
  • Gödel's ontological proof of God
    I think the most remarkable and amusing part of Godel's argument, is in the beginning before the use of modal logic, in which he argues for the existence of a 'god term' by turning the principle of explosion on its head.

    Constructively speaking, an existential proposition is proved by constructing a term that exemplifies the proposition, as per the Curry Howard Isomorphism. Classically speaking, an existential proposition can also be derived by proving that its negation entails contradiction, as per the law of double negation.

    In Godel's proof however, he defines a so-called Godliness predicate G, where as usual ~G(x) corresponds to the principle of explosion

    G(x) --> B(x)
    G(x) --> ~B(x)

    where B is any predicate.

    But in Godel's case, he defines G as only implying properties that satisfy a second-order predicate he calls "Positivity", which is a predicate decreeing that G(x) --> B(x) and G(x) --> ~B(x) cannot both be true.

    So in effect, Godel crafted a non-constructive proof-by-absurdity that implies the existence of a god term on the basis that non-existence otherwise causes an explosion! This is in stark contrast to the normal constructive situation of proofs-by-absurdity in which a term exemplifying a negated existential proposition is constructed in terms of a function that sends counterexamples to explosions.


    The rest of Godel's proof is unremarkable, since he defined G as implying its own necessity, meaning that if G is said to be true in some world, then by definition it is said to be true of adjacent worlds, which under S5 automatically implies every world.
  • Gödel's ontological proof of God
    The irony of Modal Logic is that there are so many alternatives to choose from, corresponding to the fact that Logic, and a fortiori modal logic, has no predictive value per se. But modal theologians aren't using Modal Logic to derive or express predictions, rather they are using Modal Logic to construct a Kripke frame with theologically desired properties. So ontological arguments aren't necessarily invalid for achieving their psychological and theological purposes, provided they aren't construed as claims to knowledge.

    In fact, I'm tempted to consider Anselm's argument to be both valid and sound a priori, and yet unsound a posteriori. This is due to the fact that although our minds readily distinguish reality from fiction, I don't think that this distinction is derivable from a priori thought experiments.
  • Gödel's ontological proof of God
    S5 is the logic of epidemics in which every possible world is infected by a virus whose transmission is symmetric and transitive.

    As for Godel's argument, if we take the special case of his argument in which the positive properties P are taken to be the properties that are true for every possible individual, i.e by taking



    and if we replace axiom A1 above with



    where



    Then I expect that the resulting argument reduces to a trivial tautology of S5 in which all individuals are infected by the godliness virus.
  • Infinite Staircase Paradox
    I don't think it has anything to do with mathematics. This is perhaps clearer if we don't consider the button to turn the lamp on and off but instead consider it to alternate between two or more colours.

    What number would you assign to the colour red, and why that? What number would you assign to the colour blue, and why that? Shall we use e and i, because why not?

    The logic of the lamp just has nothing to do with numbers at all.
    Michael

    Yes, and that sounds identical to the philosophy of intuitionism :) It doesn't matter what type of object is associated with the lamp's output.

    By contrast, Benacerraf et al argue along more classical lines, by defining an abstract completion of the sequence that doesn't contradict Thomson's premises, but which requires changing the original problem by adding a point at infinity to accommodate such a completion.
  • Infinite Staircase Paradox
    Thomson's views sound intuitionistic, in that he didn't apparently consider the "completion" of button-pressing to have a meaningful answer in relation to his thought-experiment. Indeed, if one formally treats Thomson's Lamp as being an unfinishable choice-sequence, which is an object equivalent to what computer-science calls a "stream", then Brouwer's weak axiom of continuity explicitly forbids the construction of any theorem that postulates a property of Thomson's stream that isn't decidable in a finite number of steps. Indeed, Brouwer can be understood as inventing intuitionism to explicitly forbid the informal interpretation of total functions as representing super-tasks, which are concepts that are incompatible with temporal intuition.
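    Treating the lamp as a stream makes the point tangible; a minimal Haskell sketch (my own encoding, not Thomson's or Brouwer's):

    data LampState = On | Off deriving (Eq, Show)

    -- The lamp's history as an infinite stream: every finite prefix is computable,
    -- but there is no "state after all the presses" available to inspect.
    lampHistory :: [LampState]
    lampHistory = iterate toggle Off
      where
        toggle On  = Off
        toggle Off = On

    -- take 6 lampHistory == [Off,On,Off,On,Off,On]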

    By contrast, Benacerraf changed the goal posts by giving Thomson's Lamp a formal treatment from the perspective of classical analysis, in which he interpreted Thomson's Lamp as being an incomplete description of an extended abstract function with a point at infinity, whose value can be chosen as being continuous with some property of the original sequence. However, Benacerraf's formal interpretation doesn't have a temporal interpretation in the sense demanded by the intuitionists. And his methodology runs into problems in situations where the function to be completed has conflicting notions of continuity, as in the Littlewood-Ross Paradox.
  • An Analysis of "On Certainty"
    But recall that Wittgenstein regarded the ordinary meaning of "to know" to not imply infallibility, in the sense that even if a fact P necessarily implies another fact Q, "knowing that P" does not necessarily imply Q.

    This stems from his epistemic consideration that in a literal sense nothing is knowable in the sense demanded by a philosopher. And yet he appreciated that everyone including himself ordinarily uses the verb "to know" all the time. Therefore he concluded that the ordinary meaning of "to know" isn't an insinuation of ideal knowledge.

    If Moore's knowledge of his hands is interpreted in that light, then had Moore later discovered that he didn't actually have hands, his discovery wouldn't contradict his earlier ordinary claim to "know that he had hands".
  • Infinite Staircase Paradox
    I imagine that a supertaskist might agree with the formalist or intuitionist that supertasks aren't mathematically or logically defensible, whilst nevertheless insisting that reality contains supertasks, by arguing that mathematical logic is the map rather than the territory, and by conceding that they are only using mathematics as a descriptive tool for expressing their beliefs, rather than as a prescription for justifying their a priori beliefs in super-tasks.

    For instance, a supertaskist might appeal to the fact that one cannot say how many moments of time have passed during a minute, or how many physical operations took place in one's computer to sum 1 + 1, and they might appeal to this inability to measure, divide and count experience or events as grounds for being open minded to the idea that space-time is a literal continuum.

    But in that case, how does the supertaskist propose identifying what isn't a super-task? If super-tasks are to have empirical meaning and inferential value, the supertaskist must delineate task from super-task, but how can they delineate them on a non-ad hoc basis?
  • Fall of Man Paradox
    The inability of dimensionless points to be reconciled with the continuum is what motivated Whitehead's point-free geometry, a precursor to the field of Pointless Topology, as for instance formalised using locales, whose distributive law characterizes the meaning of a "spot". (It might be useful to test this law in relation to the SB tree, for both the truncated and infinite version).
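    For reference, the distributive law in question, i.e. the defining law of a frame (the algebraic presentation of a locale), says that finite meets distribute over arbitrary joins:

    a ∧ (b₁ ∨ b₂ ∨ ...) = (a ∧ b₁) ∨ (a ∧ b₂) ∨ ...

    where the join on each side may range over arbitrarily many elements.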
  • Infinite Staircase Paradox
    See that phrase, "perfect information"? That's why I say formalism attempts to do the impossible. In other words, it assumes an ideal which cannot be obtained, therefore it's assumption is necessarily false.Metaphysician Undercover

    Perfect information isn't an assumption of formal reasoning, rather it is regarded to be a necessary condition of the meaning of "formal" reasoning in that it is by definition finitely deducible and does not require appealing to unformalized intuitions about infinite and ideal objects. Most importantly, the condition of perfect information ensures that formal reasoning cannot interpret an expression such as {1,2,3,...} as representing an abbreviation of some ideal object; the former expression must either be formally treated as a finite object of some type, or else the expression must be considered illegal.

    It is actually by sticking to formal reasoning that the illusion of the ideal is never obtained. The opposite impression is due to Platonists disguising themselves as formalists, which might be said to even include Hilbert himself.

    Formalism makes the reasonable demand that whatever informal intuitions originally motivated the construction of an axiomatic system, and whatever informal interpretations one might subsequently give to the signs of that system, the methodology of theorem-proving should be purely algorithmic and make no appeal to such intuitions, whether such intuitions be rooted in platonism or in Kantian intuition.

    I view formalism as a form of Platonism. It's a Platonist game in which the participants deny their true character, that of being Platonist. Notice "perfect information" is the foundational feature of Platonist idealism. That perfection is the only thing which supports the eternality of Platonic ideals. So formalism and Platonism are really just the same thing, even though the formalists will claim otherwise. — Metaphysician Undercover

    The irony of Hilbert, is that his formalism ultimately led to the rebuttal of his own informal intuitions about infinity, namely his presumption that a closed axiomatic system must possess a finite representation of its own consistency. Had Hilbert better understood the implications of his formalism, and especially the finite formal meaning of The Law of Excluded Middle which he apparently accepted for instrumental purposes, then Godel's incompleteness theorem might not have come as a shock to him. It is evident that Hilbert was a methodological formalist who didn't mean to insinuate that mathematics was a meaningless game void of semantics, but only that the terms used to denote sets, formulas and constants shouldn't require interpretation for the purposes of theorem proving. Unfortunately, his intuitions misled him with regard to the outcome of his formal program.

    If we inspect the finite activity of theorem proving in a formal system, we see that every term that is informally interpreted as denoting an "infinite object" only possesses finite conditions under which the term is introduced into a theorem and under which the term is eliminated from a theorem.

    Different formal systems can be regarded as differing only in regards to their ability to distinguish types of finite object. E.g Intuitionism that formalizes choice-sequences can distinguish uncompleted finite sets from ordinary finite sets, whereas ZFC as a theory of first-order logic can only distinguish finitely defined functions from finite sets - so whilst ZFC might be informally said to be a theory about "infinite sets", this isn't the proof-theoretic formal meaning of ZFC, and so a formalist is free to reject the platonic myths that surround ZFC.
  • Infinite Staircase Paradox
    You can say that I have a problem with formalism, because I do. Like claiming that accepting certain axioms qualifies as having counted infinite numbers, formalism claims to do the impossible. — Metaphysician Undercover

    Formalism as a philosophy considers mathematics to be reducible to a finite single-player sign game of perfect information in which proofs refer to deterministic winning strategies, and hence Formalism does not support the Platonic interpretation of abstract mathematics as denoting actually infinite objects, whatever the formal system concerned.

    So I think your problem is actually with Platonic myths that have become psychologically wedded to innocent formal definitions, and in particular the formal definitions of limits and total functions that are ubiquitously misinterpreted in both popular and scientific culture as denoting a non-finite amount of information, E.g as when the physicist Lawrence Krauss misleads the public with nonsense about the physical implications of Hilbert Hotels.
  • Infinite Staircase Paradox
    There is a fundamental problem with identifying supertasks with series limits — sime


    This is the kind of mistake that Benacerraf makes in his response to Thomson, as explained here.

    The lamp is not defined as being on or off at particular times; it is turned on or off at particular times by pushing a button.

    This is an important difference and is why so many "solutions" to Thomson's lamp (and other supertasks) miss the point entirely.

    If the lamp is turned on after 30 seconds then, unless turned off again, it will remain on for all time. This is why if you claim that supertasks are possible then you must be able to give a consistent answer as to whether or not the lamp is on or off after 60 seconds. If you cannot, because no consistent answer is possible, then this is proof that the supertask is metaphysically impossible.

    It is necessary that the lamp is either on or off after 60 seconds, and for it to be either on or off after 60 seconds it is necessary that the button can only been pressed a finite number of times before then.
    Michael

    My impression of Benacerraf is that he is defining Thomson's Lamp as a boolean valued function

    Lamp : ℕ ∪ {∞} → {0, 1}

    on the domain of the extended natural numbers which introduces an additional point at "infinity", and then arguing that the value at ∞ can be chosen arbitrarily and independently of the function's limiting value, if any. But if this is the case, then he isn't engaging with Thomson's argument and has merely shifted the goal posts to declare victory in an incomparable axiomatisation.

    But the point about Frege's Law Vb also applies to the extended natural numbers; Thomson's lamp, when defined as the function above, has a domain consisting of two definite and maximally separated points 0 and ∞, and a number of points between 0 and ∞ that is intensionally described as being countably infinite. However, if Frege's Law Vb is rejected for the reasons mentioned previously, then although the domain still has the aforementioned intensional properties, it does not possess an extensionally well-defined number of points, in which case it cannot be considered to represent the metaphysical notion of a supertask.

    Essentially, mathematical analysis will fail to persuade unless one is already a true believer of supertasks.
  • Infinite Staircase Paradox
    There is a fundamental problem with identifying supertasks with series limits, namely the fact that literally infinite summations are not expressible in calculus, given that they cannot be written down.

    A formalist is free to use the name "1/2 + 1/4 + ..." to denote 1, but the formalist cannot interpret "1/2 + 1/4 + ..." as an expression implicitly representing part of an infinite summation, because the formalist considers expressions to have no meanings other than being finite states of a syntactical parser when proving a theory in a finite number of steps.
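    To put the formalist point in code (a sketch; the choice of Haskell and the names are mine): only finitely many terms of the series are ever written down or evaluated, and "1/2 + 1/4 + ..." names the limit 1 without any infinite summation being performed.

    -- Partial sums of 1/2 + 1/4 + 1/8 + ...; each one is produced by a finite rule.
    partialSums :: [Double]
    partialSums = scanl1 (+) [1 / 2 ^ n | n <- [1 :: Integer ..]]

    -- take 5 partialSums == [0.5,0.75,0.875,0.9375,0.96875]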


    Frege fell into a similar trap as the supertaskers in the Grundgesetze when he proposed his Law V. He wanted there to be a one-to-one correspondence between every function and its representation as a table of values, even in the case of functions with infinite domains. So he proposed Basic Law Vb with disastrous consequences:

    {x∣Φx} = {x∣Ψx} → ∀x(Φx ↔ Ψx).

    To a finitist or potentialist, Law Vb can be interpreted as introducing fallacies of induction into Set Theory, since they will likely interpret the sets-as-extensions on the left hand side as denoting a finite amount of observable information, and they will likely interpret the function on the right-hand side as denoting an unbounded amount of implicit information, meaning that they cannot regard Law Vb to be a reliable rule of induction. Furthermore, according to their reasoning Law Vb cannot be regarded as constituting a definition of the right hand side, unless one gives up the idea of functions having infinite domains.