• What time is not
    Time is a human concept of convenience
    — sandman

    Yet there is a universal speed limit - the speed of light - and speed = distance / time, so it appears that something / some mechanism within the universe must be 'time-aware' else the speed limit could not be enforced - so time seems not just a human concept - it seems to be part of nature.
    Devans99

    An anti-realist with respect to time might say that "nothing can travel faster than the speed of light" is a statement about the grammar of special relativity, rather than a factual statement about the world.

    The reason why special relativity 'concludes' that nothing can travel faster than c relative to any inertial frame of reference is that otherwise causality would be violated by faster-than-light objects moving 'backwards' in time to inform the past.

    If we could make empirical sense out of this idea of causality being violated, then special relativity could not rule out the possibility of faster-than-light objects. But we cannot make empirical sense out of the idea of causal violations, since it leads to empirical contradictions. Therefore an anti-realist might argue that "faster than light travel" isn't a false proposition but a meaningless sentence. In which case "nothing travels faster than light" is a statement about the language of physics rather than a negative proposition about the world.
  • Why I gave up on Stoicism.
    Is stoicism really a personal philosophy?

    One of the things I find ironic about conservatives, especially the more radical Christian conservatives who preach self-responsibility, self-reliance, self-motivation and other values very much aligned with Stoicism, is that they rely on preaching and perpetual communal gatherings to instil those values: a permanent, ongoing arrangement that directly contradicts their message of self-motivation, self-reliance and self-responsibility.

    I think that the conservative/evangelical mind-set is probably more resilient to life's misfortunes, which explains conservatism's appeal, but their actual practice for instilling and maintaining that mindset is communal and involves the social regulation, coercion and motivation of individuals, rather than individuals regulating and motivating themselves.
  • Alternative proof for the Carnap-Gödel diagonal lemma


    I think you would be clearer if you referred only to the syntactical notion of derivability, since the diagonal lemma does not refer to truth, and neither does it assume the law of excluded middle (LEM). Any derivation of the lemma that did appeal to LEM would not be constructively acceptable, invalidating any consequent formula that appealed to the lemma, which would include Godel's and Tarski's theorems, which are in fact constructively accepted.

    The diagonal lemma only states that for every well-formed formula f(x) of one free variable, there exists a sentence s, such that the derivation of s implies the derivation of f('s') and vice versa, and that this fact is itself derivable.
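    In symbols, a standard rendering (with PA as the background theory and 's' the numeral coding s) would be:

        for every formula f(x) with one free variable, there exists a sentence s such that PA ⊢ s ↔ f('s')

    where '⊢' denotes derivability rather than truth.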

    Earlier, I was referring to the syntactical fact that when you wrote

    ∀f∈F:N→{false,true}∃s∈S:s↔f(┌s┐)

    'f' isn't assumed to be a provably total predicate function. For example, when the lemma is applied to derive Godel's incompleteness theorem, f refers to ~Prov('s'), and whilst the disjunction "~Prov('s') or Prov('s')" might be derivable for any 's' via an appeal to LEM, axiomatic arithmetic cannot consistently decide which part of the disjunction is the case. Therefore any derivation of the diagonal lemma that appeals to such a hypothetical function isn't a permissible derivation within Peano arithmetic.
  • Why aliens will never learn to speak our language
    I agree with you that we lack a good definition for general intelligence. But as my example of a thing that is clearly as intelligent as us but can't predict all our associations demonstrates, even our intuition doesn't agree with the Turing test as what is intelligent. We need to keep working to understand what intelligence is and as I currently see it, the way the Turing test is used in this work and in things like AI development, it diverts us into a path that is harmful. It is quite obvious that a transistor based general intelligence doesn't need to be able to speak any language in an indistinguishable way from humans and that that would be an inefficient and unnecessarily complex way to program general intelligence - yet people tend to see that as an important goal right now. Harmful, I say!Qmeri

    Whether or not a particular Turing test is appropriate in a given situation is largely a question concerning the breadth of the test. For example, if testing whether a computer 'really' understands Chess, should the test be very narrow and concern only its ability to produce good chess moves? Or should the test be very broad, even including the ability of the computer to produce novel metaphors relating chess to the human condition?

    Personally, I don't interpret the spirit of the Turing test as making or implying ontological commitments regarding how AI should be programmed or trained, or as to how intelligence should represent sensory information with language, or even as to what intelligence is or whether it is ultimately reducible to metrics. Neither do I understand the spirit of the Turing test as being prescriptive in telling humans how they ought to judge a participant's actions. Rather, I understand Alan Turing as very modestly pointing out the fact that humans tend to recognise intelligence in terms of situationally embedded stimulus-response dispositions.

    In other words, the specifics of what goes on inside the 'brain' of a participant are considered relevant only to the functional extent that the brain's processes are a causal precondition for generating such situationally embedded behavioural repertoires; the meaning of language and intelligence is left undetermined with respect to how the stimulus-response mappings are implemented.

    Indeed, an important criterion of intelligence is the ability to generate unexpected stimulus-responses. Hence any formal and rigid definition of intelligence solely in terms of rules, whether internally, in describing the computational processes inside the brain, or situationally, in terms of stimulus-response mappings, would be to a large extent an oxymoron.
  • Why aliens will never learn to speak our language


    Yes, the Turing test is anthropomorphic, but why is that a problem in the absence of an 'objective' alternative?

    Not even a logical language can be identified without mirroring. Recall Wittgenstein's example of an alien tribe stamping their feet and grunting in a way that is compatible with the rules of Chess. Only if we recognised their culture as being similar to ours might we assert that they were playing Chess.
  • Why aliens will never learn to speak our language


    Recall that in the Turing Test, a human evaluator has to decide purely on the basis of reading or hearing a natural language dialogue between two participants, which of the participants is a machine. If he cannot determine the identities of the participants, the machine is said to have passed the test. Understood narrowly as referring to a particular experimental situation, yes the Turing Test fails to capture the broader notion of intelligence. But understood more broadly as an approach to the identification of intelligence, the Turing test identifies or rather defines intelligence pragmatically and directly in terms of the behavioural propensities that satisfy human intuition. The test therefore avoids metaphysical speculation as to what intelligence is or is not in an absolute sense independent of human intuition.
  • Why aliens will never learn to speak our language
    The very definition of 'alien' is in terms of the respective entity's tendency or capacity to mirror and predict our stimulus-responses for its own survival. The Turing 'Test' is a misnomer; for the test constitutes a natural definition of intelligence. If we cannot interpret an entity's stimulus-responses as acting in accordance with the basic organising principles of human culture, then as far as we are concerned, the entity isn't worthy of consideration. So to a large extent, the ability of aliens to speak 'our language' is presupposed in our definitional criteria.
  • True Contradictions and The Liar
    Married and bachelor are two seperate and opposing qualities.Harry Hindu

    As far as I'm concerned, I'm a married-bachelor until the ink of the registrar's signature is dry.
  • True Contradictions and The Liar
    Could you continue on to an explanation of what counts as an alternating truth value? Is that what makes it self-negating? If it's true, it is false, etc...creativesoul

    Self-negation, or perhaps to state more accurately, the potential for self-negation, is a common property of negative universal propositions of meta-linguistics, metaphysics and epistemology that declare limits on sense, cognition or knowledge. For example, "All sentences have indeterminate meaning" , "All things are empty of intrinsic existence and nature" , and "Every belief is fallible" are all potentially self-negating propositions. Common coping strategies in the face of such potential contradictions are either to impose an artificial and rigid hierarchy of reference like Bertrand Russell did to avoid Russell's Paradox, or to quit philosophy and declare it to be nonsense as the Early Wittgenstein did, or to accept 'true contradictions' as Hegel did. Accepting alternating truth value is another coping mechanism that understands a person's concept of truth in terms of their present state and rejects the dogma of a static truth concept.

    Often self-negation occurs when a conclusion negates its own arguments, as when Wittgenstein declared that the propositions of the Tractatus are meaningless, after they had served as a 'ladder' to understanding. The later Wittgenstein's "private language arguments" have similarly been interpreted as self-negating "ladder" arguments, and similar remarks have been made about Kant's Critique of Pure Reason. Pure reason certainly can lead to contradictions, yet we don't simultaneously entertain both sides of such philosophical contradictions; rather we use logic to hop from one conclusion to its opposite and then usually quit philosophising.

    I also have a sneaking suspicion that alternating belief states might become a practical problem of artificial intelligence. After all, the human brain is a dynamical system and there is no compelling reason to assume that belief states converge to an equilibrium.
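    As a toy illustration of that worry, here is a minimal sketch in Python (the update rule is hypothetical, chosen only to mimic the liar) of a belief state that alternates forever rather than settling:

        # Treat acceptance of the liar sentence as a state b in {0.0, 1.0}
        # and apply the self-negating update repeatedly.
        def liar_update(b: float) -> float:
            # "if it is accepted as true, reject it; if rejected, accept it"
            return 1.0 - b

        b = 1.0  # start by accepting the sentence
        trajectory = []
        for _ in range(6):
            trajectory.append(b)
            b = liar_update(b)

        print(trajectory)  # [1.0, 0.0, 1.0, 0.0, 1.0, 0.0] - alternating, never converging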
  • Alternative proof for the Carnap-Gödel diagonal lemma
    In Godel's incompleteness theorem f is taken to be ~Prov('s'), the negation of the 'provability predicate'.
    It is easy to show via the construction of Prov('s') that Prov('s') is derivable whenever s is a theorem. However it cannot be shown, assuming consistency, that ~Prov('s') is derivable whenever s isn't a theorem. Prov is only a computable partial function, therefore a proof of the lemma by way of contradiction cannot be obtained.
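    One way to make the asymmetry explicit (a standard observation, using the thread's notation):

        if PA ⊢ s, then PA ⊢ Prov('s')            (provability is an existential, Sigma-1 claim)
        if PA ⊬ s, it does not follow that PA ⊢ ~Prov('s')   (non-provability is a universal, Pi-1 claim)

    and it is the second direction that a proof by contradiction would need.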
  • True Contradictions and The Liar
    Nothing is alternating, though.frank

    That depends on your notion of truth. Classically, you're right; for truth is not traditionally considered to be the property of a sentence or of its construction, but of a timeless matter of fact referred to by the sentence that is existentially independent of, and external to, the sentence. From that perspective, the notion of 'alternating truth' I have sketched should be interpreted as referring to 'alternating belief' in the truth of a sentence, where a sentence is said to be 'true' merely if one accepts it and 'false' otherwise.
  • True Contradictions and The Liar
    The liar paradox has multiple interpretations and resolutions. For instance, if 'this sentence' is interpreted syntactically as being recursive self-quotation, we end up with an infinitely deep nest of quotes ' ' ' ... ' ' ' that isn't even a sentence, let alone a well-formed sentence.

    In my opinion, the Godel sentence used in the proof of the incompleteness theorem is best understood in an analogous fashion, since the proof is purely syntactical. Its semantic interpretation as a sentence asserting its own lack of provability is a heuristic argument that isn't formally acceptable, because the Godel number supposedly referred to by the Godel sentence is infinitely long when the sentence is recursively unpacked by substituting the sentence into itself.

    On the other hand, if 'this sentence is false' is interpreted semantically as being a pair of sentences, each sentence belonging to a different language whose meaning is the negation of the sentence in the other language, then we get the traditional semantic understanding of the sentence as a contradiction.

    However, since the liar paradox is a paradox of natural language that is its own meta-language, as opposed to being a paradox of formal language, my preferred resolution is to consider the liar paradox as being a meaningful sentence (since we can understand the paradox) that isn't a contradiction; rather it is a self-negating sentence with alternating truth value. This interpretation best describes our use of the paradox, i.e. "It is true - hence it is false - hence it is true... etc"
  • Nature of time.
    Assertions of change are relative to a notion of identity, which is terminological.

    For example, suppose that every waking moment is labelled by the current reading of an atomic clock, such that no two situations are considered to be identical. Relative to such nominalistic terminology, there isn't a notion of identity, and hence there is neither a notion of change nor permanence.
  • Hard problem of consciousness is hard because...
    Personally, what I think the p-zombie thought experiment demonstrates is that my own feelings, imagination and judgements constitute a substantial part of my definition of 'other' minds. For I can, to a limited extent, choose to perceive and imagine those around me to be 'full of spirit and consciousness', or I can perceive and imagine them to be soulless robots or zombies, by perceiving them and thinking about them in different aesthetic ways. (The concept of the presence of another mind and the concept of the absence of another mind refer to different senses)

    So whilst I agree that 'other minds' exist, I don't agree that they exist independently of my aesthetic judgements of them. For sure, I cannot predict what a person might say and do next, and my predictions concerning that person's behaviour constitute part of my definition of their mind (or lack thereof). But their actual behaviour and functionality are not my sole definitional criteria regarding my concept of their mind, for my own feelings and intuitions are very much also a part of my concept of 'other minds'. Hence I cannot be skeptical about the existence of other minds in the sense of truth-by-correspondence, for other minds are partly made true by my construction.

    Similarly, suppose that society is divided as to whether tomorrow's robots possess consciousness. In my opinion, there is no mind-independent 'matter of fact' as to whether or not robots possess consciousness. Personally, if I feel that a robot is conscious then it is conscious. Any so-called 'objective' definition of robot consciousness will be defined, ultimately, in terms of the social consensus, which one might agree with or might not. I wouldn't regard any such dispute over the existence of robot consciousness to be a dispute over facts 'in themselves' - except for the parts of the dispute that have behavioural implications.
  • When is it rational to believe in the improbable?
    What is the relationship between defined subjective beliefs and undefined or unknown physical probabilities?

    Consider an urn containing N balls, each ball being one of K colours. Suppose we express our 'subjective indifference' to the colour c of a ball randomly drawn from the urn by saying that our 'prior belief' is P(c)=1/K for c=1..K. On the surface, this assignment might look objectionable on the grounds that it is the physicality of the urn we are actually interested in, which could contain any proportion of balls of each colour. Yet our supposedly 'subjective prior beliefs' are in fact equivalent to saying that the actual ball colour frequencies inside the urn are left undefined by our problem; for notice that, averaged over every possible urn containing N balls of K potential colours, the frequency with which each colour is drawn is 1/K.
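    A minimal enumeration makes the point concrete; a sketch in Python with hypothetical values N=3 and K=2 (the particular numbers don't matter):

        from itertools import product
        from fractions import Fraction

        N, K = 3, 2  # hypothetical illustrative values

        urns = list(product(range(K), repeat=N))   # every possible assignment of a colour to each of the N balls
        totals = {c: Fraction(0) for c in range(K)}

        for urn in urns:
            for c in range(K):
                totals[c] += Fraction(urn.count(c), N)   # chance of drawing colour c from this particular urn

        averages = {c: totals[c] / len(urns) for c in range(K)}
        print(averages)   # every colour comes out at Fraction(1, 2), i.e. 1/K

    The average is taken over all possible urn compositions rather than over any particular urn, which is just the 'leaving the actual world-state undefined' described above.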

    In other words, in this problem our expression of a 'subjective' and 'definite' prior probability is rational, but 'subjective' is actually a misnomer, since the prior is a frequency summary of logical deductions over the set of possible world states defined by our problem, in which the actual world-state is left undefined.

    In conclusion, whenever a subjective belief is expressed and explicitly quantified in a problem, it should be understood frequentially in terms of a set of logical deductions over a set of possible worlds.
  • When is it rational to believe in the improbable?
    Probability theory is fraught with problems when interpreted to refer to epistemic uncertainty. Unless one is a psychologist whose object of study is mental processes, it is better to avoid the epistemic interpretation of probability entirely, in my opinion.

    If your intuitions are the same as mine, you might prefer to represent epistemic uncertainty in terms of a set of probability distributions, as opposed to a particular distribution. That way, the controversial use and interpretation of 'prior probabilities' is avoided entirely.

    To illustrate, consider trying to catch a fish in a lake with a net. Ideally the net would cover the entire volume of the lake. The net is then 'reeled in' to catch the fish. The initial placement of the net represents the 'prior search space' of the problem, and could be said to represent any 'prior probability distribution' that assigns positive mass over the same area.
  • Hard problem of consciousness is hard because...
    How does ghost in the machine solve the problem? How do you explain subjective experience of the ghost? And whose ghost is it? Mine? Or is it some shape shifting lizard alien playing some game through my avatar?Zelebg

    Ghost-in-the-machine metaphysical problems are what happens when a plurality of different phenomenological senses are (mis)interpreted as a plurality of substances.

    For example, I can directly imagine an object that I assert is 'red'. But here my private expressive use of 'red' does not assume nor appeal to external definitional criteria. Hence this use of 'red' has no necessary connection to the public use of the term 'optical red', which relates to physical experiments concerning the electromagnetic spectrum. This latter use is representational and communicative rather than imaginative. This is all that needs to be said, phenomenologically speaking. There isn't a 'hard problem' to explain here, unless one conflates private expressives with public representations.
  • Hard problem of consciousness is hard because...
    The hard problem only exists for naturalists, because they consider the concepts they use to describe the world to be semantically divorced from sense.

    For phenomenological traditions, there only exists the 'easy' problem of explaining the unity of intentionality. For there is no gap within their concepts for the hard problem.
  • Karl Popper - Summoning Demons
    Popper might have done better to have restricted the notion of an empirical proposition to one whose domain of realisation is finite and closed rather than open and potentially infinite. That way he could have treated scientific laws and the demon-summoning example uniformly and symmetrically, by saying that both lack empirical content - the former because it is not verifiable, requiring infinite confirmation, and the latter because it is not falsifiable, begging potentially infinite consideration.

    In my opinion, it is better to require empirically meaningful propositions to be decidable, i.e. to be both potentially verifiable and potentially falsifiable, which requires the proposition in question to have finite extent.
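    The connection between finite extent and decidability can be spelled out (this is just elementary logic, not a claim of Popper's): over a finite, closed domain {a1, ..., an} a universal proposition reduces to a finite conjunction that can in principle be checked case by case,

        ALL x in {a1, ..., an} have property Y   ↔   Y(a1) & Y(a2) & ... & Y(an)

    whereas over an open, potentially infinite domain the corresponding conjunction never terminates.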

    Any universal proposition of the form "ALL X's have property Y", where the domain of the 'proposition' is potentially infinite, should be regarded as a pragmatic statement of scientific policy for guiding decision making, rather than as a proposition describing potential scientific fact.
  • Probability is an illusion
    Any purported factual distinction between determinism and randomness isn't clear, in light of the problem of induction and the difficulty in defending an objective analytic-synthetic distinction.

    How else can the 'factual' deterministic properties of a system be defended or even described, except in terms of the behavioural frequencies of ensembles of similar systems, to which a limit argument is then applied to produce a statement that has no correspondence to reality?

    Determinism is neither a factual nor a logical concept; logic refers to statements that we treat as being identical. But identicality isn't a factual statement. All we have are 'factual similarity judgements' that refer to distinguishable facts which 'share' a set of preconditions and consequences, in the sense that we have put them into a rough correspondence. Identicality is our treating of 'similar facts' as synonyms.
  • Can Hume's famous Induction Problem also be applied to Logic & Math?
    There exists disagreement as to the extent to which Hume considered the problem of induction to be epistemological versus metaphysical. Considered epistemologically, the problem of induction is simply the problem of predicting the physical consequences of an experiment, given that there are invariably rival explanatory hypotheses that are equally agreeable with respect to general principles of logical deduction or causation.

    However, when considered ontologically, logical deduction is also called into question. The underlying issue is that there doesn't exist agreement regarding the relationship of deduction to induction. For example, deduction might be considered to be a special case of induction in which there is believed to exist perfect certainty for a conclusion with respect to a given premise.

    Some philosophers identify deduction with the semantic notion of synonymy, yet for most people for most of the time, deduction is identified with the consequences of physical calculation as demonstrated by our reliance on computers. Hence deduction in practice is treated as a special case of (fallible) induction.
  • Exploring analytical philosophy with Banno
    The logical problem here, the philosophical interesting side issue, is that beliefs overdetermine our actions. There are other beliefs and desires that could explain my going to the tap. — Banno's profile quizzed by Wallows

    It's remarkable that when it comes to simple AI agents such as Amazon Alexa, we tend to avoid attributing beliefs to them, because we are confident in our causal understanding of their linguistic behaviour. So, for instance, if Alexa expressed a false sentence we might say she was merely reporting the contents of an outdated database, or directly expressing a programming error or sensor failure, rather than accusing her of literally harbouring a false belief. In short, I think we tend to be externalists relying on the causal theory of reference when it comes to understanding artificial brains, which ironically makes us more forgiving of AI than of humans, to whom we tend to subconsciously attribute miraculous causative and representational internal properties without scientific justification.

    Examples such as this tempt me into thinking that the notions of belief and volition will gradually be eliminated from human psychology and ordinary discourse, along with the epistemological notion of 'objective' truth, and replaced by a richer and environmentally-integrated holistic notion of behavioural semantics that is specific to each and every individual. Such a notion would appeal heavily to the causal theory of reference when it is used to understand the state of any human or AI agent, to the effect that the notion of a 'shared' linguistic semantics would effectively be abandoned.
  • Exploring analytical philosophy with Banno
    I think Chomsky avers (somewhere on youtube) that Hume and Heraclitus were privy to the same insight. Of course he draws a different lesson from it than Quine. But he doesn't say the doctrine itself is mistaken, or even that it is behaviouristic. And it isn't. It points out that you can't objectively ground reference in behaviour.bongo fury

    To that, one might want to add a long list of philosophers who have rejected epistemological foundationalism on the basis of either phenomenological or causal arguments, for inscrutability is a simple corollary of holism and uncertainty.

    Whenever an engineer measures the 'false positive' rate of a prediction rule, it is always in relation to a definition of ground truth that varies from experiment to experiment. For example, in a face-recognition machine-learning problem the definition of 'ground truth' is the particular image dataset used to train the classifier algorithm. But there cannot be an all-encompassing dataset for defining what a face image is across every face recognition problem, because every situation has different and conflicting auxiliary premises, such as what counts as a 'disguised' or occluded face.
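    A minimal sketch of that point in Python (the labels and the helper function are hypothetical, not any particular library's API): the same predictions receive different 'false positive' rates under different ground-truth conventions.

        def false_positive_rate(predictions, ground_truth):
            # false positives divided by all actual negatives, relative to the chosen ground truth
            negatives = [p for p, g in zip(predictions, ground_truth) if g == 0]
            return sum(negatives) / len(negatives) if negatives else 0.0

        preds          = [1, 0, 1, 1, 0]
        ground_truth_a = [1, 0, 0, 1, 0]   # one labelling convention
        ground_truth_b = [1, 0, 1, 1, 1]   # another convention, e.g. counting occluded faces as faces
        print(false_positive_rate(preds, ground_truth_a))   # 0.333... (1 false positive among 3 negatives)
        print(false_positive_rate(preds, ground_truth_b))   # 0.0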

    I don't think Quine meant to imply anything more than that.
  • Exploring analytical philosophy with Banno
    Any absolute or all-encompassing notion of inscrutability is self-inconsistent, something that Quine was presumably aware of. We can only understand the notion of inscrutability on a case-specific basis when translating terms of one language into terms of another language. For example, we can understand the inscrutability of 'Gavagai' terms belonging to a native speaker's language relative to our own linguistic practices, including our use of the word 'rabbit'. Likewise, we can understand the inscrutability of 'rabbit' references in our own language relative to our understanding of potential scientific experiments in behavioural linguistics. None of these uncontroversial senses of inscrutability add up to a grand philosophical thesis.
  • The Subjectivity of Moral Values
    ↪TheMadFool No, because to say that something is 'subjective' is to say something about its composition.

    Pain is subjective because it is made of states of a subject.

    Pain cannot be true or false. Truth and falsity are properties of propositions.

    The proposition "Mike is in pain" is true if Mike is in the subjective state constitutive of pain, false if he is not.

    So, subjective and objective are terms that I am using to refer to something's composition.

    Truth and falsity are properties of propositions.
    Bartricks

    What makes you think that subjectivity/objectivity isn't also a property of propositions?

    Let us suppose that society never spoke of abstract pain, and that it instead invented a unique "pain designation" term for each and every person, that applied only to that particular person. E.g. "Bartrick-ouch", "MadFool-ouch" etc. In such a community, would it make sense to classify utterances of "Bartrick-ouch" as being subjective/objective?

    Recall that we use public criteria for determining whether a verbal report is subjective or objective. In the case of "abstract pain" applied to a particular individual, we use more than the behavioural response of an individual for determining whether "abstract pain" is an appropriate designation of their situation; for the meaning of "abstract pain" is in relation to the average behavioural response of the average individual with respect to the average situation.

    Yet in the case of "Bartrick-ouch", we cannot, by definition, compare your behavioural responses to other people's. As far as we are concerned, if you yell "Bartrick-ouch!", that can only mean bartrick-ouch.
  • Krishnamurti Thread
    "The observer is the observed", as with a Metaphysical assertion, shouldn't be interpreted as being a report or proposition. Rather, these sorts of statements are better understood to be meta-cognitive speech-acts to re-conceive one's idea of self and world.

    An inherent characteristic of meta-cognitive speech-acts is their circular justification, which makes them appear viciously circular and possibly self-refuting when analysed logically. But arguably this is as much true for our ordinary conceptual schemata as it is for mystical or otherwise alternative conceptual schema.

    In my opinion, it isn't logically possible for Buddhists to enter into metaphysical arguments. If a closed-minded critic claims that Buddhist expressions of thought portray to him something false or meaningless, then the critic is expressing something that is undeniable; namely the inexorable effect that Buddhist expressions of thought have upon him. Given that Buddhism is a pragmatic philosophy, with the cognitive dimension of its practice concerning therapeutic acts of thought, the current 'language-game' that the critic disputes is by definition unsuitable for him. He is invariably the best person to know what alternative language-game he is better suited to playing.
  • Adam Eve and the unjust punishment
    The biblical meaning of "Good and Evil" as in 'The Tree of Knowledge of Good and Evil' can be interpreted to mean everything that exists, rather than it referring specifically to moral categories. In which case the parable of the Garden of Eden is a natural metaphor for expressing the epistemological stance of pragmatism that rejects the representational idea that knowledge is a mirror of nature, and that ontologically prioritises immanent experience over theoretical constructs that are intellectually derived from such experience.
  • All we need to know are Axioms
    No. 1 expresses the limitless discursive activity of rational analysis, which can sometimes be represented compactly via an infinite loop as in No. 2, except where iterative deduction leads to intermediate conclusions that are not identical to their premises.

    Yet the ultimate inability of rational analysis to defend any given proposition doesn't mean the proposition is false, with Zeno's paradox being the paradigmatic example. For we cannot give a logical proof of motion, and yet we still 'know' of motion because we are nevertheless able to literally construct it; this is a vivid demonstration of why knowledge cannot be represented solely in terms of axioms, and why any account of knowledge must distinguish the activity of practical synthesis from the activity of discursive analysis.

    Nevertheless, one way to study such paradoxes is by way of epistemic logic, in which some axioms directly represent beliefs or knowledge, while other axioms represent higher-order beliefs or knowledge such as 'one's knowledge of one's own knowledge' , 'one's beliefs about one's own knowledge', 'one's beliefs about one's own beliefs' etc. etc.
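    For instance, the standard modal axioms used in such systems, writing Kp for 'p is known', include:

        K(p → q) → (Kp → Kq)      (distribution)
        Kp → p                    (knowledge is factive)
        Kp → KKp                  (positive introspection: knowledge of one's own knowledge)
        ~Kp → K~Kp                (negative introspection: knowledge of one's own ignorance)

    with the two introspection axioms being exactly the 'higher-order' ones, and the most commonly disputed.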


    By the way, circular reasoning, where the conclusion is considered to be identical to the premise and hence where no deduction has actually taken place, is characteristic of normative speech acts like "Tidy your room!", whose justification in response to a child's scepticism might consist of the reply "Because I said so!". In my opinion, metaphysics, from the perspective of cognitive psychology, is the study of a particular class of self-reinforcing speech-acts that influence language, motivation and perception.
  • All we need to know are Axioms
    Wittgenstein briefly entertained a somewhat similar idea in the Blue Book, when he said "apparently it didn't occur to Socrates to enumerate everything we call 'knowledge'", as part of an argument for empiricism.

    However, to think of knowledge in terms of axioms is misleading, because

    i) we can only write down a finite number of axioms, even though our knowledge production faculties can produce an indefinite number of axioms, without an a priori knowable upper-bound.

    ii) Knowledge equally consists in the use of axioms and their creation; yet these processes cannot be specified as additional axioms, because we then enter an infinite regress (see Quine's 'Truth by Convention' and Lewis Carroll's Paradox).

    iii) we do not always know what we know; furthermore our belief states are invariably inconsistent.

    Therefore it is misleading to think of knowledge as an axiomatic system.
  • What advance in epistemological or metaphysical knowledge did David Hume bring us?
    As an aside, the IEP says

    " here it is important to remember that, in addition to cause and effect, the mind naturally associates ideas via resemblance and contiguity. Hume does not hold that, having never seen a game of billiards before, we cannot know what the effect of the collision will be. Rather, we can use resemblance, for instance, to infer an analogous case from our past experiences of transferred momentum, deflection, and so forth. We are still relying on previous impressions to predict the effect and therefore do not violate the Copy Principle. We simply use resemblance to form an analogous prediction. And we can charitably make such resemblances as broad as we want. Thus, objections like: Under a Humean account, the toddler who burned his hand would not fear the flame after only one such occurrence because he has not experienced a constant conjunction, are unfair to Hume, as the toddler would have had thousands of experiences of the principle that like causes like, and could thus employ resemblance to reach the conclusion to fear the flame. "

    https://www.iep.utm.edu/hume-cau/

    I am not a Hume expert, I'm just a googler. But doesn't this defence of Hume miss the point, or at least fail to stress the epistemological target of Hume's argument?

    Assuming Hume was a rational and non-superficial thinker, he would have granted the possibility that we can "infer" causal relationships even without appeal to resemblance. For example, when a baby is first born, it might initially behave instinctively to avoid fire, implying that it already has a concept of causation.

    Surely, any behaviour, especially avoidance behaviour, can be interpreted as embodying a causal understanding of the world, even when the behaviour is without precedent and there are no earlier resemblances to draw upon.

    The dispute I'm raising here, concerns conflicting interpretations of 'having' knowledge. On a behavioural interpretation of knowledge, the fire-avoiding baby might be said to "already know" that fire hurts. Yet on a mentalistic, verbal or otherwise representational interpretation of knowledge, the fire-avoiding baby is completely ignorant of fire hurting, even when it instinctively acts to avoid fire.

    So assuming Hume was a good philosopher, his concepts of resemblance and constant-conjunction must have been mental concepts referring to the mentalistic interpretation of knowledge, where they make sense. For instance, in our modern world of virtual reality it might be the case that we instinctively avoid virtual fire as we might also instinctively avoid virtual spiders and virtual snakes, even though we consciously appreciate, via resemblance and constant-conjunction, that these virtual entities are likely to be harmless.
  • What advance in epistemological or metaphysical knowledge did David Hume bring us?
    To my way of thinking, subjective idealism isn't a hypothesis about nature but merely a grammatical reminder that we employ empirical criteria in our understanding of each and every concept - even including our representational concepts concerning 'presently unperceived' real objects.

    The subjective idealist isn't denying the conclusions of causality, for he isn't denying the intelligibility or epistemological significance of counterfactuals. He is merely insisting that abstract objects and causality are intelligible and even undeniable precisely because they are semantically reducible to actual experiments and to thought experiments whose sense in both cases hinges upon mental and sensory experience, even if definitions of physical concepts in terms of particular sensations are impossible to give.

    The idea that a thought experiment or actual experiment can disprove subjective idealism, is therefore an oxymoron as far as the subjective idealist is concerned.
  • Does neurophilosophy signal the end of 'philosophy' as we know it ?
    To what extent do the objectives of neuroscience overlap with the objectives of transcendental phenomenology?

    In my opinion, if we are talking about a purely naturalised conception of neuroscience whose only objective is the description of the stimulus-response mappings of the brain of a third-person, then these objectives have nothing in common.

    Wittgenstein, in the Blue Book, briefly raises the tantalising idea of a solipsistic "first person" neuroscience in which the experimental neuroscientist and the test-subject are one and the same - for example, placing an electrode into your own scalp using a mirror, whilst recording your thoughts and observations.


    In my opinion, "Solpsistic neuroscience" cannot be expected to produce results that are commensurable or even consistent with standard naturalised neuroscience. I don't however, see how naturalised neuroscience can claim epistemological superiority, for that would be question-begging according to the transcendental phenomenologist.
  • Witnesses in mathematics
    That for a certain broad class of systems with certain qualities the consistency of same cannot be proved within the system is demonstrated as a consequence of Godel's theorem's. But you do not appear to be acknowledging that the proofs in question are meta-mathematical.tim wood

    Well, any demonstration of axiomatic incompleteness is a purely syntactical demonstration, in spite of any semantic or meta-logical pretenses to the contrary. It purely consists in the exhibition of a well-formed formula f and its negation ~f, in a circumstance where neither is currently known to be syntactically inconsistent in relation to a given set of axioms.

    The fact that it is possible to prove the consistency of PA relative to the consistency of another system is, again, an equally syntactical derivation, whilst the syntactical notion of absolute inconsistency is also potentially observable, by deriving f & ~f. But I don't understand the notion called absolute consistency. For that seems akin to the idea of 'completed infinity'; both of these notions are impossible to determine, or even to define in a non-circular fashion, and only serve to disguise the under-determined semantics of logic that is actually decided a posteriori when a sentence is actually derived, proved, or else used in a fashion unrelated to logic.

    No it isn't. I'm thinking you've read the proof and worked through it at least some - but maybe not. The universal quantifiers are then qualified via recursion schema. .tim wood

    I was referring to the cause of incompleteness, which is due to unbounded universal quantification in cases where universal quantifiers cannot be eliminated. Although here I made a mistake, in that the origin of incompleteness in weaker systems than Peano Arithmetic lies with the universal quantifiers in the other arithmetic axioms, as opposed to the axiom of induction - as evidenced by the incompleteness of Robinson arithmetic, which does not possess the induction axiom. None of this changes anything of significance, though.

    And significantly, while your Prov("X," "S") is recursive, according to Godel, Godel also says the related Provable ("S"); that is, "S is a provable theorem," is not recursive.tim wood

    And hence the reason why the universal quantifier over ~Prov('S','G') to form ~Prov('G') should not be interpreted as literally "passing over" every number, which was my original point. For Prov('G') might still be derivable, even though Prov('S','G') isn't actually derivable for any 'S', if PA turns out to be omega-inconsistent. Likewise, ~Prov('G') cannot express the fact that Prov('S','G') is not derivable for some 'S', for then the diagonalisation lemma yields the contradiction t => ~Prov('t') for some t.
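    For readers following the notation, the relationship being relied on here is:

        Prov('G')  abbreviates  ∃S Prov('S','G')        ~Prov('G')  abbreviates  ∀S ~Prov('S','G')

    and omega-inconsistency with respect to this predicate is the situation in which ∃S Prov('S','G') is derivable even though ~Prov(n,'G') is derivable for each particular numeral n.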

    The sentence t => ~Prov("t") has an infinitely expandable fractal-like structure due to the sentence being fed its own Godel number, and there is no known reduction of its quantifiers to those of the axioms. Therefore, whether or not PA is consistent, we don't have a semantically interpretable sentence. All we have is a syntactically verifiable self-negating sentence that has no meaningful interpretation.

    The ultimate mistake is this: in logic, the absence of a witness should not be conflated with the witnessing of an absence. Only by making this conflation, as is done in classical logic, can Godel's sentence assume its controversial and illogical interpretation as proving its non-provability.

    It is disingenuous sophistry of textbooks to suggest that t => ~Prov("t") has the high-level interpretation "t implies that t doesn't have a proof", even with the consistency disclaimers. Worse, it disguises the synthetic a posteriori nature of reasoning.

    An existential quantifier cannot make a non-trivial existential claim. Either the quantifier concerned is analytically reducible to an instance of the axioms, or else the quantifier is logically meaningless and should not even be informally interpreted.

    So it appears to me, so far, that you're the guy that says "prove it" to the respective proofs until they're driven into their own grounding in axioms and sense, at which level the call for proof is an error. Are you that guy?

    That is, I do not take you to be challenging or disqualifying Godel, but rather making some assumptions both counter-to and beyond it, for other purposes. As you say above,
    tim wood

    On the contrary, I'm about eliminating unprovable assumptions from popular understandings of logic.
  • What is the difference between actual infinity and potential infinity?
    The Wittgensteinian intuition regarding the identity sign, roughly put, and defended here by MU, is that it is the meta-logical expression of synonymy, which, upon full analysis of the expression concerned, is eliminated to yield substitution operations among 'non-equal' logical terms, each denoting distinguishable objects. This point of view has been shown to be as expressive and as consistent as the logical interpretation of identity, even if on occasion standard theorems of mathematics need amending.
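    A standard illustration of the eliminability claim (the example is mine, not MU's): on the exclusive reading of variables, where distinct variables must take distinct values, the sentence

        ∃x∃y (f(x) & f(y) & x≠y)

    is rewritten without the identity sign as

        ∃x∃y (f(x) & f(y))

    with difference of objects expressed by difference of signs, as in Tractatus 5.53.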
  • Witnesses in mathematics
    Not quite. Not "For all Godel numbers p," but instead, for all x, x being any natural number. Some xs will be ps, most not. Them that aren't won't be (encode) a proof, and them that are also won't be a proof.tim wood

    Sorry, that was actually a typo. Nevertheless this exchange perhaps serves as a useful reminder that any number represents a legitimate theorem relative to the provability-predicate of some Godel-numbering system.

    Furthermore, for any provability predicate Prov('X','S') interpreted as saying 'X' encodes a proof of 'S', it isn't actually knowable which numbers represent legitimate proofs, due to the possibility that Peano arithmetic is inconsistent and proves absurdity, together with Godel's second incompleteness theorem which forbids the possibility of PA representing its own consistency. Neither humans nor God can ever claim knowledge of PA's consistency, for all we can have are proofs of consistency relative to the consistency of other systems, which begs the question.

    Of course, it is standard practice to explicitly state consistency as an assumption when we informally interpret PA and discuss incompleteness, so your remark is valid, after a bit of clarification. My political agenda here is actually concerned with how to interpret PA without assuming consistency, in light of Godel's results. For there are arguably many conceptual benefits to be had by dropping the consistency assumption, which demonstrate the fundamentally empirically contingent, vague and indeterminate nature of logic and mathematics, better understood as being a posteriori in nature.

    Incompleteness is the result of unlimited universal quantification in Peano's axiom of induction, which takes us from the non-controversial constructive semantics of quantifier-free Primitive Recursive Arithmetic, which represents the predictably terminating algorithms, to the controversial non-constructive interpretations of PA, which represent every possible algorithm. Logic should therefore replace the simplistic sign of universal quantification with a richer collection of signs that distinguishes the different use-cases of the original sign, whilst also making explicit the relation of the Axiom of Choice to universal quantification. David Hilbert in fact used a system closer to this, called the epsilon-calculus, during his attempts to prove consistency, and it is in fact making a comeback.
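    For reference, and not as anything argued above: Hilbert's epsilon-calculus replaces the quantifiers by epsilon terms,

        ∃x φ(x)  becomes  φ(εx φ(x))        ∀x φ(x)  becomes  φ(εx ~φ(x))

    where εx φ(x) stands for a term chosen to satisfy φ if anything does, which is what makes the choice-like character of quantification explicit.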

    And the universal quantifier is both part of the proof, and proved separately within the proof.tim wood

    Consider universal quantification over the negative-provability predicate in PA, which is informally interpreted as saying "For every 'S', 'S' doesn't prove 'G'".

    We know for a fact that this has no syntactical expression in PA, since if PA is consistent then the statement isn't decidable (via the Diagonalization lemma) and if PA is inconsistent then PA is unsound and has no interpretation whatsoever. We also know that for any particular 'S' and 'G', PA can decide whether or not 'S' derives 'G'.

    Therefore the universal quantifier above cannot have the ordinary meaning of 'every', which ought to have been the central conclusion of Godel's incompleteness theorem, in stark contrast to the commonly accepted idea that Godel's sentence is self-referential with definite meaning.

    So we have two choices. Either we stick to our a priori philosophical concepts and abandon mathematical logic and its physical embodiment as being an unsuitable language for expressing and justifying philosophical truth. Or we revise our philosophical intuitions to match what PA and its physical embodiment can and cannot express. I'm saying the latter.


    I read this as your taking exception to the use of existential (i.e., existential and universal) quantifiers unless it/they "abbreviate an independent proof of the fact concerned...". So, if I say x>3 is true for all x greater than three, this is neither true nor valid, subject to a proof of the "for all"?tim wood

    This is acceptable, because the contradiction it expresses can be represented without universal quantifiers. It is analogous to having a function that by design terminates, and that converts an arbitrary number into a number or another arbitrary number. The conflation of 'all' with 'arbitrary' is perhaps the central source of controversy and misunderstanding in logic.
  • The Difference Between Future and Past
    If "the present" refers to the specific context in which it is used,i.e. it is an indexical, referring to different things on each and every occasion that it is uttered, then to speak of the present as 'changing', is merely to point out that we can remember using the words "the present" differently. The idea of a "changing present" might be eliminated if we instead uttered unique indexicals in place of it on each and every occasion.

    On the other hand, whereas we ordinarily speak of "the present" itself as changing, as if "the present" were a rigid designator, for some reason we tend to merely think that our knowledge and remembrances of an immutable past have changed, which indicates that we tend to think of "the past" as partly an indexical in relation to our present state of knowledge and remembrance, and partly a rigid designator referring to an immutable and transcendental temporal object.

    Now the main point of contention here, as I see it, is whether or not the concept of the past deflates to our interaction with "present" appearances, including memories. If it does, then we can eliminate "the past" in the sense of an immutable entity that transcends phenomena, and as with the 'changing present', we would merely be grammatically wrong to speak of "the past" as changing.
  • Witnesses in mathematics
    ↪sime I have to refer you to the proof itself. It relies on recursion and ω-consistency. I quote this: "Every ω-consistent system is obviously also consistent. However, as will be shown later, the converse does not hold." (The Undecidable, Ed. Martin Davis, 1965, p. 24).

    PA may be inconsistent, but I take Godel's qualifications on his system P, of which he says, "P is essentially the system which one obtains by building the logic of PM around Peano's axioms.." (10), & ff., as sufficient to regard his claims as rigorous. As he observes later (p. 36), "In particular, the consistency of P is unprovable in P, assuming P is consistent (in the contrary case, of course, every statement is provable).

    should not be informally interpreted as saying "n is not a proof of S for n < ω".
    .... and this should not feel in the least bit troubling or surprising , for there is no formal justification to support the heuristic and informal interpretation of universal-quantifiers as denoting each and every member of an infinite domain , unless that is to say, the universal quantifier in question was constructed using the axiom of induction.
    — sime

    Agreed. Informal remarks about his proof are often not quite right. But I think it's pretty clear that the axiom of induction or something like is a main piece of his proof.
    tim wood

    Peano's axiom of induction isn't an axiom of logic, and plays no part in standard proofs of Godel's incompleteness theorem. It can, however, be shown to be responsible for causing incompleteness, in the presence of the other axioms of Peano arithmetic.

    The reason this matters, is because in logic the use of an existential quantifier should not be informally interpreted as bearing witness to a fact, unless the quantifier is used to abbreviate an independent proof of the fact concerned that does not beg the use of this quantifier in a circular fashion.

    For otherwise we might just as well say that "a universal quantifier has proved a universal truth, because the universal quantifier says so".

    The universal quantifier in "Godel's sentence" G, which supposedly says "For all Godel numbers p, p does not encode a proof of Godel sentence G", isn't an abbreviation of an independent proof of G's non-provability. Therefore this quantifier should not be given this common (mis)interpretation.
  • Sin and emotion.
    Certainly there are similarities between the 'seven deadly sins' and symptoms of mental illness. Was the early Christian concept of 'sin' more pragmatic than the modern Christian concept?