Comments

  • Negative numbers are more elusive than we think
    Another consideration that supports understanding numerical negation as logical negation is how the integers can be constructed from pairs of naturals. Recall that integers can be identified with equivalence classes of natural number pairs, e.g.

    an instance of '2' can be any of (2,0), (3,1), (4,2), ...

    Here, the numbers in a pair (a,b) can be thought of as denoting the scores of two players A and B.

    Negation switches the scores the other way around:

    -2 := any of { (0,2), (1,3), (2,4), ... }

    Zero represents tied results where A's and B's scores are identical, and these results lie on a 45° diagonal line (call it the 'zero line') running through the centre of the positive quadrant of Euclidean space, dividing the quadrant into two non-overlapping 'victory zones', one for each player.

    The magnitude m of a general score (a,b) is its distance from the zero line, and measures how much the winning player won by. Hence we can view this as the score of an adversarial zero-sum game of tug-of-war between A and B, with rope length m, along the axis perpendicular to the zero line.
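    The pair construction above can be sketched in a few lines of Python. This is an illustrative sketch only; the function names `normalize`, `negate`, `add` and `magnitude` are my own, and the magnitude here is measured in score units (the perpendicular Euclidean distance to the zero line would differ by a factor of sqrt(2)).

```python
def normalize(score):
    """Reduce (a, b) to its canonical representative, e.g. (3, 1) -> (2, 0)."""
    a, b = score
    m = min(a, b)
    return (a - m, b - m)

def negate(score):
    """Negation swaps the two players' tallies."""
    a, b = score
    return (b, a)

def add(s, t):
    """Component-wise addition of scores, then reduce to canonical form."""
    return normalize((s[0] + t[0], s[1] + t[1]))

def magnitude(score):
    """How much the winning player won by."""
    a, b = score
    return abs(a - b)
```

    For instance, add((2,0), negate((2,0))) gives (0,0), which is the pair-construction reading of x + (-x) = 0.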



    Compare this to the case of 'Complex Number Games', where in contrast:

    i) A game with scores (a,b) is written a + j*b, where j is the imaginary unit.

    ii) Either or both of a and b can be positive or negative, which means A and B face a common opponent C.

    iii) B's score is perpendicular to A's due to multiplication by j, which means that A and B might play cooperatively.

    iv) The magnitude n of the score (a,b) is the Euclidean length, i.e. sqrt(a^2 + b^2). This represents the total reward with respect to an n-square-sum three-player game.

    v) The phase angle of the result determines how the reward is distributed among A, B and C.

    vi) The imaginary unit j serves as negation for three-player games, dividing the 2D Euclidean space of real-valued score outcomes into the following quadrants (where a quadrant is taken to include its clockwise-next axis and to exclude zero):

    {A doesn't lose and B wins, A loses and B doesn't lose, A doesn't win and B loses, A wins and B doesn't win}

    Multiplying any of these quadrants by j yields the next quadrant to the right (using circular repetition).
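    The cyclic rotation in (vi) can be checked directly with Python's built-in complex type, which spells the imaginary unit as 1j. The four sample scores below are my own arbitrary choices, one per quadrant.

```python
# One sample score per quadrant of the (A-score, B-score) plane.
scores = [3 + 2j, -2 + 3j, -3 - 2j, 2 - 3j]

# Multiplying each score by j moves it on to the next quadrant in
# cyclic order; four multiplications return it to where it started.
rotated = [z * 1j for z in scores]
# rotated == [-2+3j, -3-2j, 2-3j, 3+2j]
```

    Each rotated entry is the next element of the original list (cyclically), which is the 'circular repetition' the passage describes.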
  • Is a hotdog a sandwich?
    Definitions are at the service of moral and hedonistic imperatives.

    My father insists that Darts isn't a sport. If I ask him why, he argues that when playing a sport you need to take a shower afterwards. On further questioning, he admits that the purpose of his narrower definition of "sport" is to devalue the achievements of non-athletes.
  • Negative numbers are more elusive than we think


    In game semantics, the flipping refers to changing the perspective from which the game is viewed. Consider the game of chess, where a theorem denoted W represents the winning positions for white and ~W the winning positions for black. Nothing transactional is implied when changing sign.
  • Negative numbers are more elusive than we think
    The shift to integers is a consequence of the fact that natural numbers are used to denote both the production of resources and the consumption of resources, where the producing process is often independent of the consuming process. Understood in this way, numerical negation can be interpreted as a form of logical negation for the Natural Numbers, where the numerical equation x + (-x) = 0 is analogous to the logical theorem X AND ~X => 'contradiction', where X is a well-formed formula.

    Recall that in many logical systems, if a contradiction is derivable, i.e. if 'zero' in that language is proved to exist, then every well-formed formula in that language and its negation are derivable via the principle of explosion, which implies that the well-formed formulas of an enumerable and inconsistent language are isomorphic to the integers with additional structure, i.e. they form an abelian group.

    Of course, in mathematics 'zero' isn't normally used to mean contradiction (in physics and accounting the opposite is often true), and we don't regard the integers as unhealthily inconsistent. So the analogy between logical and numerical negation might at first glance appear to be syntactic rather than semantic, but the two nevertheless have strong semantic similarities, for both numerical and logical negation are interpretable as denoting the control of resources by an opponent in a two-player game.

    The difference is, the integers and their equations were invented chiefly for the purpose of expressing draws in games (such as balanced production and consumption), whereas logic with the principle of excluded middle was invented for the purpose of expressing games without draws.
  • Is there an external material world ?
    Whereas the direct realist proper is saying something comparable to "we read history", as if reading a textbook is direct access to its subject, which is of course false.Michael

    If somebody insists to me that I can only talk about my memories of my childhood, as opposed to my actual childhood, am I in a position to agree with that person?
  • Is there an external material world ?
    Dennett is an indirect realist, and his view of goals and beliefs is that these features of a cognitive system can be reduced to the collective activity of a network of millions of dumb bits which can’t themselves be said to have goals or beliefs. It can be useful for certain purposes to treat such dumb assemblages as if they possessed such intrinsic properties.Joshs

    Does Dennett interpret the objects of perception to be theoretical entities, such as those defined according to science and ontological naturalism? If so, then that might explain his use of 'indirect realism', in the sense that the entities of a naturalistic ontology are only defined up to their structural/mathematical Lockean primary qualities and are left undefined in relation to phenomenological secondary qualities, effectively deferring their phenomenological meaning to the in situ judgements of language users who apply the terms (and who ultimately apply theoretical terms as a result of perception, so I still can't see this as an indisputable example of indirect realism).

    And of course there is the ambiguity as to the location of the agent's sensory surface. If the agent is looking down a microscope, does the definition of the perceptual process include the microscope or not?

    But I think those considerations are tangential, for direct realists take the object of perception to be the stimulus that directly elicits a behavioural response from an agent, however the boundary of the agent is defined. Would Dennett disagree with direct realists who define perception in this way?
  • Is there an external material world ?
    From a behavioural perspective, the notion of an agent committing 'perceptual errors' only serves to account for its stimulus-responses that are unexpected or undesired in the minds of onlookers who interpret the agent's behaviour as being goal-driven, either as part of a causal explanation of its behaviour, or as part of a prescription for what the agent ought to do if it is to act in accordance with the onlookers' wishes (for example, the agent might be a robot and the onlookers its programmers).

    Relative to this observation, it seems that indirect realism is ontologically committed to the folk-psychological notions of goal driven behaviour and mental states. For according to indirect realism, agents aren't merely said to commit perceptual errors relative to the expectations of onlookers and their linguistic conventions, but are believed to really make those errors as a result of possessing cognitive states that have goals and beliefs as intrinsic properties.
  • Evidence of conscious existence after death.
    Reincarnation isn't a falsifiable hypothesis with respect to recollection of past lives due to the fact that it's compatible with both memories of past lives (good recall) and also no memories of past lives (poor/defective recall).

    Reincarnation is pseudoscientific woo woo!
    Agent Smith

    Yes. To articulate where I believe your position to be heading: reincarnation can be supplied a workable definition, e.g. if someone's brain activity, as defined and measured by a particular instrument, stops for at least 10 minutes and then later continues, then science is free, if it so chooses, to define this as an instance of "reincarnation". Such a definition can then be used when testing a hypothesis that a given subject has 'reincarnated'.

    The problem, then, isn't so much that reincarnation cannot be defined so as to support testable hypotheses, but that with respect to any such definition a hypothesis as to whether a given subject has 'reincarnated' merely relates empirical data to the definitional criteria, and says nothing in support of, or in opposition to, the metaphysical reality of the said definition.

    The same problem exists when deciding whether a subject is self-identical within a single biological lifetime. So hypothesis testing cannot lend support either to the view that two subjects are identical, or to the view that they are different, except in the trivial and tautological sense pertaining to linguistic convention.
  • Evidence of conscious existence after death.
    I say no one exists without the living body.180 Proof

    I can certainly apply your extensional definition of a person to the people I meet. In which case, if I notice their body to be deconstructed I can say they are dead by definition. As an aside, how do you suggest that I should extend this definition in the case their body is reconstituted, considering the fact that the biological identity of any person is open and under-determined?

    On the other hand, what does it mean if I apply this definition to my own body? Does the logic still work in the same way? For I sense a person's body in relation to, say, my field of vision. But can I speak of sensing my field of vision?
  • Evidence of conscious existence after death.
    Are you meaning "life" in a strictly biological sense, or could disembodied consciousness work?TiredThinker

    I'm referring to the problematic concept of personal identity over time. For the presentist, a tensed A series, such as [yesterday, now, tomorrow], doesn't move (or rather, is unrelated to the notion of change), because those terms are understood to be indexicals that are used to point at and order present information, e.g. "the paper over there on the kitchen table is yesterday's newspaper".

    This is in line with McTaggart, who argued that the A series can't be treated as moving, for otherwise temporal logic becomes inconsistent in allowing propositions such as "now isn't now" and "yesterday is tomorrow".

    Once the A series is held fixed, such that yesterday is always yesterday, now is always now, tomorrow is always tomorrow etc, one can continue to speak of the passing of a train, but one can no longer speak of the passing of subjective time. Relative to this grammar, one can speculate about what happens in one's future, but one cannot speculate about the existence of one's future.
  • Evidence of conscious existence after death.
    In my view, the question "is there life after death or not?" is meaningless, due to the fact that I cannot conceive of a "next" experience, nor of a "previous" experience.

    For example, I can remember what I ate earlier today at six o'clock, but I cannot conceive of having had another earlier experience before this one that happened at six o'clock - all I can do is recall now what I ate earlier at six o'clock. Likewise, I expect that the sun will rise tomorrow, but I cannot conceive of another experience after this one that will occur concurrently with the sun rising. All I can do is expect now that the sun will rise tomorrow.

    So if "life after death" is to mean anything to me, it cannot refer to an ordered set of experiences, which is nonsensical since there is only one. So it must refer to some order of events that I can perceive, and yet I cannot conceive of the universe having an ending or a beginning, hence I cannot make sense of the question.
  • Phenomenalism
    What the realist calls "mind independent" is what the phenomenalist might call "empirically undetermined a priori".

    It is empirically under-determined a priori what observations the entities of the Standard Model refer to. Yet the same is equally true regarding the ordinary public meaning of "redness". For what precisely, under all publicly stateable contexts, is the set of experiences to which "redness" refers?

    Phenomenalism, i.e. logical positivism, has been said to fail as an epistemological enterprise, due to the impossibility of defining how theoretical terms should be reduced to observation terms, where the latter refer to pre-theoretic 'givens' of private experience. But a reply is to say that this only rules out phenomenalism with a priori definable semantics. One can nevertheless argue that the meaning of the standard model is empirical (after all, isn't it supposed to answer to experience?), but where its empirical meaning is determined in situ and post hoc through judgements for which rules cannot be stated a priori.
  • Phenomenalism
    Suppose that you begin to question whether you are awake or dreaming and conclude that you are awake. Then suppose that a while later you experience 'waking up' and conclude that your earlier self was dreaming. Does this mean that your earlier self's beliefs were wrong during the course of the previous dream, or does this only mean that your earlier self is presently wrong in relation to your present observation of 'waking up' ? Then recall the phenomena of false awakenings...

    In other words, when judging the veracity of a perception, does the verdict only hold at the time of the verdict?
  • Getting a PHD in philosophy


    The difficulty of getting a PhD in any subject is inversely proportional to the corruption of the respective university department and the charlatanism and toxicity of the PhD supervisor. In many cases the PhD is just a certificate awarded to survivors of abuse.
  • Phenomenalism
    Berkeley already answered the indirect realist critique of phenomenalism almost 340 years ago, e.g

    "we may say that my gray idea of the cherry, formed in dim light, is not in itself wrong and forms a part of the bundle-object just as much as your red idea, formed in daylight. However, if I judge that the cherry would look gray in bright light, I’m in error. Furthermore, following Berkeley’s directive to speak with the vulgar, I ought not to say (in ordinary circumstances) that “the cherry is gray,” since that will be taken to imply that the cherry would look gray to humans in daylight."

    Berkeley grammatically ruled out indirect realism in his constructive logic of perception via his so-called "master argument", which amounts to defining the meaning of an 'unperceived object' in terms of present acts of cognition in combination with immediate sense-data.

    His uniform treatment of the cases of veridical and non-veridical perception as both pertaining to immediate ideas implies that for Berkeley "reality" means coherence of thought and perception.
  • Phenomenalism
    It is a paradox that we readily interpret present information as referring to absent entities, e.g. the photograph of my grandmother, who has long since departed...

    In my view, dissolving the paradox requires defining the notion of 'absence' in terms of present information, whereupon the notion of reference is reduced to a set of relationships within present information.

    From such a perspective, the concepts of doubt and epistemic error are reinterpreted as semantic notions rather than metaphysical notions related to unobserved truth values. Essentially, semantics becomes holistic, immanent, and under-determined, comprising partial definitions that change over time in such a fashion as to alleviate the concerns of idealists who reject transcendental signification, and of realists who reject epistemic infallibility.
  • On whether what exists is determinate
    In a formal calculus such as the lambda calculus, the counterpart of a 'universal' is a term or formula that can be reduced, via computation, to some constant term standing for a particular. This process is known as beta reduction.

    For example, a mathematical function such as f(x)= 2x can be regarded as a 'universal' term that when applied to the 'particular' object 2 is eliminated to produce the 'particular' object 4.

    Beta reduction is a useful analogy for understanding the cognition of language. For example, if I am looking for my red jumper, then I understand "red jumper" in the sense of a universal until I find the particular object I am looking for - at which point "red jumper" reduces to an indexical such as this, which points directly, without further linguistic mediation, to the non-linguistic 'term' concerned.
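    The f(x) = 2x example above can be sketched with a Python lambda, where function application plays the role of the reduction step (an illustrative sketch, not a full lambda-calculus evaluator):

```python
# The 'universal' term f(x) = 2x as an anonymous function.
f = lambda x: 2 * x

# Applying f to the 'particular' 2 is the beta-reduction step:
# (λx. 2x) 2  ->  4
result = f(2)  # result == 4
```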
  • Fitch's "paradox" of knowability
    I’ll try and come back to the rest of your post, but if the above is correct, then this would seem to contradict Michael’s claim that a proposition can be known to be true at one time and then known to be false at a later time. If K refers only to what is eventually known, then a proposition which is ultimately known to be false cannot earlier be known to be true.Luke

    As Wittgenstein said in On Certainty:

    "I know" seems to describe a state of affairs which guarantees what is known, guarantees
    it as a fact. One always forgets the expression "I thought I knew".

    If the epistemic usage of "to know" is considered to be the same as "to be certain", then knowledge changing over time is no big deal for the verificationist and simply means that one's beliefs are changing as the facts are changing. But this doesn't necessitate contradiction.

    For instance, if p is "Novak is Wimbledon Champion", then p today, and hence K p (assuming verificationism). Yet on Sunday it might be the case that ~p and hence K ~p. But any perceived inconsistency here is merely due to the fact that the sign p is being used twice, namely to indicate both Friday 8th July and Sunday 10th July.

    If instead p is "Novak is Wimbledon Champion with respect to the years 2011, 2014, 2015, 2018, 2019, 2021" and q is "Kyrgios is 2022 Wimbledon Champion" then we will still have K p whatever happens, even though the domain of the operator 'K' has enlarged to include q.

    Of course, not every observation, such as the contents of a fridge, has an obvious time-stamp that places the observation into an order with every other observation of the fridge, but contradictions can at least be averted by using fresh signs to denote present information. "Never the same fridge twice".
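    The fresh-sign convention can be pictured as a knowledge base keyed by time-stamped proposition names rather than bare signs. The naming scheme below is purely my own illustration:

```python
# Each verified proposition gets a fresh, time-stamped sign instead of
# reusing the bare sign p.
known = {
    "p@Fri-08-Jul": True,   # "Novak is Wimbledon Champion", asserted Friday
    "p@Sun-10-Jul": False,  # the same sentence re-asserted on Sunday
}
# No contradiction arises: the knowledge base never contains both K p
# and K ~p for one and the same sign, because the two assertions use
# distinct signs.
```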
  • Fitch's "paradox" of knowability


    Interesting observation.

    - In Fitch's case, the epistemic operator K is usually assumed to be factive and used in the future tense, standing for "Eventually it will be known that ...", where K's arguments are general propositions p that can refer to any point in time. So Fitch's paradox is a paradox concerning the eventual knowledge of propositions.

    - In Moore's case, the epistemic operator B is assumed to be non-factive and to refer only to the present state of the world, standing for "It is presently believed that", where B's argument is the present state of the world s, which changes over time. So Moore's paradox is a temporal paradox referring to the indistinguishability of the concepts of belief and truth in the mind of a single observer with respect to his understanding of the present state of the world, in spite of the fact that the observer distinguishes these concepts when referring to the past and future states of the world.

    - Only in the case of K is there the general rule K p --> p, since knowledge is assumed to be true, unlike beliefs, which aren't generally regarded as truthful, except in the present-tense case if Moore's sentences are rejected for all s, in which case it is accepted that for all s, ~(s & ~B s). This premise is equivalent to saying that for all s, (s --> B s).

    - The argument for Fitch's knowability conclusion (p --> K p) starts from a weaker knowability premise, that (p --> possibly K p). On the other hand, Moore's sentences, if rejected, are rejected a priori as being grammatically inadmissible, meaning that (s --> B s) is accepted immediately and doesn't require derivation.
  • Speculations in Idealism


    lol. Definitional equality isn't a reflexive relation as definiendum isn't definiens. Otherwise not only is Berkeley refuted, but so is the entire Oxford English Dictionary.



    I believe "qualia" to be the closest modern translation of Berkeley's "ideas", as that term serves as an indexical that carries no theoretical meaning, unlike the modern understanding of 'mental states', which is theory-laden with inferential semantics.
  • Speculations in Idealism
    At the heart of the problem is the logic underlying the evidence/fact distinction.

    In saying for instance, that the redness of a strawberry isn't semantically reducible to a perception of the strawberry, one is pointing out that the meaning of 'red' is predictive and refers to the conditional expectation of seeing other phenomena in relation to the strawberry if committing hypothetical courses of action, such as performing a chemical or spectroscopic analysis of the strawberry under laboratory conditions.

    For the idealist, a conditional expectation is by definition part of the present, which includes the state of the observer and his environment. This implies that if the observer who previously judged the strawberry to be red decides upon further investigation that the strawberry is in fact grey, his previous judgement that the strawberry is red isn't falsified by his later change of mind. For the idealist, the observer's judgements changed because his situation changed, and so he hasn't committed a 'real' epistemic error. So for the idealist, perceptual errors and failed predictions aren't the result of failing to predict perception-transcendent 'truth' but instead merely refer to classes of changing circumstance. This viewpoint has the physical advantage of interpreting human perception no differently from other physical measurement apparatus, such as Geiger counters, which are never said to be 'wrong', but only faulty under conditions in which their responses are unexpected or misunderstood.
  • Speculations in Idealism
    As food for thought. Bernardo Kastrup writes:

    ...as I’ve elaborated upon more extensively in a Scientific American essay, our sensory apparatus has evolved to present our environment to us not as it is in itself, but instead in a coded and truncated form as a ‘dashboard of dials.’ The physical world is the dials.

    Once this is clarified, analytic idealism is entirely consistent with the observations of neuroscience: brain function is part of what our conscious inner life looks like when observed from across a dissociative boundary. Therefore, there must be tight correlations between patterns of brain activity and conscious inner life, for the former is simply the extrinsic appearance of the latter; a pixelated appearance.
    Tom Storm


    That very much echoes Wittgenstein's commentary in The Blue Book, which briefly touched upon the logic and sense-making of neuroscience. Wittgenstein's entire 'Ordinary Language philosophy' that came after the Blue Book can almost be described as elaborating 'analytic solipsism', e.g. in PI:

    295. "I know .... only from my own case"—what kind of proposition
    is this meant to be at all? An experiential one? No.—A grammatical
    one?

    I say "almost", due to the fact that if realism, idealism and solipsism are understood to refer to grammatical stances, and if one is free to choose one's grammatical stance in accordance with one's circumstances, then the so-called "ontological commitments" that are entailed by these contrary positions can only refer to the state of mind and intentions of their asserters, in which case the public debate between realism and idealism amounts to psychological differences among the public that have no relevance to the empirical sciences at large.
  • Welcome Robot Overlords
    I think you are referring to Hubert Dreyfus' work, not the American actor from Close Encounters... :wink:Tom Storm

    lol. maybe that's because the movie was better.
  • Welcome Robot Overlords
    In line with Richard Dreyfus's criticisms of computer science in the seventies, which predicted the failure of symbolic AI, AI research continues to be overly fixated upon cognitive structure, representations and algorithms. This is due to western culture's ongoing Cartesian prejudices, which falsely attribute properties such as semantic understanding, or the ability to complete a task, to learning algorithms and cognitive architectures per se, as opposed to the wider situational factors that subsume the interactions of machines with their environments, including the non-cognitive physical processes that mediate such interactions.

    Humans and other organisms are, after all, open systems that are inherently interactive. So when it comes to studying and evaluating intelligent behaviour, why are the innards of an agent relevant? Shouldn't the focus of AI research be on agent-world and agent-agent interactions, i.e. language-games?

    In fact, aren't such interactions the actual subject of AI research, given that passing the Turing Test is the very definition of "intelligence"? In which case, the Turing Test cannot be a measure of 'intelligence properties' that are internal to the interrogated agent.

    For instance, when researchers study and evaluate the semantics of the hidden layers and outputs of a pre-trained GPT-3 architecture, isn't it the conversations that GPT-3 has with researchers that are the actual underlying object of study? In which case, how can it make sense to draw context-independent conclusions about whether or not the architecture has achieved understanding? An understanding of what in relation to whom?
  • Welcome Robot Overlords
    The issue is trivial; if you feel that another entity is sentient, then that entity is sentient, and if you feel that another entity isn't sentient, then the entity isn't sentient. The Google engineer wasn't wrong from his perspective, and neither were his employers who disagreed.

    In the same way that if you judge the Mona Lisa to be smiling, then the Mona Lisa is smiling.

    Arguing about the presence or absence of other minds is the same as arguing about aesthetics. Learning new information about the entity in question might affect one's future judgements about that entity, but so what? Why should a new perspective invalidate one's previous perspective?

    Consider for instance that if determinism is true, then everyone you relate to is an automaton without any real cognitive capacity. Coming to believe this possibility might affect how you perceive people in future, e.g. you might project robotics imagery onto a person, but again, so what?
  • Shouldn't we speak of the reasonable effectiveness of math?
    Berkeley's argument "the mind....is deluded to think it can and does conceive of bodies existing unthought of, or without the mind, though at the same time they are apprehended by, or exist in, itself" may be countered by common sense justifications.RussellA

    Berkeley's 'esse is percipi' principle wasn't meant in the sense of a speculative truth-apt empirical proposition, but as a grammatical norm for eliminating i) Cartesian doubt regarding the existence of the external world that inevitably arises when the world is thought of as being only knowable indirectly via intermediate mental representations and ii) Lockean doubt regarding the existence of either primary or secondary qualities, that arises when the 'subjective' content of perception is believed to be ontologically separate from 'objective' mathematical structure.

    It is ironic that Berkeley's 'realist' critics misunderstand him by projecting their own deeply entrenched representationalism onto his remarks and then attributing to him the corollaries of their own positions.

    Understood correctly, Berkeley was a defender of common-sense who cannot be interpreted as saying that the world is a 'figment of the imagination', unless the concept of 'imagination' is generalised to such an extent that it includes the content of all involuntary perceptions, to the point that the phrase "figment of the imagination" no longer says anything.
  • The Churchlands


    You will have to elaborate as to why you consider functions to be observer dependent, but not the existence of other minds.

    After all, we recognise the existence of other minds in terms of behavioural stimulus-responses we relate to, which are in turn correlated to the ability of the body in question to perform computation. How is it consistent to regard the computation as observer-dependent, but not the existence of said 'other' mind?

    Also consider borderline AI cases. Suppose that 50% of the population believe an artificial agent to be conscious, but the other 50% disagree. A realist regarding the existence of other minds will conclude that half of the population is right and that the other half is wrong. But there is no reason to assume the existence of a transcendental fact of the matter concerning the consciousness of the agent, above and beyond the observable behaviour. An anti-realist can simply conclude that the agent is 50% human-like in its observable responses as judged by aggregated public opinion.
  • The Churchlands
    Note that consciousness, in humans, or dogs, is not an observer-dependent phenomenon. Whether you (or your dog if any) are conscious is not a matter of interpretation by an external observer.Daemon

    Not necessarily. It is perfectly consistent to adopt an anti-realist stance regarding the existence of other minds, where the existence of other minds is considered to be ontologically dependent on the perceptions of the observer.

    This position has the advantage of being able to refute skepticism regarding the existence of other minds, in identifying the recognition of another mind as partially constituting the very definition of said 'other' mind.
  • The Churchlands
    In threads such as this, it is helpful to begin by investigating the 'opposite' notion of unconsciousness, which at first glance appears to be a simpler concept and which people mistakenly take for granted; for they naively presume that they have a sound understanding of the concept of "unconsciousness", a presumption they then carry forward when using 'unconsciousness' as a baseline for constructing and appraising theories of consciousness.

    First of all, one should examine situations in which they claim not to have been conscious of anything. For example, take the claim that one wasn't conscious of anything while one was sleeping last night. If this were indeed the case, then how could one possibly know it? Isn't empiricism, i.e. conscious verification, supposed to be the most authoritative methodology for making epistemological claims, in which case, isn't unconsciousness an inadmissible concept? Or are we supposed to put faith in pure reason here and unquestionably accept the testimony of others?

    One is tempted to say that one can only deduce one's state of unconsciousness in retrospect, via a rational reconstruction of what happened in the past. This conclusion ought to encourage a focusing of attention on the meaning of retrospection, including the nature and existence of the past itself, a topic which impinges upon debates concerning the nature and existence of time, space and causation.

    At the very least, thinking about sleep and unconsciousness illuminates the conceptual inter-dependencies of philosophies of consciousness with philosophies of time and philosophies of causation, in which the debate between realism and idealism is ever present. Sadly, most neuroscientists appear not to grasp the conceptual scope of their investigations and instead derive trite and unenlightened conclusions.
  • Nothing is really secular, is it?
    The myth of state secularism is but a special case of the is/ought fallacy, for there isn't an objective basis for ethics, including the ethical matters of law and government policy.
  • Logical Necessity and Physical Causation


    To my faint recollection, Hume never denied the ability to associate observations, and neither did he deny the imperative mood. By my understanding, his remarks are only suggestive of scepticism that causal and logical necessity are objective properties of objects, i.e. he would have accepted anti-realist understandings of logic and causation, especially those that make no commitment to synthetic a priori propositions, such as Hacker's interpretation of Wittgenstein.
  • Logical Necessity and Physical Causation
    So only in the context of infinite experiments we could say something is truly random?Haglund

    If you only accept the existence of potential infinity, then you believe that every process, whether real or mathematical, eventually terminates after a finite amount of time, in which case lawfulness and lawlessness are disrobed of their metaphysical status as distinguishable properties attributable to things in themselves.

    E.g. we might say that the eventually terminating process {1,2,3, ...} begins 'lawfully' for its first three elements and then continues unlawfully for an unspecified amount of time. This is only to say that it looks initially similar to another well-known sequence, such as {1,2,3,4,5,6,7,8,9,10}, before continuing either in an unspecified fashion or in a fashion that is expected to look unfamiliar to the average person, before eventually halting.
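    The point about initially similar sequences can be sketched in a few lines of Python. This is only an illustration of my own devising (the function names and the arbitrary continuation are invented for the example): two processes that agree on a 'lawful-looking' finite prefix, where no finite prefix settles which law, if any, the process follows.

```python
def counting_up(n):
    """The familiar 'lawful' sequence 1, 2, 3, ..., n."""
    return list(range(1, n + 1))

def starts_lawful(n):
    """Begins 1, 2, 3 like the sequence above, then continues arbitrarily."""
    tail = [7, 1, 4, 1, 5, 9, 2]  # an arbitrary 'unlawful' continuation
    return [1, 2, 3] + tail[: max(0, n - 3)]

# Both processes are indistinguishable for their first three steps...
assert counting_up(3) == starts_lawful(3) == [1, 2, 3]
# ...yet they diverge afterwards.
assert counting_up(5) != starts_lawful(5)
```

    Observing any finite portion of either process cannot tell you which rule, if any, is generating it.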


    To take a physical example, take the 'law' that every electron has the same mass. Obviously one cannot measure every electron, and so the law isn't empirically verifiable. Following Karl Popper's critique of positivism, we might either regard "every electron has the same mass" as being a norm of linguistic convention that is held true no matter what transpires in the future, or we might regard it as being a semi-decidable empirical proposition that is falsifiable but not verifiable. What positivism took for granted is the view that the semantics of logic and language is absolutely knowable a priori, without being contingent upon and epistemically restricted by the available empirical evidence, even as the world referred to by language and logic is accepted as being absolutely unknowable and empirically contingent.

    But suppose we grant that the meaning of thought and language is as uncertain and as empirically contingent as the world to which it refers, and instead treat logic, language and world on the same epistemological and ontological footing. Then not only will we be unable to empirically verify that every electron has the same mass, but we will also be unable to empirically establish the meaning of the word "every" in the proposition "every electron has the same mass" - for the word "every" can only be given a definite empirical interpretation in cases involving explicit enumeration over a finite number of entities, but in the present case we have no idea as to how many electrons there are. Therefore the meaning of "every electron" is indefinite, unless electrons are said to have the same mass by definition, in which case we have a 'law' of convention rather than of matters of fact.

    In summary, if the rules of logic are treated as being empirically contingent and epistemically bound by the same principles of verification as the theories of the world they are used to formulate, then the empirical meaning of "infinite quantification" can only be interpreted as 'indefinite finite quantification', as opposed to 'greater than finite quantification'. In which case, the question "does every electron have the same mass?" is equivalent to asking "does an undefined number of electrons have the same mass?", which falls short of describing or supporting the metaphysical ideas of lawfulness/lawlessness, which become superfluous and cannot gain empirical footing.

    On the other hand, if the rules of logic are understood to embody norms of conduct and linguistic representation, as opposed to embodying empirical contingencies, then the rules of logic do express lawfulness, namely the conduct and ethics of the logician.

    It is the simultaneous presence of both types of meaning in science and logic that leads to the normative notion of "law" being misconstrued as an empirical matter of fact.
  • Logical Necessity and Physical Causation
    If the sequence is random, no such function exists. Each outcome (B or R) is not determined by a function. Isn't that the definition of a sequence of random choices?Haglund

    Yes, according to the classical understanding of randomness, or rather we should say unlawfulness. In mathematical logic, "randomness" tends to refer to algorithmic randomness, which concerns the compressibility of a definable sequence such as Chaitin's Constant, whereas we are referring to the common understanding of "randomness", which indicates that the values of a sequence are being produced in an algorithmically unspecified manner - something that the intuitionists call lawlessness, and which includes sequences generated step-wise by repeatedly tossing a coin, or through the 'free willed' choices of the mathematician. For intuitionists, the distinction between lawfulness and lawlessness is practical rather than metaphysical or ontological: if a sequence is generated by using an algorithm, then it can be said to be "lawful" in pretty much the same way that a good citizen might be said to be "law abiding" - neither of these examples appeals to the metaphysical notions of logical or causal necessity.

    What you describe is the classical way of thinking that upholds the traditional philosophical dichotomy between lawfulness and lawlessness. The classical ontological distinction between a lawless sequence and a lawful sequence begs the existence of absolute infinity, in order to conclude that a 'completed' infinite extension is possible that can subsequently be tested as to whether it corresponds to a definable function. But if the empirically meaningless notion of absolute infinity is rejected in favour of the empirically meaningful weaker notion of potential infinity, in which infinite sequences are understood to refer to unfinished sequences of a priori unknown finite length, then the previous conclusion can no longer be considered meaningful, and consequently there is no longer an ontological distinction between lawful and lawless processes; we can only speak of similarities between finite observed portions of two or more processes that haven't thus far finished.

    That every choice is based on pure chance? If you assess a finite sequence, BRRBRBRBRRBBRRBRB... (which probably ain't random since I typed it right now) and you find a program leading to this sequence, but can this be done with every sequence? Say that I base my choice on the throwing of a coin. Taking the non-ideal character of the dice into consideration and throwing it randomly (by making random movements). Will there always be a function a pattern, beneath the sequence? Is there non-randomness involved? If the underlying mechanism is deterministic, and we're able in principle, to predict an R or a B, can't we say the initial states of the throws are random?Haglund

    When repeatedly tossing a coin ad infinitum, at any given time one has only generated a finite number of outcomes, which is always identical to the prefix of some definable function. A classicist is tempted to speculate "this infinite process, if continued ad infinitum, might eventually contradict every definable binary total function, and hence be lawless", but such speculation isn't testable, as it again begs the transcendental idea of a completed infinity of throws, only with respect to which lawfulness and lawlessness become ontologically distinguishable.
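    The claim that any finite run of tosses is the prefix of some definable total function can be demonstrated constructively. Here is a minimal Python sketch of my own (the function name and the default value are illustrative choices): given any observed finite prefix, build a total computable 0/1 function that reproduces it exactly, returning a fixed default beyond the observed data.

```python
def definable_match(prefix):
    """Return a total computable 0/1 function agreeing with `prefix`
    on positions 0..len(prefix)-1, and returning 0 everywhere else."""
    table = dict(enumerate(prefix))
    return lambda n: table.get(n, 0)

tosses = [1, 0, 0, 1, 1, 0]  # the outcomes observed 'so far'
f = definable_match(tosses)

# The definable function f agrees with every toss observed to date...
assert [f(n) for n in range(len(tosses))] == tosses
# ...and is nevertheless total: it is defined at every future position.
assert f(1000) == 0
```

    No matter how long the run continues, this construction can always be repeated, so lawlessness is never witnessed by any finite amount of evidence.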


    How about the genetic code? That determines outcomes, does it not?Wayfarer

    When we say that genotypes 'determine' phenotypes, we are implicitly referring to a class of situations that we recognise as bringing about this determination via the empirical contingencies of nature that we are unable to fully describe, control or predict. And so we are not appealing to causal or logical necessity when we recognise this determination; we are only appealing to our expectations of nature with respect to this recognisable class of situations. We could have alternatively said that genotypes 'miraculously' produce phenotypes with respect to such situations that bring about the magic of nature.

    The ontological distinction between miracles and mechanics begs the principle of sufficient reason, which is but another form of absolute infinity in disguise.
  • Logical Necessity and Physical Causation
    Not sure I understand. Why shouldn't determinism be meaningless in such a universe? I understand that from the outside of such a universe all the events in that universe can be known. If you are part of it, your being in it prohibits knowing all happenings, that's clear. But while in it you can still say there is determinism. Without actually knowing what's determined.Haglund

    To put it another way, I am basically arguing that determinism and non-determinism aren't descriptive of phenomena, and therefore shouldn't be considered as being applicable to reality considered in itself. Determinism and non-determinism are descriptive of theories and beliefs concerning the consequences of hypothetical actions, but these concepts are not descriptive of phenomena.

    For instance, suppose that if Bob (P)resses a button, then it results in either a (B)lue light or a (R)ed light: P --> B OR R. Then we might say this theory is "non-deterministic". Equivalently, we could drop the philosophical nomenclature, and simply state that our hypothesis is a co-product in a suitable category.

    But notice that the above theory is simply stating that if Bob presses the button, then one of two possible outcomes is expected. It isn't describing observations of the actual world:

    For suppose that Bob presses the button a potentially infinite number of times, and this results in a potentially infinite sequence of 'random' outcomes, {B, R, B, B, R, R, ...}. The previously observed outcomes can be vaguely summarised as 'possibilities' using the co-product, yet there is no objective test for a random process; for at any time t, the sequence of lights generated so far is always describable by some computable function, and at any time t, any previously assumed computable hypothesis about the generating process of the lights might be falsified. Therefore, as far as phenomena are concerned, there is no discernible distinction between a deterministic process and a non-deterministic process.

    So we have at most a concept of epistemic uncertainty at play. But I don't see anything in the above that refers to the actual world.
  • Logical Necessity and Physical Causation
    If the universe is assumed to be causally closed and to contain a finitely bounded amount of information, then both determinacy and indeterminacy can be rejected as meaningless concepts, on the grounds that neither concept can say anything normative or descriptive about a universe that is considered to be a complete dataset.

    Asking whether or not a finitely bounded universe is deterministic is like asking whether J. R. R. Tolkien's world of Middle Earth is deterministic. The question only makes sense relative to some conception of the transcendental, relative to which the world in question can be regarded as incomplete. In the case of the complete works of Middle Earth, the applicable transcendental concept would be the author J. R. R. Tolkien, who, when considered from an external perspective transcendental to Middle Earth, can be said to have determined the events of Middle Earth. But when Tolkien and his books are considered together as a complete joint system, the question is again meaningless.

    This is more or less the same observation that Bertrand Russell made when he commented to the effect that the concept of cause and effect adds nothing to the joint description of the motions of the stars and planets.

    Causality is really a means of talking about experimental interventions, in which the actions of an experimenter, i.e. the 'causes', are considered to be 'transcendental interventions' with respect to the experiment he is performing. Causality is therefore a "metalogical" concept rather than a logical concept, when we consider a logical system to be a self-contained finite number of axioms with finite proof lengths.

    Science therefore doesn't need causality per se, but only the concepts of internal versus external reasoning relative to the theories in question, plus a notion of implication as provided by relevance logic.
  • Logical Necessity and Physical Causation
    The modern conception of logic (as represented by, e.g., quantum logic and categorical linear logic) is interactive and game-theoretic, where the role of logic isn't to determine or even to predict the outcomes of experiments (which amounts to superstition and fortune-telling), but merely to define protocols of scientific investigation and to document the outcomes of such investigations.

    In game-theoretic fashion, the material implication A --> B is weakly interpreted to refer to some process of interaction between an observer and his environment, a process that in general is vaguely understood and unreliable. For example, 'A' might stand for a message that Alice sends to Bob, and 'B' might stand for a response that Alice expects to receive from Bob in return. Understood this way, logical implication represents an expected or intended dialogue between interacting entities, rather than representing epistemic certainty with respect to a supernaturally infallible process. The role of the modern logician is thus akin to the role of a tennis umpire, who adjudicates and documents the conduct of interacting actors, whilst remaining agnostic with respect to the outcome of the game.
  • Logical Necessity and Physical Causation
    I suggest reading about Linear Logic, which greatly clarifies and narrows the distinction between logical, causal and modal necessity, even if causality isn't directly discussed in the majority of articles on the topic.

    Modern philosophical confusions about the relationship of logic and causality are largely due to the fallacy of 'material implication' - a classically valid mathematical rule of inference, adopted by Frege and Russell, that is inadmissible for causal reasoning.

    According to 'material implication', the hypothesis, rule or law A --> B is equivalent to a data-set of the form NOT A OR B:

    A --> B <---> (NOT A OR B) ,

    where (NOT A OR B) refers to elements of the set {(A = FALSE, B = FALSE), (A = FALSE, B = TRUE), (A = TRUE, B = TRUE)}

    Common sense should instantly recognise this rule as unreasonable, made worse by the fact that (NOT A OR B) OR (NOT B OR A) is a tautology, which, if material implication is accepted, implies that
    (A --> B) OR (B --> A) is true, i.e. that for any event types A and B, either A must cause B or B must cause A.
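    Both claims can be verified mechanically by enumerating the four truth valuations. The following Python sketch (purely illustrative; the helper name `implies` is my own) checks that material implication coincides with (NOT A OR B) on every row, and that the counter-intuitive disjunction holds in every row, even for entirely unrelated A and B:

```python
from itertools import product

def implies(a, b):
    """Material implication: A --> B is defined as (NOT A) OR B."""
    return (not a) or b

rows = list(product([False, True], repeat=2))  # all four valuations of (A, B)

# Material implication agrees with (NOT A OR B) on every valuation.
assert all(implies(a, b) == ((not a) or b) for a, b in rows)

# The disputed tautology: (A --> B) OR (B --> A) is true in every row.
assert all(implies(a, b) or implies(b, a) for a, b in rows)
```

    The truth table thus confirms the tautology; the complaint is not that the calculation is wrong, but that truth-functional implication is the wrong formalisation of causal talk.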

    In classical logic, material implication is valid as a result of accepting the law of excluded middle. On the other hand, intuitionistic logic, which rejects LEM, thereby rejects material implication, whereby only the inference (NOT A OR B) --> (A --> B) is intuitionistically valid.

    But if causal theories are supposed to summarise and describe our experimental interventions in the course of nature, then even the intuitionistically valid latter inference rule is inadmissible, considering the fact that even if (NOT A OR B) is observed, this doesn't necessarily imply that manipulating events of type A influences events of type B, since A-type events might not be relevant to B-type events. This leads us to the so-called 'Relevance Logics', which include linear logic, in which A --> B is interpreted to mean 'one A-type resource transforms into one B-type resource'.
  • Atheism & Solipsism
    In my opinion,

    Metaphysical Solipsism : True by definition.
    Methodological Solipsism : Unavoidable.
    Psychological Solipsism : Dangerous and unhealthy, avoid at any cost.
  • Mindfulness: How Does the Idea Work Practically and Philosophically?
    The tasks which you suggest like gardening and caring for a pet aren't what I would call mindlessness, although the only one of them which I ever do is painting. I would say that painting is a form of mindfulness because it is about active attention, especially in relation to the experience of the senses. My biggest example of mindlessness would be going out and getting drunk. I have done it a few times to cope with stress and it involves blotting things out, especially emotional distress.Jack Cummins

    My problem with the term "mindfulness" is that it might be interpreted as selectively paying attention to, and thereby inadvertently feeding, preconceived Cartesian notions of self/ego. To me the term intuitively implies self-monitoring, self-judgement and self-obsession, which can only feed self-consciousness, introspection and anxiety, and ultimately behavioural avoidance of anxiety-provoking situations.

    Of course, advocates might say "no, mindfulness is about passive observation and acceptance of the mind". But is the passive observation of the mind a valid concept? Doesn't the very act of paying attention to a thought create it? And how can one even choose to observe passively, given the fact that the very intention to be mindful is agenda-driven?

    On the surface at least, mindfulness therapies, especially as they are marketed in consumerist contexts, seem to me like a denial of, or an excuse to avoid, the socio-political reality that ultimately determines thought and behaviour.

    Also, if a person denies the existence of the Cartesian self, then what does 'mindfulness' amount to for that person, with that understanding? For that person, doesn't the concept of "mindfulness" become broadened to the point of not excluding any mental or physical activity?
  • Infinites outside of math?
    There is no "pausing" and "restarting", only a reference to subsequences leading toward a limit definition.jgill


    If a student asked you to explain "what is a non-terminating process?" what would your reply be, and how would you avoid running into circularity?

    I cannot think of any way of explaining what is meant by a non-terminating process, other than to refer to it as a finite sequence whose length is unknown. Saying "Look at the syntax" doesn't answer the question. Watching how the syntax is used in demonstrative application reinforces the fact that "non-terminating" processes do in fact eventually terminate/pause/stop/don't continue/etc.

    The creation of numbers is a tensed process involving a past, a present (i.e. a pause), and only a potential future.


    Where do you find "eventually finitely bounded" in Brouwer, or even any secondary source, on potential infinity? Please cite a specific passage.
    TonesInDeepFreeze

    It is a logically equivalent interpretation of Brouwer's unfinishable choice sequences generated by a creating subject, and I have already presented my arguments in enough detail as to why it is better to think of potential infinity in that way.
    You said that your claims about the notion of potential infinity are supported by the article about intuitionism. Now you're jumping to non-standard analysis. You would do better to learn one thing at a time. You alrady have too many serious misunderstandings of classical mathematics and of intuitionism that you need to fix before flitting off fro them.TonesInDeepFreeze


    Intuitionism is partially aligned with constructively acceptable versions of non-standard analysis. If you want a more authoritative but easy-to-read sketch, read Per Martin-Löf's "The Mathematics of Infinity" to see the influence choice sequences have had on non-standard extensions of type theory (which still cannot fully characterise potential infinity, due to relying exclusively on inductive, i.e. well-founded, types).

    You say, "in mathematics". The ordinary mathematical literature does not use 'absolute infinity' in the sense you do. As I've pointed out to you several times, 'absolute infinity' was a notion that Cantor had but is virtually unused since axiomatic set theory. And Cantor does not mean what you mean. So "in mathematics" should be written by you instead as "In my own personal view of mathematics, and using my own terminology, not related to standard terminology". Otherwise you set up a confusion between the known sense of 'absolute infinity' and your own personal sense of it.TonesInDeepFreeze

    Classical mathematics and set theory conflate the notions of absolute and potential infinity, hence only the term "infinity" is required there. Not so in computer science, where a rigorous concept of potential infinity becomes needed, and where ZFC is discarded as junk.

    Cantor does mean what I mean, insofar as his position is embodied by the axioms of Zermelo set theory:

    1) The Law of Excluded Middle is not only invalid, but false with respect to intuitionism.

    2) The axiom of regularity (added in ZFC) prevents the formulation of unfinishable sets required for potential infinity.

    For example, in {1,2,3, ...}, where "..." refers to lazy evaluation, there is nothing wrong with substituting {1,2,3, ...} indefinitely for "...".
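    Lazy evaluation in this sense is directly expressible in a language with generators. A minimal Python sketch (the variable names are my own, for illustration): the sequence below is 'potentially infinite' in that no element exists until it is demanded, and at any moment only a finite prefix has ever been produced.

```python
from itertools import count, islice

# A lazily evaluated, never-finished sequence 1, 2, 3, ...
# Nothing is computed until a consumer demands the next element.
naturals = count(1)

# Demanding five elements produces a finite prefix; the "..." remains
# an unevaluated promise rather than a completed totality.
prefix = list(islice(naturals, 5))
assert prefix == [1, 2, 3, 4, 5]
```

    At every stage of evaluation the object in hand is a finite prefix plus an unevaluated remainder, which is the computational reading of potential infinity.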

    3) Choice Axioms obscure the distinction between intension and extension, whereupon no honest mathematician knows what is being asserted beyond fiat syntax when confronted with an unbounded quantifier.

    4) The Axiom of Extensionality: according to absolute infinity, two functions with the same domain that agree on 'every' point of the domain must be the same function. Not so according to potential infinity, since it cannot be determined that two functions are the same given a potentially infinite amount of data.

    We also have Markov's Principle: according to absolute infinity, an infinite binary sequence S must contain a 1 if it is contradictory that S is constantly zero, and hence MP is accepted. Not so according to potential infinity, due to the fact that a 1 might never be realised. This principle is especially relevant with respect to proof theory, since any proof by refutation must eventually terminate at some point, before knowing for certain whether an unrefuted statement is refutable. So unless one is a platonist who accepts absolute infinity, Markov's principle isn't admissible.