• Dualism and Interactionism
    The debate as to whether QM randomness is aleatoric or epistemic is presumably rendered moot by modern understanding of Quantum Contextuality; for in the case of entangled systems it isn't possible in principle to assign a complete probability distribution over the joint values of every quantum observable, as per the Kochen-Specker theorem.
  • Teleology and Instrumentality
    I feel like you are making a category error with respect to 'final cause'. What do you mean by that term and why does your 'initial cause' make that meaning superfluous?
    Leontiskos

    To use the SEP's description of Aristotle's causal ontology:

    The material cause or that which is given in reply to the question “What is it made out of?” What is singled out in the answer need not be material objects such as bricks, stones, or planks. By Aristotle’s lights, A and B are the material cause of the syllable BA.

    The formal cause or that which is given in reply to the question “What is it?”. What is singled out in the answer is the essence or the what-it-is-to-be something.

    The efficient cause or that which is given in reply to the question: “Where does change (or motion) come from?”. What is singled out in the answer is the whence of change (or motion).

    The final cause is that which is given in reply to the question: “What is its good?”. What is singled out in the answer is that for the sake of which something is done or takes place.

    Let us define a final cause to be reducible to the first three causes if there exists a causal model, consisting only of the first three causes applied to one another in an adaptive feedback loop, that reproduces the effects attributed to the final cause.

    I am asserting that all final causes are reducible in the above sense. This is equivalent to asserting the existence of a computer simulation of all phenomena attributed to final causes.
  • Teleology and Instrumentality
    I feel like you are making a category error with respect to 'final cause'. What do you mean by that term and why does your 'initial cause' make that meaning superfluous? Surely Aristotle was not "speaking informally about evolutionary feedback" when he used the term, given that he was not aware of Darwinian evolution.
    Leontiskos

    Had Aristotle known about evolution, he could have explained the regularity of nature without appealing to final causes, appealing only to adaptive feedback in the cycle of life. His arguments don't amount to a proof of the necessity of final causes, but only to a demonstration of the insufficiency of causal models that don't take adaptive feedback into account.
  • Teleology and Instrumentality
    I'm not sure I understand your question. Let's take an example: the final cause of an acorn is an oak tree. Presumably you are positing that there is some "initial cause" which makes this final cause superfluous?
    Leontiskos

    Yes. To say that an oak tree is a "final cause" of an acorn is to speak informally about the evolutionary feedback that determined the chemistry of dendrology, which, when applied to a given acorn, refers only to a directed chain of causality whose conclusions are fully determined by initial conditions.
  • Freedom and Process
    Yes, although I would reject the concept of "self determination" as meaningless.

    IMO, "determination" is only meaningful when used in the sense of the verb "to determine" as a practical relational concept referring to the control of one process by another process when they are treated as autonomous entities, as embodied by the concept of multiplayer games.

    By that norm, J. R. R. Tolkien can be said to have "determined" the universe of Middle Earth, a fact that can only be understood by those of us who exist outside of the universe of Middle Earth. But from the perspective of the characters within Middle Earth, mythologized as a closed system, their lives can be said to be neither determined nor undetermined.
  • Artificial intelligence
    The concept called "Intelligence" - whether natural or otherwise - is largely an anthropocentric misconception of Cybernetics that is born out of Cartesian folk-psychology. I think the word "Intelligence" should be eliminated from the sciences.
  • Teleology and Instrumentality
    Can someone give me an example of a "final cause" that cannot be eliminated for initial causes?

    The examples that Aristotle provides, e.g. the functions of the human body, aren't such examples, because they can be eliminated for the initial causes of Darwinian evolution.
  • Absolute nothingness is only impossible from the perspective of something
    In type theory as used in computer programming languages, "nothing" is interpreted as a type that has no values but that can be eliminated for values of every type, which is in semantic alignment with your name "nothing implies everything". Practically speaking, this can be interpreted as referring to a function that allocates memory for the values of any input type.



    Likewise, for 'destroying' elements of a type, i.e. to free their memory, we have



    "Deallocate" can therefore be regarded as the 'negation' of "Deallocate" and vice-versa.

    In particular, the type parameter can be taken to be "nothing" itself, which apparently denotes useless instances of the above functions that do nothing (since we officially have no values of "nothing" to read or write with). This is because the meaning of "nothing" in our programming language is to denote the mysterious origin and destination of the values in our programs, i.e. the external compiler/OS that our programs depend on for resources, but whose resources cannot be talked about or reasoned with within our programming language.

    Yet suppose the following isomorphism is taken to be true and can be manipulated within our programming language.



    This equivalence is regarded as true in models of classical linear logic. Then this implies that "nothing" can be iteratively expanded as a potentially infinite list of ... values of some sort? - no, for that possibility was already forbidden - then of memory addresses for storing values! In which case, if we allowed our programming language to manipulate memory addresses directly (e.g. as in C/C++), then we can interpret "nothing" within our programming language, and consequently give meaning to and take control of allocate/deallocate in the intuitive and practical way, for example by defining







    And paired with a respective definition for 'deallocate' that destroys these values by freeing their corresponding memory addresses and then returning them.
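
    For concreteness, a definition of the kind gestured at might look as follows (a sketch in Haskell using its foreign-memory primitives; the names allocateAt and deallocateAt are purely illustrative):

    import Foreign.Marshal.Alloc (malloc, free)
    import Foreign.Ptr (Ptr)
    import Foreign.Storable (Storable, peek, poke)

    -- store a value at a freshly obtained memory address and return the address
    allocateAt :: Storable a => a -> IO (Ptr a)
    allocateAt x = do
      addr <- malloc
      poke addr x
      return addr

    -- read the value back, free its memory address, and return the value
    deallocateAt :: Storable a => Ptr a -> IO a
    deallocateAt addr = do
      x <- peek addr
      free addr
      return x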

    So to summarise the above examples, "Nothing" initially referred to the mysterious effect of a destroyed value of a type and to the mysterious cause of an initialized value of a type within a programming language that could only talk about values. But by expanding the semantics of our language, we eliminated dead talk of "Nothingness" for talk about the interactive duality of "type values" and "memory addresses".

    I think the lesson is, "nothingness" denotes epistemic ignorance and/or semantic scoping and can be eliminated from empirical and rational discourse.
  • A very basic take on Godel's Incompleteness Theorem
    Listed as 402 pages. Godel's paper is 34 pages. Interestingly, I find only one place where (in my translation) Godel uses "true" or variants: section 3, "The following proposition is true: Theorem VII: Every recursive relation is arithmetical." In the rest it's "provable" or "decidable," and variants.
    tim wood

    Yes, that would be because Godel wanted to reinforce the point that his theorems have entirely constructive proofs that don't appeal to the law of excluded middle or to meta-mathematical assumptions. Unfortunately, that point was lost on Douglas Hofstadter, who only caused misunderstanding of the theorems by linking them to the meta-mathematical concept of self-reference, which isn't of any proof-theoretic relevance to them.

    Peter Smith's book is much longer because it also covers related topics, developed after Godel, that are of relevance to his now ancient theorems, even if the book is badly outdated in certain respects and unhelpfully assumes a background of classical logic, presumably for historical reasons.

    The usual locution I find is that G is undecidable, but because G says it's undecidable, it's true; this a so-called metamathematical proof being outside the system in which G is created.
    tim wood

    Well, syntactical incompleteness does at least imply that any system of mathematical logic is an unfinishable open system, such that any sufficiently expressive axiomatic system can be consistently extended in an infinite number of mutually inconsistent ways, leading to an infinite number of conflicting rival interpretations with respect to any theory of arithmetic. A problem with your meta-mathematical heuristic however, aside from the fact that it isn't relevant to Godel's constructive proof of the theorems, is that it assumes the existence of mathematical truth prior to system construction - which might or might not be epistemically appropriate in a given situation.

    Essentially, if one accepts Godel's completeness conjecture, namely the conjecture that first order logic is sound and complete with respect to the theorems that are tautologies, i.e. the sentences that are satisfied by every model of the logic, then either a theorem is syntactically provable in such a logic or there must exist a model in which the theorem is false. So the heuristic you mention is a potentially useful heuristic for automated theorem proving - if the completeness conjecture is assumed, which is generally appropriate for classical logic, then a theorem can be disproved by merely showing that its negation is satisfiable in a model of the logic, meaning that one can "observe" the theorem to be false, without having to laboriously work "within the system" to produce a syntactical proof of the negated theorem.

    Syntactically however, G is merely the fixed point of an alternating fixed-point equation, whose iteration causes an increase in the number of negation signs at the front of the fixed point. It is structurally similar to a coalgebra representing a potentially infinite list of oscillating boolean values, objects that are normally called "impredicative" rather than "self-referencing" in the literal sense implied by Hofstadter.
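
    Roughly, the kind of coalgebraic object meant here can be sketched as follows (a Haskell illustration of my own, not anything taken from Godel's paper): a stream coalgebra whose one-step unfolding flips a boolean, yielding a potentially infinite list of oscillating truth values.

    data Stream a = Cons a (Stream a)

    -- one unfolding step of a coalgebra: produce a head and the next state
    unfold :: (s -> (a, s)) -> s -> Stream a
    unfold step s = let (a, s') = step s in Cons a (unfold step s')

    -- the alternating booleans: each step negates the previous state
    oscillate :: Stream Bool
    oscillate = unfold (\b -> (b, not b)) True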
  • A very basic take on Godel's Incompleteness Theorem
    The book "An Introduction to Godel's Theorems" by Peter Smith would be my recommendation for an eager beginner.

    Alternatively, Wikipedia's page on the Halting Problem conveys in the simplest and quickest fashion nearly all of the relevant computational and philosophical implications of Godelian incompleteness without the torture of Godel numbering.

    In fact, the popular misconception of Godel's theorem as amounting to saying that "a sentence is 'true', assuming that it is unprovable" is a more apt (but still ultimately misleading) conception of a weaker form of incompleteness that is directly implied by the undecidability of the Halting problem, namely that there cannot be a sound and complete axiomatization of the natural numbers, since that would imply the existence of a computation on the natural numbers that halts if it doesn't halt.

    Godel's actual proof, which doesn't appeal to a model or intended interpretation of the axioms of Peano Arithmetic, is a stronger result, since it directly proves syntactical incompleteness without appealing to any notion of soundness. His proof can be intuited geometrically and trivially, as showing that the forest of mathematical proofs that constitutes Peano Arithmetic doesn't describe the background that isn't part of the forest of Peano Arithmetic. Is that really an interesting and deep result?

    So the heuristic argument of there being a "true but unprovable statement" that the public use as a heuristic for understanding or remembering Godel's theorem is very misleading. Their catchphrase is more suitable as a moderately misleading heuristic for remembering the proof of the Halting problem (which also doesn't refer to models, but to external assumptions concerning halting conditions, that with much charity might be interpreted as comprising a weak model of something or other).

    Many mathematicians and logicians cannot themselves be bothered to master Godel's incompleteness proof, for there isn't any payoff for doing so, and will probably content themselves with a technical understanding of the weaker version I mentioned above, which is straightforward to prove and remember, and loses practically nothing.
  • Kripke's skeptical challenge
    I am probably not understanding this at all correctly because its too technical for me but it sounds like its bolstering the Kripke's skepticism rather than really solving anything.
    Apustimelogist

    Alternative foundations for general mathematics and computing can't solve Kripkean skepticism in the sense of providing stronger foundations that rule out unorthodox rival interpretations of mathematical concepts - but they can partially dissolve the skepticism by

    1) Refactoring the principles of logic, so as to accommodate finer-grained distinctions in mathematical logic, particularly with regards to a) distinguishing intensional vs extensional concepts, b) distinguishing between the process of constructing data and communicating it, versus the process of receiving data and deconstructing it, and c) distinguishing between the various different meanings of finitism that are equivocated with classical logic.

    2) Weakening foundations so as to assume less to begin with. This replaces skepticism with semantic under-determination. E.g., if "plus" is considered to be a finitary concept that does not possess a priori definite meaning to begin with, then Kripkean doubt about its meaning doesn't make as much sense.

    In summary, a good logic from the perspective of computer science describes the process of mathematical reasoning intuitively and practically in terms of a finite series of interactions between agents playing a partially understood multi-player game, in which no agent is the sole authority regarding the meaning and rules of the game, nor does any agent have omniscient knowledge regarding the eventual outcome of following a given strategy.
  • Kripke's skeptical challenge
    Kripke allows that mathematicians can adequately specify the rules of addition. That's not being called into question.
    frank

    That depends on the sense of adequacy you are referring to. The question is, how can an intensional definition of addition such as an inductive definition, which is finitely specified and only provides an inductive rule for performing a single step of computation, imply an unambiguous and extensionally infinite table of values? As Kripke himself pointed out (IIRC), as the numbers to be added get very large, there is increasing uncertainty as to what the meaning of a "correct" calculation is, for finite computers and human beings can only cognize truncations of the plus function. And a "gold standard" for the extensional meaning of addition, up to arguments large enough for most human purposes, hasn't been physically defined by convention as it has for the meaning of a second of time.
  • Kripke's skeptical challenge
    Can you demonstrate how quus is dealt with by the approaches you have said?
    Apustimelogist

    A philosophical commonality of those approaches is that mathematical objects are treated as being finite. So for example, plus is permitted to exist intensionally in the sense of an algorithmic specification, but not in the extensional sense of a completed table (unlike with the quus function). Likewise, all sequences are treated as being necessarily finite and generally unfinished. This apparent restriction is compensated for by allowing objects to grow over time (technically, growable objects are describable by using what are now referred to as coalgebras and coinduction).

    Intuitionism would call a sequence of n numbers x1,x2,...,xn that were generated by iteratively applying the "plus" function a lawful sequence of n terms. But another sequence consisting of exactly the same numbers x1,x2,...,xn, that wasn't assumed to be generated by some underlying function, would be considered a lawless sequence. In the case of quus, it can be considered to be a lawless sequence, since it is describable as a table of exactly 57 rows and 57 columns. So if that table is defined to mean "quus" as a matter of tautology, then skepticism as to what the function quus refers to can only concern operational assumptions regarding how the table should be evaluated - however, skepticism of this sort is accommodated by intuitionism, since intuitionism doesn't consider mathematical truth to be a priori and the properties of unfinished sequences are allowed to change over time. The situation with quus is at least a partial improvement upon plus, whose table cannot even be explicitly stated. Quus is more or less a truncation of plus, and roughly speaking, intuitionism considers such "truncations" as constituting the basis of mathematical analysis.
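
    The intensional/extensional contrast can be made concrete with a small sketch (Haskell, illustrative only; recall that Kripke's quus agrees with plus when both arguments are below 57 and returns 5 otherwise, so only its sub-57 fragment is written out as a finite table here):

    -- plus, given intensionally: a rule for one step of computation at a time
    -- (terminates only for non-negative second arguments)
    plus :: Integer -> Integer -> Integer
    plus x 0 = x
    plus x y = 1 + plus x (y - 1)

    -- the sub-57 fragment of quus, given extensionally: a finite 57 x 57 table
    quusTable :: [((Integer, Integer), Integer)]
    quusTable = [ ((x, y), x + y) | x <- [0 .. 56], y <- [0 .. 56] ]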


    Additionally, in Linear Logic, terms and constants can only be used once. So two uses of a function demand two separate and distinct copies of that function. Linear logic includes a copy operation (the so-called exponential rules), meaning that the logic can be used without loss of generality, but this forces the mathematician to justify and account for his use of resources, in the same way that engineers must always consider the possibility of numerical overflow and hardware failure.
  • The meaning of meaning?
    It's interesting to speculate what effects mind-reading technology could have on our linguistic conventions. It is conceivable that the use of white-box methods for directly probing correlations between speakers' mental states and their use of linguistic expressions could lead to an enriching of ordinary communication, e.g. the English language might introduce a public name for the mysterious sensation correlated with rising blood-pressure that everyone had hitherto ignored, apart from the private linguist who scribbled "S" in his diary.
  • Kripke's skeptical challenge
    Kripke's mistake (assuming I am recalling his position correctly) was phrasing the skepticism as a circular question to a mathematician, in which the mathematician is asked to defend the validity of his own judgements, as in

    "How do you know that your present usage of "plus" is in accordance with your previous usage of "plus" ?"

    That question is easily viewed as nonsensical, since it can be interpreted as asking a person to question their own sanity. Similarly bad phrasing, leading to pointlessly circular discussion, is found throughout the philosophy literature on private language arguments.

    Different conceptions of logic and semantics cope variously with the question of meaning skepticism. For instance, Classical Logic with set-theoretic semantics, as in Model Theory, lends itself to the idea of semantics being static, a priori, unambiguous, infinite and transcendent of the finitely observed behavior of a formal system that is said to correspond to the semantics. Such "picture theories" of meaning, which place semantics in an exalted position above the cut and thrust of computation and IO, naturally provoke skepticism as to the relevance, utility and even existence of semantics, as evidenced by the formalists of the last century. Similarly, I think Kripke's (misconceived) interpretation of Wittgenstein was partly born out of this obsolete semantic tradition that he was part of but couldn't see beyond, due to the lack of a formalized alternative approach.

    Intuitionism copes better with semantic under-determination, because it assumes less meaning to begin with; it interprets infinity as referring to unspecified finite extension of indefinite length, implying that all data is finite and that all symbolic meanings have a finite shelf-life. So it doesn't consider mathematics or logic to consist of an actually infinite number of semantic facts that finite linguistic practices must miraculously account for. Consequently, intuitionism permits a tighter identification of logic with a suitably non-standard version of set-theory, narrowing the opportunity for semantic skepticism.

    A more modern alternative is to place syntax and semantics on an exactly equal footing, by considering them to refer to opposing sides of a dialogue between two or more agents, where what is considered to be syntax and what is considered to be semantics is a matter of perspective, depending on who is asking questions and who is answering them. Girard's Ludics is a formalisation of this pragmatic idea of meaning as interaction, and is of relevance to the rapidly emerging discipline of interactively-typed languages and interactive AI, in which no individual party has full control or understanding of the language they are using, whereupon the meaning of a type or symbol is identified with its observational history of past interactions.
  • A Wittgenstein Commentary
    A central question of concern when it comes to interpreting the late Wittgenstein and what he meant by "private language" is the following issue:

    " ..Are certain norms valid, or in force, because certain things such as linguistic expressions and intentional states have certain meanings/contents? Or do such things have meaning/content because some norms are in force? We shall distinguish between meaning/content “engendered” (ME/CE) normativity and meaning/content determining (MD/CD) normativity (cf. Glüer & Wikforss 2009). MD/CD norms are such that they metaphysically determine, ground, or constitute meaning/content; here, the norms are prior. ME/CE normativity is normativity engendered by, or consequent upon, meaning/content, regardless of how the latter is determined. "


    https://plato.stanford.edu/entries/meaning-normativity/

    To understand this question, one must decide the extent to which one considers norms and intentional states to be grammatically related, i.e. to what extent are norms considered to be part of the very meaning of "intentional states" (or vice-versa)?

    Take the following examples:

    Suppose a zoologist asserts that a particular species of animal exhibits signs of intentionality. To what extent should the zoologist be interpreted as referring to the norms of zoology, as opposed to the actual "perspective" of the animal in question?

    What if an AI engineer asserts that ChatGPT123 demonstrates intentionality? Is their answer more concerned with the norms of AI engineering than in the purportedly more descriptive zoological science example?

    And what of our ordinary attribution of other minds? If Alice recognizes that Bob has a mind that is distinct from hers, to what extent should she be interpreted as referring to her personal state of empathy, as opposed to the state of Bob's actual behaviour?

    In the case of Wittgenstein's "Beetle in the Box", the specific issue is the relationship between, on the one hand, the customs of language that are invariant to personal circumstances and perspective and thus incapable of representing 'private objects', versus, on the other hand, the language users who express their personal perspectives using such aperspectival customs. The analogy of a children's playground comes to mind. The function of swings, climbing frames and roundabouts is to accommodate the needs and perspectives of children, in a manner that abides by norms of health and safety. But obviously it would be a terrible category error if one attempted to explain why and how children use playgrounds by appealing to the design of the playground and the norms of the controlling adults. Likewise, the beetle-in-the-box argument wasn't made to deny the semantic importance of intentional content, but to stress how social customs, such as the custom of physical language, have evolved to facilitate the expression of intentional content.
  • Belief
    Truth conditional semantics implies that beliefs don't exist, for beliefs are teleological entities.

    The meaning of belief-behaviour isn't reducible to its effects, for obvious causal reasons. Objectively, the meaning of an agent's "beliefs" can only refer to their stimulus-response disposition, which is in turn explained causally without appealing to teleology.

    When an external observer interprets a performing agent as having beliefs, those "beliefs" only exist in the prejudiced mind of the external observer who interprets the agent using normative teleological principles.
  • The Worldly Foolishness of Philosophy
    Is the philosopher a life coach? A spiritual advisor? This “philosopher” is analogous to a nutritional supplement, which is to say as a piece of technology, tested qualitatively like a new painkiller or piece of music in terms of feeling it gives us.

    Or can we really take seriously the idea that a philosopher is essentially “scientific” in some radical, foundational sense? Is the philosopher a kind of “pure mathematician” of existence as a whole? I say “pure” because I want to highlight an impractical interest in truth for its own sake. Even an unpleasant truth is still good, because it is possessed as truth, because it’s worse to be confused or deceived.
    plaque flag

    Mathematicians are also "spiritual advisors", for mathematics and logic are normative disciplines. Going along with Hacker's interpretation of Wittgenstein, I think it is most useful to consider the academic disciplines of pure Mathematics and logic as not having subject matter in themselves, but as defining semantical norms of representation that facilitate the translation, coordination and comparison of language games that do possess subject matter. Many language-games that are partly mathematical, e.g. the natural sciences, possess some degree of subject matter to the extent that they are applied disciplines that establish synthetic propositions.

    As for philosophy itself, I think it is reducible to every other subject and their interrelations.
  • Belief
    In AI, the word "belief" is principally used to denote an agent's understanding of cause and effect in terms of his "Cartesian" mental model of the world. Here, a "belief" can be identified as a map involving the following types, which comprise the agent's internal epistemic state.

    Belief : (History of Observations x Set of Possible Actions) --> Power Set of Observations

    "History of Observations" comprises the agent's understanding of his external world, and refers to his memory of observations up to and including his present observations. Conditioned on his observation history, he infers the consequences of performing a possible action. In order to accommodate causal and epistemic uncertainty, the agent will in general map a possible action to a set of potential observations, hence the use of the power set on the right hand side.

    Nowhere in the above definition is the world, as we outsiders understand it, referred to by the agent's beliefs, for the agent's beliefs are understood purely in terms of the agent's mental functioning and stimulus responses. It should also be understood that from the point of view of the agent, his observation history is his "external world".

    It makes no sense for onlookers to interpret an agent as referring to anything other than his memories and sensory surface. As far as an onlooker is concerned who is trying to understand the agent's beliefs, the world that is external to the agent is only relevant and useful to the onlooker in so far as the onlooker lacks knowledge of the agent's mental functioning - in which case the onlooker can infer the agent's mental disposition by observing the agent's behavioural reactions to external stimuli originating in the world of the onlooker. But if an onlooker were to possess perfect knowledge of the agent's mentation, then as far as that onlooker is concerned, the state of the external world would be irrelevant with regards to understanding what the agent believes.

    No matter how much a community of agents might appear to agree (or disagree) that "such and such is true of "the" real world", as far as linguistic designation is concerned they are merely talking past one another and gesticulating towards different and unsharable private worlds corresponding to their individuated mental processes.
  • Sleeping Beauty Problem


    The sample space of any room is { H, (T,F), (T,S) }

    where F and S refer to First Stay and Second Stay, respectively

    with probability measure

    M(H) = 1/2
    M(T,F) = 1/4
    M(T,S) = 1/4

    (a consequence of your assumed prior probabilities)

    Define a variable indicating the stay

    Stay (H) = First
    Stay (T,F) = First
    Stay (T,S) = Second


    P(Stay = First) = M(H) + M(T,F) = 3/4
    P(Stay = Second) = 1/4

    That's all that can be said, unless I've overlooked an aspect of your problem. Which stay it is would give new information to the sitter about the coin, but alas she doesn't know this information. To get a hotel, for the purposes of obtaining a statistically interpretable result, simply take the product-space of the sample spaces for each room, and take the product of each room-specific measure M.
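
    In code form (a Haskell sketch with names of my own choosing), the single-room sample space, measure and Stay variable look like this; a hotel is then the product of such rooms, carrying the product measure:

    data Room = Heads | TailsFirst | TailsSecond deriving (Eq, Show)

    measure :: Room -> Rational
    measure Heads       = 1/2
    measure TailsFirst  = 1/4
    measure TailsSecond = 1/4

    data StayV = First | Second deriving (Eq, Show)

    stay :: Room -> StayV
    stay Heads       = First
    stay TailsFirst  = First
    stay TailsSecond = Second

    probStay :: StayV -> Rational
    probStay s = sum [ measure r | r <- [Heads, TailsFirst, TailsSecond], stay r == s ]

    -- probStay First == 3/4, probStay Second == 1/4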

    As before, the prior probability of the coin landing heads is given in the premises, and the sitter has no new information upon which to condition the state of the coin, meaning that its probabilities remain unchanged.

    I think the version of the hotel in which subjects are assigned to different rooms on each awakening is more interesting, because it reveals the importance of counterfactual reasoning when the sitter allocates her credences, which thirders blindly ignore in their "picture theory" of credence allocation.
  • Sleeping Beauty Problem
    Now suppose she doesn't make any assumptions regarding her room allocation. What then?

    For all she knows, she might have been hand-picked to babysit a "heads" room, in which case the probability that her room is of a heads coin is 1. Due to possibilities such as this, she should

    1) Refrain from committing to a probability that her room is of a heads coin, until further information is given.

    but nevertheless

    2) Deduce that the probability of her room coin landing heads is 1/2, given that it is known information.
  • Sleeping Beauty Problem
    How does Sue deduce that "here" indicates she is in a room occupied by a participant scheduled for two awakenings with a probability of 2/3? She draws this conclusion from the fact that twice as many such rooms exist in the facility compared to rooms allocated to participants with only one scheduled awakening. So, from Sue's perspective (based on the exact same evidence she shares with the participant), she concludes that the coin landed tails with a 2/3 probability, despite the coin having a 1/2 propensity to land tails. Sue's credence that the coin landed tails is a consequence of both the initial propensity of the coin to land tails and the propensities of the experimental setup to place her in a room that corresponds to a tails outcome.
    Pierre-Normand

    Two "tail" rooms refer to the same coin outcome. So when computing the probability that the coin of her room landed heads, she needs to take into consideration the other room she might have occupied.

    So assuming that her supervisors assign her room using a draw from a uniform distribution, her credence that the coin of her room landed heads, should be 1/2
  • Sleeping Beauty Problem
    Because her credence wasn't meant to reflect the proportion of winning bets but rather the proportion of awakening episodes that occur during an experimental run that results in the (initial) bet being won. Those two aren't the same. If Sleeping Beauty had instead been afforded the opportunity to bet on each awakening episode, then her winning frequencies would exactly match her credence that she is winning that bet.
    Pierre-Normand

    The question she is asked is in the context of a single trial. It is merely

    "What is your credence now for the proposition that the coin landed heads?"

    The problem refers to the outcome of a particular coin toss, with a sample space S = {H,T} describing two possible worlds. So the problem does not concern frequencies. Both halfers and Elga agree that the propensity of the coin to land heads is 1/2. So the measure M over S is

    M (H) = M(T) = 1/2

    We are given two causal implications with respect to S that characterise two random variables, namely

    Awaken_Mon (H) = True
    Awaken_Mon (T) = True

    and

    Awaken_Tue (H) = False
    Awaken_Tue (T) = True

    Both of which are "measured" by M to give

    P( Awaken_Mon = True) = 1
    P( Awaken_Tue = True ) = 1/2

    This fully describes the stated causal relationships of the Sleeping Beauty Problem, that go from the state of the coin to the states of two awakening events.
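
    The above can be written down directly (a Haskell sketch, with names of my own choosing):

    data Flip = H | T deriving (Eq, Show)

    m :: Flip -> Rational            -- the measure over the coin's sample space
    m H = 1/2
    m T = 1/2

    awakenMon, awakenTue :: Flip -> Bool
    awakenMon _ = True               -- awakened on Monday under either outcome
    awakenTue H = False              -- awakened on Tuesday only if tails
    awakenTue T = True

    prob :: (Flip -> Bool) -> Rational
    prob v = sum [ m w | w <- [H, T], v w ]

    -- prob awakenMon == 1, prob awakenTue == 1/2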

    How do you suppose that we

    (1) relate the random variables, whose outcomes refer to events in the B series of time, to the indexical "Today" of the A series, whose relationship to B series and to the physics of coin-tosses isn't specified by the problem?

    and

    (2) How should we apply induction to "invert" the causal direction from "Today" to the state of the coin, assuming we have a satisfactory answer to (1) ?

    Unless thirders can answer (1) and (2), their epistemic probabilities are meaningless.
  • Sleeping Beauty Problem
    Your suggestion that a thirder expects to gain from choosing amnesia would depend on her conflating the probability of making a correct prediction upon awakening with the frequency of the actual payout from the initial bet.
    Pierre-Normand

    They do indeed conflate them. Why would SB assign a credence of 1/3 to heads, if she knows

    1) That the physical probability of winning the lottery is 1/2

    2) That amnesia doesn't influence lottery outcomes.

    3) That had she not had amnesia, then she would either know the result of the lottery on waking up, or else refer to its probabilities as being 1/2?


    Since thirders make no principled distinction between causes and effects, they do indeed encourage avoidance-coping. For by their logic, one should raise one's credences for one's preferred outcomes by avoiding information.
  • Is consciousness present during deep sleep?
    So, why can't we feel anything under total anesthesia? Because we are unconscious.
    Alkis Piskas

    Not feeling anything is a definition of unconsciousness.

    But how can you verify that you feel nothing under anaesthesia?
  • Is consciousness present during deep sleep?
    Who is such a hard nosed empiricist that he can't learn from someone else? People tell me I snore. Despite having no conscious recollection of snoring, I believe them.
    wonderer1

    Other people can hear your snoring, but they cannot observe an absence of your experiences on your behalf, and neither can you. All they can observe is an absence of your experiences on their behalf in terms of their behavioural definitions of your experiences.
  • Is consciousness present during deep sleep?
    When I awoke this morning, I felt as if time had passed. This coheres with biological hypotheses that some level of attention was present, at least sporadically, throughout the night.

    However, the question isn't only a matter of psychobiology, but also a question of phenomenology, perspective and empirical logic that neuroscientists tend to overlook:

    A subject in an ideal sleep lacks consciousness from the perspective of external observers according to behavioural definitions of "consciousness" that refer to inter-subjectively ascertainable criteria such as stimulus responses, memory recall and directed attention. And yet from the subjective phenomenological point of view, the first person never sleeps. After all, I cannot experience myself sleeping, and so I cannot have direct knowledge that I have ever been previously unconscious in the pure sense.

    If I awaken from a coma, I might infer that I was previously unconscious on the basis of my present state of amnesia, together with reports I am told from external observers who monitored my behaviour. The question is, are these grounds sufficient for me to establish the proposition I was previously unconscious in the phenomenological sense of "experiencing nothing"? Shouldn't a hard-nosed empiricist who demands verification criteria, reject this commonly held conclusion as meaningless or false?
  • Sleeping Beauty Problem
    Is/does she paid/pay this $1 on both days, or on Wednesday after the experiment is over? In the latter case, can she choose not to have amnesia, and then choose "Heads" if she recalls no other waking but change that to Tails if she does?
    JeffJo

    I simplified my thought experiment to say that she loses $1 on Monday if heads comes up, otherwise she loses nothing. No stake is involved, and she isn't asked to bet on the outcome during the course of the experiment.

    It boils down to the following question. Should she risk a single draw from

    P(Flip = Heads | Today = Monday) = 1/2 (no amnesia)

    Or should she risk a single draw from

    P(Flip = Heads | Today = Monday Or Tuesday) = x (amnesia)

    According to halfers, x = 1/2, implying that she gains nothing by choosing amnesia.

    According to thirders, x = 1/3, indicating that she gains from choosing amnesia.

    Say I draw a card. After I look at it, I tell Andy that it is a black card, Betty that it is a spade, Cindy that its value is less than 10, and David that it is a seven (all separately). I ask each what they think the probability is that it is the Seven of Spades. Andy says 1/26, Betty says 1/13, Cindy says 1/32, and David says 1/4. All are right, but that does not affect my draw. I had a 1/52 chance to draw it.
    JeffJo

    If you are referring to a context involving repeated trials, then all of your probabilities are physically meaningful posterior probabilities, for we are confident on the basis of our past experience with card games that the frequencies will roughly obtain. In which case none of your above probabilities are appealing to the controversial principle of indifference.

    In stark contrast, Elga (for example) invokes the principle of indifference on the basis of ignorance to assert

    P(Monday | Tails) = P(Tuesday | Tails)

    Leading him to a conclusion that conflicts with his knowledge of fair coins and that encourages avoidance coping. What he ought to have done is to represent his ignorance with the maximal set of permissible distributions and to assign confidence bounds, until such time as he has reason to whittle the set down to a smaller subset and make more specific predictions.

    Or if he really must assign a single distribution (I can't think of a good reason), to choose one whose deductive implications cohere with his broader state of knowledge.
  • Sleeping Beauty Problem
    1/3 would be her prior upon awakening and before being informed that the day is Monday. Upon being informed of this, Sleeping Beauty would update her credence to 1/2.
    Pierre-Normand

    That is true according to thirder's logic, but it doesn't alter their fallacious implication that consenting to amnesia before the experiment improves SB's chances of getting tails. If SB accepts that amnesia isn't a relevant causal factor of the coin's outcome, then she must avoid using a self-location credence that has that implication.

    Similar magical thinking is demonstrated in avoidance coping. A poor person who has financial anxiety might raise their credence that their bank balance is in the black by choosing not to read their financial statements. Having accurate credences can be painful and reduce one's ability to cope in the short term, so there are short-term psychological benefits in choosing ignorance.
  • Sleeping Beauty Problem
    If she has amnesia she should guess heads and will win 2/3 of the time.

    If she doesn't have amnesia she should guess either on the first wake up (1/2 probability so doesn't matter which she guesses) and she should guess tail with absolute certainty if she remembers having woken up before (ie on her second wake up). Again she will win 2/3 of the time.

    So having amnesia or not does not change the probability that she will win, but the tactics she should use are different.
    PhilosophyRunner

    In my argument, SB isn't asked to guess anything during the experiment. To make things really simple, let us only suppose that SB will lose $1 if the coin lands heads, without a wager being involved.

    According to thirders, if she has amnesia then

    P(Tails and Tuesday) = P(Tails and Monday) = P(Heads and Monday) = 1/3

    So, according to thirders her probability of losing $1 when awakened on Monday in a state of amnesia is 1/3.

    On the other hand, if she doesn't have amnesia when waking up on Monday then thirders will agree that her probability of losing $1 is 1/2.

    So according to thirder logic, it is rational for SB to consent to having amnesia before the experiment begins, in order to bias the coin's outcome towards tails.
  • Sleeping Beauty Problem
    Here's another criticism of the thirder position: Their reasoning implies that self-induced amnesia is a valid strategy for controlling outcomes:

    Suppose that SB gets paid $1 if the coin lands tails, otherwise she must pay $1. Furthermore, suppose that before the experiment begins she is given the choice as to whether or not she will have amnesia during the course of the experiment. According to thirder reasoning, she should choose to have amnesia in order to raise the probability of tails to 2/3
  • Sleeping Beauty Problem
    There's no inconsistency here. It's precisely because the premise holds that the coin is equally likely to land heads or tails when tossed, and the fact that tails would result in twice as many awakenings, that Sleeping Beauty deduces that she could accept 2 to 1 odds on each awakening and break even in the long run.
    Pierre-Normand

    Let P(Flip | Today ) represent the conditional probability of the state of coin, relative to the value of SB's indexical "Today". Then

    2/3 = P(Flip = Tails | Today = Mon Or Tue) = P(Flip = Tails) = 1/2

    What is wrong with this conclusion of inconsistency?
  • Sleeping Beauty Problem


    But indifference with respect to the joint outcomes (Day, Flip) implies not only

    (1) a "posterior" probability for heads that differs from it's prior probability in spite of not learning anything upon awakening and before interacting with the world upon waking up.

    but also

    (2) different credence assignments for marginals one is equally ignorant of.

    So even if (1) is acceptable for Principle of Indifference advocates, how can they justify (2) that flagrantly disregards their beloved principle?

    From the premises

    P(Flip = heads) = P(Flip = tails) = 1/2
    P (Day = mon | Flip = heads) = 1
    P (Flip = tails | Day = tue) = 1

    If a PoI advocate assigns the credence

    P(Day = mon, Flip = tails ) = P(Day = mon , Flip = heads) = P(Day = tue, Flip = tails) = 1/3

    Then his unconditioned credence that it is Monday is

    P(Day = mon) = 2/3

    which is clearly not the expression of indifference.

    Then for the sake of consistency he must ignore the actual information he is given about the coin, by using

    P(Flip = tails) = 2/3

    Not only does this credence assignment ignore the physical probability of heads that is known to him in advance, but it also indicates different credence assignments for outcomes of Flip when assuming ignorance of Flip! - in flagrant contradiction to PoI.
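
    The implied marginals are easy to check mechanically (a Haskell sketch of the thirder joint assignment above):

    joint :: [((String, String), Rational)]    -- ((Day, Flip), credence)
    joint = [ (("mon", "heads"), 1/3)
            , (("mon", "tails"), 1/3)
            , (("tue", "tails"), 1/3) ]

    marginal :: ((String, String) -> String) -> String -> Rational
    marginal proj val = sum [ p | (o, p) <- joint, proj o == val ]

    -- marginal fst "mon"   == 2/3  (not an expression of indifference over days)
    -- marginal snd "tails" == 2/3  (conflicts with the known 1/2 propensity)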

    In general, when a PoI advocate uses PoI for specifying an unknown distribution, they don't possess the epistemic resources to determine the credences that they are implying regarding the values of related variables. And when their implied credences are pointed out to them, they are likely to withdraw their initial credence estimates.

    IMO, there isn't a good reason for using the principle of indifference, not even in the case of describing credences. For handling ignorance, it is always better to assign probability intervals than to assign precise probabilities.
  • Sleeping Beauty Problem
    I would say that neither the halfers' nor the thirders' position concerns credence. Plus the concept of "credence" is a minefield of unresolved philosophical difficulty, which is usually overlooked by advocates of probabilism.

    The halfer's position is based on the common-sense postulate that the physical properties and propensities of objects remain unchanged when their states are conditioned upon irrelevant information. Given acceptance of that postulate plus the premise of a fair coin, why shouldn't the halfer insist that they are talking about reality, as opposed to their mere mental state?

    The thirder's position is indeed a ratio of possible worlds, but there is scant evidence to support the idea that credences are accurately represented by taking ratios over possible worlds.

    For example, if one were to ask the average person to express their credences regarding the outcome of a two-horse race that they know absolutely nothing about, they will simply say "I don't know who will win" and refrain from assigning any odds, equal or otherwise. They will also tend to prefer bets for which they know the physical probabilities are 50/50 over bets about which they are totally ignorant.

    Fundamentally, it is difficult to even know what the object of a credence or belief is, which relates to an overlooked problem of epistemology: If every belief is fully explained by its natural causes, then how are false beliefs possible and where does the error reside? In the mind of the speaker who expresses their beliefs, or in the mind of the listener who interprets the speaker?

    Whatever SB says when interrogated upon awakening, if she is assumed to have amnesia then how are her interrogators supposed to interpret her "credence estimate"? Why should they consider her uttered "credence estimate" as being qualitatively different from yawning?
  • A Case for Analytic Idealism
    I had a hard time understanding what you were conveying, as I think we just use terminology differently, so let me ask some questions pertaining thereto.

    Subjective Idealism and solipsism aren't ideas

    By “idea”, I was meaning it in the colloquial sense of the term. Technically, those are metaphysical theories. One is a sub-type of idealism that does not hold there is an objective reality but, rather, that all that exists is to perceive or to be perceived (e.g., the tree doesn’t exist other than an image within your perception). The other is the theory that all that exists is one’s own mind, or, epistemically speaking, one can only know the existence of their own mind.
    Bob Ross

    The question here concerns whether realism and idealism are truth-apt synthetic propositions, with each representing competing theories that describe different and incompatible possible worlds, or whether they are analytic tautologies that are referring to different aspects of the logic of language and are merely talking past one another. Notably, realists and idealists each consider their stances to be irrefutable parts of commonsense, and are concerned with understanding the semantics of language in opposite directions from opposite starting points.

    "Being is perception" is an unavoidable tautology of non-representational idealism that is necessarily appealed to whenever an observer interprets a physical proposition in terms of his personal experiences

    I don’t see how this is true. For example, both physicalists and analytic idealists hold that being is more than perception. No one inevitably speaking in terms of their experiences forcing “being” to be perception. Why would that be the case?
    Bob Ross

    If your "analytic" idealism abandons "esse is percipi" how does it differ from representational realism?

    Berkeley's subjective idealism was already "analytic" in the sense that he postulated that observation and conception is tautologically equivalent to existence. Many philosophers misunderstand this principle.

    For example, they take the principle to imply that unobserved items disappear from existence. But this doesn't follow from the principle, for according to the principle the claim that unobserved objects exist isn't false, but nonsensical.

    What Berkeley's principle is actually saying is rather trivial: that only what is observed or conceived can be thought or talked about. If a realist asserts that "unperceived objects such as quarks exist", Berkeley wouldn't contradict the content of the assertion but remind the realist that his use of "unperceived" requires elaboration, until it refers to something thought or perceived, for the assertion to become sensical.
  • Two envelopes problem
    I'm not redefining y, the switching argument is. I'm showing you what it covertly does.
    Michael

    I haven't redefined y, and still derive the switching argument from its premises.
  • Two envelopes problem
    Notice that in E(z) the variable y stands for 3 different values. In one case it stands for the value of the smaller envelope (10), in another case it stands for the value of the larger envelope (20), and in the final case it stands for a different value entirely (12).
    Michael

    Yes, I see that. So why are you redefining y?

    Your two definitions of E[z] aren't equivalent. The first one is implicitly referring to E[z | y], the conditional expectation of z given the value of envelope y, whereas the second definition refers to E[z | x], the conditional expectation of z given the value of the smallest envelope x.
  • Two envelopes problem
    I believe it does, as I showed above. It covertly redefines y such that when it concludes E(z) = 5y/4, y is no longer the value of the chosen envelope.
    Michael


    No covert redefinitions of y are happening, just covert pathological expectations over infinity. For the problem implicitly assumes the improper prior

    P(x) ∝ 1 for all x > 0

    This is an improper prior that has a divergent mean.

    The distribution of envelope values conditioned on the smallest value is symmetric, consisting of two spikes on the infinite plane at (x,2x) and (2x,x)

    P(y, z | x) = 1/2 δ(y - x) δ(z - 2x) + 1/2 δ(y - 2x) δ(z - x)

    Summing out, say, z gives

    P(y | x) = 1/2 δ(y - x) + 1/2 δ(y - 2x)

    The two previous formulas imply the conditional

    P(z | y, x) = P(y, z | x) / P(y | x)

    Since P(x) is improper, one cannot "integrate out" x from the last equation and get a well-defined distribution. Instead one gets a function defined up to proportionality

    f(z | y) ∝ 1/2 δ(z - 2y) + 1/2 δ(z - y/2)

    Taking the expectation of z over this function as if it were a distribution gives

    E[z | y] = 1/2 (2y) + 1/2 (y/2) = 5y/4

    Hence by symmetry

    E[y | z] = 5z/4
    No redefinitions, covert or otherwise, of any variable were involved here.

    But I perhaps should correct myself a bit; the expectation values aren't contradictory in the sense of

    E[z] > E[y] and E[y] > E[z]

    rather, the expectations imply contradictory strategies in the context of utility maximisation.
  • Two envelopes problem


    Yes, my previous criticism was directed more towards the confusing use of notation with respect to expectation values. It is clearer to condition expectation values on the variables and information they assume.

    But the switching argument isn't invalid; its contradictory expectation values don't appeal to faulty reasoning, given acceptance of the premises. Rather, the switching argument is unsound, for among its premises is an improper prior distribution over x, the smallest amount of money in an envelope. And this premise isn't possible in a finite universe.

    Intuitively, its contradictory conclusions make sense: if the smallest amount of money in an envelope could be any amount of money, and if the prior distribution over the smallest amount of money is sufficiently uniform, then whatever value is revealed in your envelope, the value of the other envelope is likelier to be higher.
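
    To see how the contradiction hinges on the improper prior, here is a quick sketch (Haskell, assuming a toy proper prior in which the smallest amount is uniform on 1..100): the conditional expectation of the other envelope equals 5y/4 only in the interior of the support, and switching is expected to lose at the top of it.

    smallest :: [Rational]
    smallest = [1 .. 100]

    -- joint outcomes (y, z): y is the chosen envelope, z the other,
    -- with the pairings (x, 2x) and (2x, x) equally likely
    outcomes :: [(Rational, Rational)]
    outcomes = concat [ [(x, 2 * x), (2 * x, x)] | x <- smallest ]

    expectedOther :: Rational -> Rational
    expectedOther y = sum zs / fromIntegral (length zs)
      where zs = [ z | (y', z) <- outcomes, y' == y ]

    -- expectedOther 10  == 25/2  (= 5/4 * 10)
    -- expectedOther 200 == 100   (holding the maximum, switching surely halves)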