• Free Will
    You said it like the compatibilist model of the world has retro causality, but I think instead it's more accurate to say that your model of compatibilism has retro causality.flannel jesus

    Retro-causality is a generally vague and controversial concept, to the point that it seems to rule very little in or out (recalling that QM, which most physicists consider to be forwards-directed, has an innocuous retro-causal interpretation). Causal conventionalists like Hume, for instance, even rule out retro-causality as a matter of tautology, which is why I didn't want to appeal to retro-causality as a hypothesis (which some might argue is formally meaningless), but to philosophical and empirical intuitions, naive if you like, that align with the idea.

    It might have been better if I had never used the term. What is of underlying importance to compatibilism, in my view, isn't the existence of retro-causation (whatever that is supposed to mean), but the treatment of material implication as being symmetric, i.e. of the form A <--> B, which can be interpreted in a number of ways, including Bertrand Russell's directionless "no causality" view, super-determinism and circular causality. In these cases, it is accepted that there exists synchronisation between a so-called "cause" and a so-called "effect", while the control between "cause" and "effect" is considered to be either bidirectional, directional but a matter of perspective, or absent in both directions.

    I don't know the background motivation of the OP, but the problem that was presented is very reminiscent of the thought experiments that physicists use when selecting among interpretations of QM, which frequently give rise to debates over free will in magazines such as Scientific American. In fact the OP's thought experiment is more or less identical to premises called "quantum conspiracies", namely the premise that nature has already decided on the properties that physicists will measure, such that physics experiments cannot reveal anything about nature's properties.
  • Free Will
    Why? Says who?flannel jesus

    Yours truly. Tell me how I've gone wrong.
  • Free Will
    I still have no clue why you think compatibilism and retro causality have anything to do with each otherflannel jesus

    Because according to the classical understanding of causality, the past is both fixed and exactly determines the future, which prevents the possibility of free choice for any agent who comes after the initial cause.

    Compatibilism doesn't make sense as a concept unless the past is in some way considered to be ontologically dependent upon the future. Being committed to the appearance of retrocausation isn't to be committed to retro-causation, and super-determinism might even be considered as appearing retro-causal.
  • Free Will
    but what does retro casualty have to do with any of that? I know a lot of compatibilists, and they don't have a model of the world involving retro causality.flannel jesus

    Earlier I was mentioning models that might appear retro-causal, in the sense that the model considers facts about the past to be ontologically dependent on present and future observations, i.e. history is considered not to exist unless 'enabled' by the observations of 'future' observers, which isn't to imply that observers get to control the content of history.

    Roguelike video games come to mind, in which a computer game generates an on-the-fly history of a world around the player, in direct response to the player's actions. The player's possible actions, which he himself controls, are by definition considered to be "future-directed" causal events relative to the player, e.g. opening a door, digging a hole, killing a monster, etc. The information about the world that those actions reveal, but which those actions aren't said to "cause", is information that appears retro-causal if it is considered to be nature's on-the-fly response to the player's actions.
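    The on-the-fly history generation described above can be sketched as a toy "lazy world" in Python (an illustrative sketch of my own; the class and names are hypothetical and not taken from any actual game engine). Tiles of the world do not exist until a player action first observes them, yet what is revealed is deterministic given the world's seed, so the player enables the history without controlling its content:

```python
import random

class LazyWorld:
    """A world whose tiles come into existence only when observed."""

    def __init__(self, seed):
        self.seed = seed
        self.revealed = {}  # (x, y) -> tile, filled in on demand

    def observe(self, x, y):
        """Generate the tile at (x, y) on first observation.

        The result is deterministic given the seed: the player's action
        enables the tile's 'history' but doesn't control its content.
        """
        if (x, y) not in self.revealed:
            rng = random.Random(self.seed * 1_000_003 + x * 1009 + y)
            self.revealed[(x, y)] = rng.choice(["floor", "wall", "monster"])
        return self.revealed[(x, y)]
```

    Before `observe(x, y)` is called, there is no fact of the matter (within the program) about what lies at `(x, y)`; afterwards, the tile behaves as if it had been there all along.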
  • Free Will
    What does "compatibilist" mean in this sentence? It doesn't look like it means the usual free-will/determinism kind of compatibilism, but I'm stumped at what else it could mean.flannel jesus

    I was referring to the usual kind of compatibilism. The problem of compatibilism, at least as I understand it, is how to reconcile two seemingly contradictory premises:

    1) The principle of causal determinism - by which the future, when conditioned upon a hypothetical total knowledge of the past, is believed to consist of precisely one possible world.

    2) The metaphysical existence of choice and possibility for agents who interact with the world.

    In my opinion, many self-described compatibilists are in fact deniers of either 1) or 2), and so don't qualify as "compatibilist". For example, they might hold to 1) but interpret possibilities as epistemic rather than ontic. Or they might hold to 2) whilst apparently forgetting their alleged commitment to 1), or they might simply fail to provide any reconciliation of their beliefs in 1) and 2).

    To actually commit to both 1) and 2) in a way that reconciles them requires a radical re-conception of time and causation along the lines of presentism, such that the logical implications of causal determinism can be either fully, or at least partly, recoverable from the interactive choice principles of 2).
  • Free Will
    Models of causality that are "compatibilist" are those which appear to be retro-causal due to rejecting the antecedent-consequent distinction. These models aren't built upon directed conditionals of the form A --> B, but upon bi-conditionals of the form A <--> B. In other words, these are models whose logic is called "circular reasoning".

    To give a simplified version of OP's thought experiment, let

    A := Alice's secret prediction on Monday, concerning what Bob will do on Tuesday

    B := Bob's actions on Tuesday, without Bob knowing about Alice's prediction, which he later learns about on Wednesday.

    Suppose that Bob believes from past experience that

    1) Alice's secret predictions about him are always true.

    2) Her predictions cannot be explained by a hidden confounding variable that influences both her prediction and what he does.

    As a result, Bob accepts a conditional of the form A --> B. If Bob is to be a compatibilist, then he must also argue for a "retro-causal" relation of the form B --> A. This means that Bob must assume that, from his perspective, Alice's "prediction on Monday" actually occurs on 'Bob's Wednesday', when he learns about her prediction, which is after he dug the path. Thus B --> A refers to Bob's actions "causing" Alice's "earlier" prediction. Since Bob only observes Alice's prediction after the fact of his actions, this compatibilist interpretation is perfectly consistent.
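    To make the perspective-relative ordering concrete, here is a minimal sketch in Python (my own illustration; the day numbers are just stand-ins for the story above). The same two events carry two orderings: calendar order, and the order in which Bob comes to know of them:

```python
# Each event has a calendar day and the day on which Bob learns of it.
events = [
    ("alice_prediction", {"calendar_day": 1, "bob_learns_day": 3}),  # Monday / Wednesday
    ("bob_action",       {"calendar_day": 2, "bob_learns_day": 2}),  # Tuesday / Tuesday
]

# Sorting the same events by the two different time coordinates
# yields opposite orderings of prediction and action.
calendar_order = [name for name, t in sorted(events, key=lambda e: e[1]["calendar_day"])]
bob_order = [name for name, t in sorted(events, key=lambda e: e[1]["bob_learns_day"])]
```

    Relative to Bob's epistemic ordering, B "precedes" A, which is all that the compatibilist reading of B --> A requires.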
  • Free Will
    It's a good example, and it relates to the idea that the direction of causality is relative.

    The classical conception of causality, which assumes that the causal order is independent of perspective, does not possess the notion of synchronized events, in which the existence of an event necessitates the existence or non-existence of another event, but without either event being alleged to influence the other.

    If the notion of causality is adjusted so as to include synchronised events, we automatically get
    1) The notion of non-local quantum entanglement.
    2) A reconciliation of Bertrand Russell's view that causality doesn't exist, with the interventionist view of causality as used in the sciences.

    So in your previous example, the man's path can be viewed as being synchronised with the independent observation that the rest of the field is white, even though neither event is the cause of the other.
  • Quantum Physics, Qualia and the Philosophy of Wittgenstein: How Do Ideas Compare or Contrast?
    I think you misrepresent Hawking's intentions sime. In "The Grand Design" he presents himself as having an adequate understanding of philosophical problems, and proposes model-dependent realism as a superior alternative to other philosophical theories. So he is not suggesting that philosophical problems are beside the point, he is proposing model-dependent realism as a solution to those philosophical problems.Metaphysician Undercover

    I think Hawking misrepresents his own intentions, judging by the wildly different interpretations I have seen of MDR, for I've witnessed both realists and idealists defending MDR.

    At the very least, the notion is too ill-defined to be distinguished from other forms of instrumentalism, or to allow critical comparison against other epistemological stances such as positivism and Bayesian epistemology. Notably, MDR isn't mentioned even once in the SEP article discussing models in science.

    MDR doesn't explain the relationship of theoretical models to their observational semantics and truth-conditions, nor does it give any guidance as to how and when to select a model among "equally good" alternatives, let alone how to decide what makes a model good, nor does it serve as an explanation for theory-change.

    So what exactly does MDR solve?
  • Quantum Physics, Qualia and the Philosophy of Wittgenstein: How Do Ideas Compare or Contrast?
    Of course. "Mind-dependent reality" doesn't make sense except to idealists / antirealists (who tend to obey poison warning labels, not carry lightning rods in thunderstorms, purchase auto insurance and eat enough in order not to starve).180 Proof

    The most famous western idealist, George Berkeley, wasn't a reality skeptic, precisely because his subjective idealism denied that the hidden variables of scientific theories are representational. He was attacking representationalist materialism, and not the "model dependent realism" that is the narrow concern of a scientific theory under test.

    I think that when talking about idealism, especially in the context of the philosophy of science, it should be interpreted more technically as referring to a strong form of instrumentalism in which both

    1) The hidden variables of scientific theories are considered to be ontologically reducible to observation terms, even if the distinction between hidden variables and observation terms is considered to be epistemically necessary in practice.

    2) The meaning and existence of observation terms is considered to be ontologically dependent on perspective.

    Ironically, Samuel Johnson demonstrated Berkeley's views when kicking a rock - for he demonstrated that his theoretical understanding of a rock's material qualities was ultimately in terms of his experiential perspective. Had he believed that the rock's theoretical properties were irreducible to his experiences, then he would not have believed that kicking a rock could prove its material properties.

    Metaphysical "propositions" are indeed propositions - but they are higher order propositions about theories, as opposed to being first-order propositions that are expressed by those theories.

    Hawking indicated his disinterest in philosophy, which concerns these higher-order propositions, in favour of his interest in physics, where I interpret his "model dependent realism" to refer to his view that the remit of physics is strictly the analysis and testing of first-order physics propositions, as illustrated by his "world picture" analogy.

    So it remains very unclear to me what his philosophical views are, for his "model dependent realism" clearly wasn't meant to be a philosophical proposition, but only to express that, as far as physics is concerned, philosophical questions are beside the point.
  • Quantum Physics, Qualia and the Philosophy of Wittgenstein: How Do Ideas Compare or Contrast?
    Better yet (since, for instance, planck-scale entities are not "empirical"): Hawking-Mlodinow's model-dependent realism.180 Proof

    Model Dependent Realism is a dubious metaphysical proposition in itself. For how does MDR avoid collapsing into either George Berkeley's instrumentalism, or into naive realism?

    When it comes to interpreting any theory T containing a set of observation variables O and a set of 'hidden' variables H, there are only two possibilities

    1) T can eventually be rewritten as another theory T' that only contains O terms, such that the original variables H in T aren't considered to be hidden after all.

    or

    2) T can never be rewritten as a theory containing only O terms.

    MDR comes across as decrying the metaphysical speculation that is inevitable in the face of theoretical uncertainty, and disguising this attitude as epistemological substance.

    BTW, Planck-scale entities are empirical - for why else would they be proposed? Rather, their empirical meaning remains open and is relative to context, as is also the case for "molecules", "optical redness", "cats", "cars" and my dear old mother.

    As the logical positivists discovered, there isn't a clear meta-theoretical distinction between O terms and H terms. For example, electrons are observation terms in the context of using an electron microscope, whereas in macroelectronics they are hidden terms in the sense of having indirect phenomenal relevance.

    So did Hawking believe in mind-independent reality or not?
  • Quantum Physics, Qualia and the Philosophy of Wittgenstein: How Do Ideas Compare or Contrast?
    I suspect that Wittgenstein wasn't interested in QM due to the fact that he considered it to have no metaphysical value. Remarkably, it has taken another fifty years of mathematics and physics research since Wittgenstein's death to "Ordinarize" QM and bring it back down to earth, in the sense of showing how many of QM's so-called "non-classical" properties arise through our ordinary concepts of epistemic uncertainty and linearity with respect to partially observed states of affairs.

    As far as I am aware, the only truly "non-classical" property of QM is non-locality - but there are even shades of non-locality in problems of synchronized computation that aren't motivated by physical phenomena. For example, during the execution of an ordinary computer program, the OS recycles the memory addresses of old values for newly created values. From the perspective of the program that has no direct control over memory allocation, the new values and old values might look as if they are non-locally entangled.
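    The memory-recycling point can be made concrete with a toy allocator in Python (a deliberately simplified sketch of my own, not how any real OS allocator is implemented). A program that can only see addresses has no way to distinguish a fresh value at a recycled address from the old value that once lived there:

```python
class ToyAllocator:
    """A toy heap that recycles freed addresses, as real allocators do."""

    def __init__(self):
        self._next = 0    # next never-used address
        self._free = []   # recycled addresses
        self._heap = {}   # address -> value

    def alloc(self, value):
        """Store a value, preferring a recycled address over a fresh one."""
        if self._free:
            addr = self._free.pop()
        else:
            addr = self._next
            self._next += 1
        self._heap[addr] = value
        return addr

    def free(self, addr):
        """Release an address for reuse by future values."""
        del self._heap[addr]
        self._free.append(addr)

    def read(self, addr):
        return self._heap[addr]
```

    After `free(a)`, a subsequent `alloc` may return the very same address, so new and old values appear spookily linked to any program that identifies values with their addresses.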
  • Rhees on understanding others and Wittgenstein’s "strange" people
    The above can be summarized by saying that "other minds" is an oxymoron.

    Epistemic uncertainty is meaningful when identifying a behavioral disposition - for example one can doubt whether or not another person's behavior is receptive, competent or genuine, but to empathise with another person isn't to feel or think on their behalf, even if the practice of empathy lends insight with respect to their behavioral disposition.

    Nevertheless, one's beliefs concerning a person's behavioural disposition affects the course and extent of one's empathy towards that person.
  • Dualism and Interactionism
    The debate as to whether QM randomness is aleatoric or epistemic is presumably rendered moot by the modern understanding of Quantum Contextuality; for in the case of entangled systems it isn't possible in principle to assign a complete probability distribution over the joint values of every quantum observable, as per the Kochen-Specker theorem.
  • Teleology and Instrumentality
    I feel like you are making a category error with respect to 'final cause'. What do you mean by that term and why does your 'initial cause' make that meaning superfluous?Leontiskos

    To use the SEP's description of Aristotle's causal ontology:

    The material cause or that which is given in reply to the question “What is it made out of?” What is singled out in the answer need not be material objects such as bricks, stones, or planks. By Aristotle’s lights, A and B are the material cause of the syllable BA.

    The formal cause or that which is given in reply to the question “What is it?”. What is singled out in the answer is the essence or the what-it-is-to-be something.

    The efficient cause or that which is given in reply to the question: “Where does change (or motion) come from?”. What is singled out in the answer is the whence of change (or motion).

    The final cause is that which is given in reply to the question: “What is its good?”. What is singled out in the answer is that for the sake of which something is done or takes place.

    Let us define a final cause to be reducible to the first three causes if there exists a causal model that reproduces the effects attributed to the final cause, that consists only of the first three causes applied to one another in an adaptive feedback loop.

    I am asserting that all final causes are reducible in the above sense. This is equivalent to asserting the existence of a computer simulation of all phenomena attributed to final causes.
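    The reducibility claim can be illustrated with a small simulation in Python (an illustrative sketch of my own; the target string, population size and mutation rate are arbitrary stand-ins). Only efficient and material causes appear in the loop - copying, mutation, selection - yet the population converges on the target as if drawn to it by a final cause:

```python
import random

def evolve(target, generations=200, pop_size=20, mutation=0.1, seed=0):
    """Selection + mutation in a feedback loop: no final cause in the model,
    but apparently goal-directed convergence toward `target`."""
    rng = random.Random(seed)
    n = len(target)
    fitness = lambda s: sum(a == b for a, b in zip(s, target))
    pop = [[rng.choice("01") for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # selection (efficient cause)
        children = []
        for _ in range(pop_size - len(parents)):
            parent = rng.choice(parents)        # copying (material cause)
            child = [c if rng.random() > mutation else rng.choice("01")
                     for c in parent]           # mutation (efficient cause)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

    The target enters only as the selection environment, not as a goal represented inside any individual; the appearance of teleology is entirely a product of the feedback loop.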
  • Teleology and Instrumentality
    I feel like you are making a category error with respect to 'final cause'. What do you mean by that term and why does your 'initial cause' make that meaning superfluous? Surely Aristotle was not "speaking informally about evolutionary feedback" when he used the term, given that he was not aware of Darwinian evolution.Leontiskos

    Had Aristotle known about evolution, then he could have explained the regularity of nature without appealing to final causes, appealing only to adaptive feedback in the cycle of life. His arguments don't amount to a proof of the necessity of final causes, but only to a demonstration of the insufficiency of causal models that don't take adaptive feedback into account.
  • Teleology and Instrumentality
    I'm not sure I understand your question. Let's take an example: the final cause of an acorn is an oak tree. Presumably you are positing that there is some "initial cause" which makes this final cause superfluous?Leontiskos

    Yes. To say that an oak tree is a "final cause" of an acorn is to speak informally about the evolutionary feedback that determined the chemistry of oak trees, which when applied to a given acorn refers only to a directed chain of causality whose conclusions are fully determined by initial conditions.
  • Freedom and Process
    Yes, although I would reject the concept of "self determination" as meaningless.

    IMO, "determination" is only meaningful when used in the sense of the verb "to determine", as a practical relational concept referring to the control of one process by another process when they are treated as autonomous entities, as embodied by the concept of multiplayer games.

    By that norm, J. R. R. Tolkien can be said to have "determined" the universe of Middle Earth, a fact that can only be understood by those of us who exist outside of the universe of Middle Earth. But from the perspective of the characters within Middle Earth, mythologized as a closed system, their lives can neither be said to be determined nor undetermined.
  • Artificial intelligence
    The concept called "Intelligence" - whether natural or otherwise - is largely an anthropocentric misconception of Cybernetics that is born out of Cartesian folk-psychology. I think the word "Intelligence" should be eliminated from the sciences.
  • Teleology and Instrumentality
    Can someone give me an example of a "final cause" that cannot be eliminated for initial causes?

    The examples that Aristotle provides, e.g. the functions of the human body, aren't genuine examples, because they can be eliminated for the initial causes of Darwinian evolution.
  • Absolute nothingness is only impossible from the perspective of something
    In type theory as used in computer programming languages, "nothing" is interpreted as a type that has no values but that can be eliminated for values of every type, which is in semantic alignment with your name "nothing implies everything". Practically speaking, this can be interpreted as referring to a function that allocates memory for the values of any input type



    Likewise, for 'destroying' elements of a type, i.e to free their memory, we have



    "Allocate" can therefore be regarded as the 'negation' of "Deallocate" and vice-versa.

    In particular, can be taken to be , which apparently denotes useless instances of the above functions that do nothing (since we officially have no values for to read or write with). This is because the meaning of in our programming language is to denote the mysterious origin and destination of the values in our programs, i.e. the external compiler/OS that our programs depend on for resources, but whose resources cannot be talked about or reasoned with within our programming language.

    Yet suppose the following isomorphism is taken to be true and can be manipulated within our programming language.



    This equivalence is regarded as true in models of classical linear logic. Then this implies that can be iteratively expanded as a potentially infinite list of ... values of some sort? - no, for that possibility was already forbidden - then of memory addresses for storing values! In which case, if we allowed our programming language to manipulate memory addresses directly (e.g. as in C/C++), then we can interpret within our programming language, and consequently give meaning to and take control of allocate/deallocate in the intuitive and practical way, for example by defining







    And paired with a respective definition for 'deallocate' that destroys these values by freeing their corresponding memory addresses and then returning them.

    So to summarise the above examples, "Nothing" initially referred to the mysterious effect of a destroyed value of a type and to the mysterious cause of an initialized value of a type within a programming language that could only talk about values. But by expanding the semantics of our language, we eliminated dead talk of "Nothingness" for talk about the interactive duality of "type values" and "memory addresses".

    I think the lesson is, "nothingness" denotes epistemic ignorance and/or semantic scoping and can be eliminated from empirical and rational discourse.
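    For concreteness, the empty type and its elimination rule can be sketched in Python (illustrative only - Python has no real empty type, so a constructor that always raises is my stand-in for a type with no values):

```python
from typing import NoReturn

class Void:
    """The empty type: it has no values, so construction always fails."""

    def __init__(self) -> NoReturn:
        raise TypeError("Void has no values")

def absurd(v: Void) -> object:
    """Ex falso quodlibet: a Void value can be 'eliminated' into any type.

    The body is unreachable, since no Void value can ever be supplied.
    """
    raise AssertionError("unreachable")
```

    Since `Void()` can never succeed, `absurd` is vacuously well-defined for every return type - which is the sense in which "nothing implies everything".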
  • A very basic take on Godel's Incompleteness Theorem
    Listed as 402 pages. Godel's paper is 34 pages. Interestingly, I find only one place where (in my translation) Godel uses "true" or variants: section 3, "The following proposition is true: Theorem VII: Every recursive relation is arithmetical." In the rest it's "provable" or "decidable," and variants.tim wood

    Yes, that would be because Godel wanted to reinforce the point that his theorems have entirely constructive proofs that don't appeal to the law of excluded middle or to meta-mathematical assumptions. Unfortunately, that point was lost on Douglas Hofstadter, who only caused misunderstanding of the theorems by linking them to the meta-mathematical concept of self-reference, which isn't of any proof-theoretic relevance to the theorems.

    Peter Smith's book is much longer because it also covers related topics that developed after Godel and are of relevance to his now-ancient theorems, even if the book is dated in certain respects and unhelpfully assumes a background of classical logic, presumably for historical reasons.

    The usual locution I find is that G is undecidable, but because G says it's undecidable, it's true; this a so-called metamathematical proof being outside the system in which G is created.tim wood

    Well, syntactical incompleteness does at least imply that any system of mathematical logic is an unfinishable open system, such that any sufficiently expressive axiomatic system can be consistently extended in an infinite number of mutually inconsistent ways, leading to an infinite number of conflicting rival interpretations of any theory of arithmetic. A problem with your meta-mathematical heuristic however, aside from the fact that it isn't relevant to Godel's constructive proof of the theorems, is that it assumes the existence of mathematical truth prior to system construction - which may or may not be epistemically appropriate in a given situation.

    Essentially, if one accepts Godel's completeness conjecture, namely the conjecture that first-order logic is sound and complete with respect to the theorems that are tautologies, i.e. the sentences that are satisfied by every model of the logic, then either a theorem is syntactically provable in such a logic or there must exist a model in which the theorem is false. So the heuristic you mention is a potentially useful heuristic for automated theorem proving - if the completeness conjecture is assumed, which is generally appropriate for classical logic, then a theorem can be disproved by merely showing that its negation is satisfiable in a model of the logic, meaning that one can "observe" the theory to be false without having to laboriously work "within the system" to produce a syntactical proof of the negated theorem.

    Syntactically however, G is merely the fixed-point of an alternating fixed-point equation, whose iteration increases the number of negation signs at the front of the fixed-point. It is structurally similar to a coalgebra representing a potentially infinite list of oscillating boolean values - objects that are normally called "impredicative" rather than "self-referencing" in the literal sense implied by Hofstadter.
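    A toy rendering of that structure in Python (my own sketch, and only an analogy - not Godel's construction): a generator that unfolds the alternating fixed-point one negation at a time, never yielding a finished object, only ever a finite prefix on demand:

```python
def alternating_fixed_point():
    """Coinductively unfold an oscillating stream of truth values,
    one 'negation' per step; only finite prefixes are ever observed."""
    value = True
    while True:
        yield value
        value = not value
```

    Like a coalgebraic stream, the object is characterised by how it may be observed, not by a completed totality.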
  • A very basic take on Godel's Incompleteness Theorem
    The book "An Introduction to Godel's Theorems" by Peter Smith would be my recommendation for an eager beginner.

    Alternatively, Wikipedia's page on the Halting Problem conveys in the simplest and quickest fashion nearly all of the relevant computational and philosophical implications of Godelian incompleteness, without the torture of Godel numbering.

    In fact, the popular misconception of Godel's theorem as amounting to saying that "a sentence is 'true', assuming that it is unprovable" is a more apt (but still ultimately misleading) conception of a weaker form of incompleteness that is directly implied by the undecidability of the Halting problem, namely that there cannot be a sound and complete axiomatization of the natural numbers, since that would imply the existence of a computation on the natural numbers that halts if it doesn't halt.
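    The diagonal argument behind that weaker result can be sketched in a few lines of Python (my own illustration; raising `RuntimeError` is a stand-in for actual divergence, since a genuinely looping program couldn't be demonstrated):

```python
def make_diagonal(halts):
    """Given any claimed halting oracle `halts`, build the program that
    does the opposite of whatever the oracle predicts about it."""
    def g():
        if halts(g):
            raise RuntimeError("diverges")  # stand-in for looping forever
        return "halted"
    return g

# Whatever a candidate oracle answers about its own diagonal program, it is wrong:
g_no = make_diagonal(lambda f: False)   # oracle says "loops", yet g_no halts
g_yes = make_diagonal(lambda f: True)   # oracle says "halts", yet g_yes diverges
```

    No definition of `halts` escapes this trap, which is the undecidability of the Halting problem in miniature.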

    Godel's actual proof, that doesn't appeal to a model or intended interpretation of the axioms of Peano Arithmetic, is a stronger result since it directly proves syntactical incompleteness without appealing to any notion of soundness. His proof can be intuited geometrically and trivially, as showing that the forest of mathematical proofs that constitutes Peano Arithmetic doesn't describe the background that isn't part of the forest of Peano Arithmetic. Is that really an interesting and deep result?

    So the heuristic argument of there being a "true but unprovable statement" that the public use as a heuristic for understanding or remembering Godel's theorem is very misleading. Their catchphrase is more suitable as a moderately misleading heuristic for remembering the proof of the Halting problem (which also doesn't refer to models, but to external assumptions concerning halting conditions, that with much charity might be interpreted as comprising a weak model of something or other).

    Many mathematicians and logicians cannot themselves be bothered to master Godel's incompleteness proof, for there isn't any payoff in doing so, and will probably content themselves with a technical understanding of the weaker version I mentioned above, which is straightforward to prove and remember, and loses practically nothing.
  • Kripke's skeptical challenge
    I am probably not understanding this at all correctly because its too technical for me but it sounds like its bolstering the Kripke's skepticism rather than really solving anything.Apustimelogist

    Alternative foundations for general mathematics and computing can't solve Kripkean skepticism in the sense of providing stronger foundations that rule out unorthodox rival interpretations of mathematical concepts - but they can partially dissolve the skepticism by

    1) Refactoring the principles of logic, so as to accommodate finer-grained distinctions in mathematical logic, particularly with regard to a) distinguishing intensional vs extensional concepts, b) distinguishing between the process of constructing data and communicating it, versus the process of receiving data and deconstructing it, and c) distinguishing between the various meanings of finitism that classical logic conflates.

    2) Weakening foundations so as to assume less to begin with. This replaces skepticism with semantic under-determination. E.g. if "plus" is considered to be a finitary concept that does not possess a priori definite meaning to begin with, then Kripkean doubt about its meaning doesn't make as much sense.

    In summary, a good logic from the perspective of computer science describes the process of mathematical reasoning intuitively and practically in terms of a finite series of interactions between agents playing a partially understood multi-player game, in which no agent is the sole authority regarding the meaning and rules of the game, nor does any agent have omniscient knowledge regarding the eventual outcome of following a given strategy.
  • Kripke's skeptical challenge
    Kripke allows that mathematicians can adequately specify the rules of addition. That's not being called into question.frank

    That depends on the sense of adequacy you are referring to. The question is, how can an intensional definition of addition, such as an inductive definition that is finitely specified and only provides a rule for performing a single step of computation, imply an unambiguous and extensionally infinite table of values? As Kripke himself pointed out (IIRC), as the numbers to be added get very large, there is increasing uncertainty as to what a "correct" calculation means, for finite computers and human beings can only cognize truncations of the plus function. And a "gold standard" for the extensional meaning of addition, up to arguments large enough for most human purposes, hasn't been physically defined by convention as it has for the meaning of a second of time.
  • Kripke's skeptical challenge
    Can you demonstrate how quus is dealt with by the approaches you have said?Apustimelogist

    A philosophical commonality of those approaches is that mathematical objects are treated as being finite. So for example, plus is permitted to exist intensionally, in the sense of an algorithmic specification, but not in the extensional sense of a completed table (unlike the quus function). Likewise, all sequences are treated as being necessarily finite and generally unfinished. This apparent restriction is compensated for by allowing objects to grow over time (technically, growable objects are describable using what are now referred to as coalgebras and coinduction).

    Intuitionism would call a sequence of n numbers x1,x2,...,xn that were generated by iteratively applying the "plus" function a lawful sequence of n terms. But another sequence consisting of exactly the same numbers x1,x2,...,xn, that wasn't assumed to be generated by some underlying function, would be considered a lawless sequence. In the case of quus, it can be considered to be a lawless sequence, since it is describable as a table of exactly 57 rows and 57 columns. So if that table is defined to mean "quus" as a matter of tautology, then skepticism as to what the function quus refers to can only concern operational assumptions regarding how the table should be evaluated - however, skepticism of this sort is accommodated by intuitionism, since intuitionism doesn't consider mathematical truth to be a priori, and the properties of unfinished sequences are allowed to change over time. The situation with quus is at least a partial improvement upon plus, whose table cannot even be explicitly stated. Quus is more or less a truncation of plus, and roughly speaking, intuitionism considers such "truncations" as constituting the basis of mathematical analysis.
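    For reference, quus is small enough to state completely in code (Kripke's example, with the conventional threshold of 57; the function name and Python rendering are mine):

```python
def quus(x, y):
    """Kripke's 'quus': agrees with plus when both arguments are below 57,
    and answers 5 otherwise."""
    return x + y if x < 57 and y < 57 else 5
```

    Unlike plus, the entire extension of quus below the threshold fits in a finite 57-by-57 table, which is what makes it a candidate lawless truncation in the intuitionistic sense.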


    Additionally, in Linear Logic, terms and constants can only be used once. So two uses of a function demand two separate and distinct copies of that function. Linear Logic includes a copy operation (the so-called exponential rules), meaning that the logic can be used without loss of generality, but this forces the mathematician to justify and account for his use of resources, in the same way that engineers must always consider the possibility of numerical overflow and hardware failure.
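    As a loose illustration of the use-once discipline (Python has no linear type system, so this is only an assumed runtime analogue, with names of my own invention):

    ```python
    class Linear:
        """A value that must be consumed exactly once (runtime check only)."""

        def __init__(self, value):
            self._value = value
            self._used = False

        def consume(self):
            # A second use of the same copy is an error, as in Linear Logic.
            if self._used:
                raise RuntimeError("linear value used more than once")
            self._used = True
            return self._value

        def copy(self):
            # Analogue of the exponential rules: duplication must be
            # requested explicitly, making resource usage accountable.
            if self._used:
                raise RuntimeError("cannot copy a consumed value")
            return Linear(self._value), Linear(self._value)

    f = Linear(41)
    a, b = f.copy()         # two separate and distinct copies
    print(a.consume() + 1)  # 42; consuming a again would raise an error
    ```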
  • The meaning of meaning?
    It's interesting to speculate what effects mind-reading technology could have on our linguistic conventions. It is conceivable that the use of white-box methods for directly probing correlations between speakers' mental states and their use of linguistic expressions could lead to an enrichment of ordinary communication. E.g., the English language might introduce a public name for the mysterious sensation correlated with rising blood pressure that everyone had hitherto ignored, apart from the private linguist who scribbled "S" in his diary.
  • Kripke's skeptical challenge
    Kripke's mistake (assuming I am recalling his position correctly) was phrasing the skepticism as a circular question addressed to a mathematician, who is asked to defend the validity of his judgements, as in

    "How do you know that your present usage of "plus" is in accordance with your previous usage of "plus" ?"

    That question is easily viewed as nonsensical, since it can be interpreted as asking a person to question their own sanity. Similarly bad phrasing, leading to pointlessly circular discussion, is found throughout the philosophy literature on private language arguments.

    Different conceptions of logic and semantics cope variously with the question of meaning skepticism. For instance, Classical Logic with set-theoretic semantics, as in Model Theory, lends itself to the idea of semantics being static, a priori, unambiguous, infinite and transcendent of the finitely observed behavior of the formal system that is said to correspond to the semantics. Such "picture theories" of meaning, which place semantics in an exalted position above the cut and thrust of computation and IO, naturally provoke skepticism as to the relevance, utility and even existence of semantics, as evidenced by the formalists of the last century. Similarly, I think Kripke's (misconceived) interpretation of Wittgenstein was partly born out of this obsolete semantic tradition, which he was part of but couldn't see beyond, due to the lack of a formalized alternative approach.

    Intuitionism copes better with semantic under-determination, because it assumes less meaning to begin with; it interprets infinity as referring to unspecified finite extension of indefinite length, implying that all data is finite and that all symbolic meanings have a finite shelf-life. So it doesn't consider mathematics or logic to consist of an actually infinite number of semantic facts that finite linguistic practices must miraculously account for. Consequently, intuitionism permits a tighter identification of logic with a suitably non-standard version of set-theory, narrowing the opportunity for semantic skepticism.

    A more modern alternative is to place syntax and semantics on an exactly equal footing, by considering them to refer to opposing sides of a dialogue between two or more agents, where what counts as syntax and what counts as semantics is a matter of perspective, depending on who is asking questions and who is answering them. Girard's Ludics is a formalisation of this pragmatic idea of meaning as interaction, and it is of relevance to the rapidly emerging discipline of interactively-typed languages and interactive AI, in which no individual party has full control or understanding of the language they are using, whereupon the meaning of a type or symbol is identified with its observational history of past interactions.
  • A Wittgenstein Commentary
    A central question of concern when it comes to interpreting the late Wittgenstein and what he meant by "private language" is the following issue:

    " ..Are certain norms valid, or in force, because certain things such as linguistic expressions and intentional states have certain meanings/contents? Or do such things have meaning/content because some norms are in force? We shall distinguish between meaning/content “engendered” (ME/CE) normativity and meaning/content determining (MD/CD) normativity (cf. Glüer & Wikforss 2009). MD/CD norms are such that they metaphysically determine, ground, or constitute meaning/content; here, the norms are prior. ME/CE normativity is normativity engendered by, or consequent upon, meaning/content, regardless of how the latter is determined. "


    https://plato.stanford.edu/entries/meaning-normativity/

    To understand this question, one must decide to what extent norms and intentional states are grammatically related, i.e. to what extent norms are considered to be part of the very meaning of "intentional states" (or vice versa).

    Take the following examples:

    Suppose a zoologist asserts that a particular species of animal exhibits signs of intentionality. To what extent should the zoologist be interpreted as referring to the norms of zoology, as opposed to the actual "perspective" of the animal in question?

    What if an AI engineer asserts that ChatGPT123 demonstrates intentionality. Is their answer more concerned with the norms of AI engineering than in the purportedly more descriptive zoological science example?

    And what of our ordinary attribution of other minds? If Alice recognizes that Bob has a mind that is distinct from hers, to what extent should she be interpreted as referring to her personal state of empathy, as opposed to the state of Bob's actual behaviour?

    In the case of Wittgenstein's "Beetle in the Box", the specific issue is the relationship between the customs of language, which are invariant to personal circumstances and perspective and thus incapable of representing 'private objects', on the one hand, and the language users who express their personal perspectives using such aperspectival customs on the other. The analogy of a children's playground comes to mind. The function of swings, climbing frames and roundabouts is to accommodate the needs and perspectives of children, in a manner that abides by norms of health and safety. But obviously it would be a terrible category error to attempt to explain why and how children use playgrounds by appealing to the design of the playground and the norms of the controlling adults. Likewise, the beetle-in-the-box argument wasn't made to deny the semantic importance of intentional content, but to stress how social customs, such as the custom of physical language, have evolved to facilitate the expression of intentional content.
  • Belief
    Truth conditional semantics implies that beliefs don't exist, for beliefs are teleological entities.

    The meaning of belief-behaviour isn't reducible to its effects, for obvious causal reasons. Objectively, the meaning of an agent's "beliefs" can only refer to their stimulus-response dispositions, which are in turn explained causally, without appealing to teleology.

    When an external observer interprets a performing agent as having beliefs, those "beliefs" only exist in the prejudiced mind of the external observer who interprets the agent using normative teleological principles.
  • The Worldly Foolishness of Philosophy
    Is the philosopher a life coach? A spiritual advisor? This “philosopher” is analogous to a nutritional supplement, which is to say a piece of technology, tested qualitatively like a new painkiller or piece of music in terms of the feeling it gives us.

    Or can we really take seriously the idea that a philosopher is essentially “scientific” in some radical, foundational sense? Is the philosopher a kind of “pure mathematician” of existence as a whole? I say “pure” because I want to highlight an impractical interest in truth for its own sake. Even an unpleasant truth is still good, because it is possessed as truth, because it’s worse to be confused or deceived.
    plaque flag

    Mathematicians are also "spiritual advisors", for mathematics and logic are normative disciplines. Going along with Hacker's interpretation of Wittgenstein, I think it is most useful to consider the academic disciplines of pure mathematics and logic as not having subject matter in themselves, but as defining semantic norms of representation that facilitate the translation, coordination and comparison of language games that do possess subject matter. Many language games that are partly mathematical, e.g. the natural sciences, possess some degree of subject matter to the extent that they are applied disciplines that establish synthetic propositions.

    As for philosophy itself, I think it is reducible to every other subject and their interrelations.
  • Belief
    In AI, the word "belief" is principally used to denote an agent's understanding of cause and effect in terms of his "Cartesian" mental model of the world. Here, a "belief" can be identified as a map involving the following types, which together comprise the agent's internal epistemic state.

    Belief : (History of Observations x Set of Possible Actions) --> Power Set of Observations

    "History of Observations" comprises the agent's understanding of his external world, and refers to his memory of observations up to and including his present observations. Conditioned on his observation history, he infers the consequences of performing a possible action. In order to accommodate causal and epistemic uncertainty, the agent will in general map a possible action to a set of potential observations, hence the use of the power set on the right hand side.
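    The type above can be sketched in Python (the names and the toy example are my own; a frozenset of potential observations stands in for the power set):

    ```python
    from typing import Callable, FrozenSet, Tuple

    Observation = str
    Action = str
    History = Tuple[Observation, ...]  # memory of observations up to the present

    # Belief : (History of Observations x Set of Possible Actions)
    #          --> Power Set of Observations
    Belief = Callable[[History, Action], FrozenSet[Observation]]

    def toy_belief(history: History, action: Action) -> FrozenSet[Observation]:
        # Conditioned on the history, the agent maps a possible action to a
        # SET of potential observations, reflecting causal and epistemic
        # uncertainty about the consequences of acting.
        if action == "open door" and "key found" in history:
            return frozenset({"door opens"})
        if action == "open door":
            return frozenset({"door stuck", "door opens"})
        return frozenset({"nothing happens"})

    print(toy_belief(("key found",), "open door"))  # frozenset({'door opens'})
    ```

    Note that nothing in the map refers to the world as outsiders understand it; both sides of the arrow are built from the agent's own observations.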

    Nowhere in the above definition is the world, as we outsiders understand it, referred to by the agent's beliefs, for the agent's beliefs are understood purely in terms of the agent's mental functioning and stimulus responses. It should also be understood that from the point of view of the agent, his observation history is his "external world".

    It makes no sense for onlookers to interpret an agent as referring to anything other than his memories and sensory surface. As far as an onlooker who is trying to understand the agent's beliefs is concerned, the world that is external to the agent is only relevant and useful in so far as the onlooker lacks knowledge of the agent's mental functioning - in which case the onlooker can infer the agent's mental disposition by observing the agent's behavioural reactions to external stimuli originating in the world of the onlooker. But if an onlooker were to possess perfect knowledge of the agent's mentation, then as far as that onlooker is concerned, the state of the external world would be irrelevant with regards to understanding what the agent believes.

    No matter how much a community of agents might appear to agree (or disagree) that "such and such is true of "the" real world", as far as linguistic designation is concerned they are merely talking past one another and gesticulating towards different and unsharable private worlds corresponding to their individuated mental processes.
  • Sleeping Beauty Problem


    The sample space of any room is { H, (T,F), (T,S) }

    where F and S refer to First Stay and Second Stay, respectively

    with probability measure

    M(H) = 1/2
    M(T,F) = 1/4
    M(T,S) = 1/4

    (a consequence of your assumed prior probabilities)

    Define a variable indicating the stay

    Stay (H) = First
    Stay (T,F) = First
    Stay (T,S) = Second


    P(Stay = First) = M (H) + M(T,F) = 3/4
    P(Stay = Second) = 1/4

    That's all that can be said, unless I've overlooked an aspect of your problem. Which stay it is would give the sitter new information about the coin, but alas she doesn't know this information. To get a hotel, for the purposes of obtaining a statistically interpretable result, simply take the product of the sample spaces for each room, together with the product of each room-specific measure M.
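    The per-room measure above, and the product construction for a hotel of independent rooms, can be checked with a short script (my own sketch of the stated construction):

    ```python
    from fractions import Fraction
    from itertools import product

    # Per-room sample space { H, (T,F), (T,S) } with the stated measure.
    M = {"H": Fraction(1, 2),
         ("T", "F"): Fraction(1, 4),
         ("T", "S"): Fraction(1, 4)}

    # The variable indicating the stay.
    stay = {"H": "First", ("T", "F"): "First", ("T", "S"): "Second"}

    p_first = sum(p for w, p in M.items() if stay[w] == "First")
    print(p_first)  # 3/4, i.e. M(H) + M(T,F)

    # Hotel of n rooms: product of the per-room sample spaces,
    # with the product of the room-specific measures.
    n = 3
    hotel = {ws: Fraction(1, 1) for ws in product(M, repeat=n)}
    for ws in hotel:
        for w in ws:
            hotel[ws] *= M[w]
    print(sum(hotel.values()))  # 1 -- the product measure is normalised
    ```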

    As before, the prior probability of the coin landing heads is given in the premises, and the sitter has no new information upon which to condition the state of the coin, meaning that its probabilities remain unchanged.

    I think the version of the hotel in which subjects are assigned to different rooms on each awakening is more interesting, because it reveals the importance of counterfactual reasoning when the sitter allocates her credences, which thirders blindly ignore in their "picture theory" of credence allocation.
  • Sleeping Beauty Problem
    Now suppose she doesn't make any assumptions regarding her room allocation. What then?

    For all she knows, she might have been hand-picked to babysit a "heads" room, in which case the probability that her room corresponds to a heads coin is 1. Due to possibilities such as this, she should

    1) Refrain from committing to a probability that her room is of a heads coin, until further information is given.

    but nevertheless

    2) Deduce that the probability of her room's coin landing heads is 1/2, given that it is known information.
  • Sleeping Beauty Problem
    How does Sue deduce that "here" indicates she is in a room occupied by a participant scheduled for two awakenings with a probability of 2/3? She draws this conclusion from the fact that twice as many such rooms exist in the facility compared to rooms allocated to participants with only one scheduled awakening. So, from Sue's perspective (based on the exact same evidence she shares with the participant), she concludes that the coin landed tails with a 2/3 probability, despite the coin having a 1/2 propensity to land tails. Sue's credence that the coin landed tails is a consequence of both the initial propensity of the coin to land tails and the propensities of the experimental setup to place her in a room that corresponds to a tails outcome.Pierre-Normand

    Two "tail" rooms refer to the same coin outcome. So when computing the probability that the coin of her room landed heads, she needs to take into consideration the other room she might have occupied.

    So, assuming that her supervisors assign her room using a draw from a uniform distribution, her credence that the coin of her room landed heads should be 1/2.
  • Sleeping Beauty Problem
    Because her credence wasn't meant to reflect the proportion of winning bets but rather the proportion of awakening episodes that occur during an experimental run that results in the (initial) bet being won. Those two aren't the same. If Sleeping Beauty had instead been afforded the opportunity to bet on each awakening episode, then her winning frequencies would exactly match her credence that she is winning that bet.Pierre-Normand

    The question she is asked is in the context of a single trial. It is merely

    "What is your credence now for the proposition that the coin landed heads?"

    The problem refers to the outcome of a particular coin toss, with a sample space S = {H,T} describing two possible worlds. So the problem does not concern frequencies. Both halfers and Elga agree that the propensity of the coin to land heads is 1/2. So the measure M over S is

    M (H) = M(T) = 1/2

    We are given two causal implications with respect to S that characterise two random variables, namely

    Awaken_Mon (H) = True
    Awaken_Mon (T) = True

    and

    Awaken_Tue (H) = True
    Awaken_Tue (T) = False

    Both of which are "measured" by M to give

    P( Awaken_Mon = True) = 1
    P( Awaken_Tue = True ) = 1/2

    This fully describes the stated causal relationships of the Sleeping Beauty Problem, that go from the state of the coin to the states of two awakening events.
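    The two random variables and their pushforward probabilities can be written out directly (a sketch of the computation exactly as stated):

    ```python
    from fractions import Fraction

    # Measure over the sample space S = {H, T}.
    M = {"H": Fraction(1, 2), "T": Fraction(1, 2)}

    # The two random variables characterising the causal implications.
    awaken_mon = {"H": True, "T": True}
    awaken_tue = {"H": True, "T": False}

    def prob(variable, value):
        # "Measure" the variable by pushing M forward through it.
        return sum(M[w] for w in M if variable[w] == value)

    print(prob(awaken_mon, True))  # 1
    print(prob(awaken_tue, True))  # 1/2
    ```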

    How do you suppose that we

    (1) relate the random variables, whose outcomes refer to events in the B series of time, to the indexical "Today" of the A series, whose relationship to B series and to the physics of coin-tosses isn't specified by the problem?

    and

    (2) How should we apply induction to "invert" the causal direction from "Today" to the state of the coin, assuming we have a satisfactory answer to (1) ?

    Unless thirders can answer (1) and (2), their epistemic probabilities are meaningless.
  • Sleeping Beauty Problem
    Your suggestion that a thirder expects to gain from choosing amnesia would depend on her conflating the probability of making a correct prediction upon awakening with the frequency of the actual payout from the initial bet.Pierre-Normand

    They do indeed conflate them. Why would SB assign a credence of 1/3 to heads, if she knows

    1) That the physical probability of winning the lottery is 1/2

    2) That amnesia doesn't influence lottery outcomes.

    3) That had she not had amnesia, then she would either know the result of the lottery on waking up, or else refer to its probability as being 1/2?


    Since thirders make no principled distinction between causes and effects, they do indeed encourage avoidance-coping. For by their logic, one should raise one's credences for one's preferred outcomes by avoiding information.
  • Is consciousness present during deep sleep?
    So, why can't we feel anything under total anesthesia? Because we are unconscious.Alkis Piskas

    Not feeling anything is a definition of unconsciousness.

    But how can you verify that you feel nothing under anaesthesia?
  • Is consciousness present during deep sleep?
    Who is such a hard nosed empiricist that he can't learn from someone else? People tell me I snore. Despite having no conscious recollection of snoring, I believe them.wonderer1

    Other people can hear your snoring, but they cannot observe an absence of your experiences on your behalf, and neither can you. All they can observe is an absence of your experiences on their behalf in terms of their behavioural definitions of your experiences.
  • Is consciousness present during deep sleep?
    When I awoke this morning, I felt as if time had passed. This coheres with biological hypotheses that some level of attention was present, at least sporadically, throughout the night.

    However, the question isn't only a matter of psychobiology, but also a question of phenomenology, perspective and empirical logic that neuroscientists tend to overlook:

    A subject in an ideal sleep lacks consciousness from the perspective of external observers, according to behavioural definitions of "consciousness" that refer to inter-subjectively ascertainable criteria such as stimulus responses, memory recall and directed attention. And yet from the subjective phenomenological point of view, the first person never sleeps. After all, I cannot experience myself sleeping, and so I cannot have direct knowledge that I have ever been previously unconscious in the pure sense.

    If I awaken from a coma, I might infer that I was previously unconscious on the basis of my present state of amnesia, together with reports from external observers who monitored my behaviour. The question is, are these grounds sufficient for me to establish the proposition that I was previously unconscious in the phenomenological sense of "experiencing nothing"? Shouldn't a hard-nosed empiricist who demands verification criteria reject this commonly held conclusion as meaningless or false?