Comments

  • Science seems to create, not discover, reality.
    I really don't understand what you are saying here. You appear to be saying that you see no clear distinction between past and future, because you interpret everything "within the context of the present".Metaphysician Undercover

    Yes, roughly speaking.

    But isn't it the case that your reference to "the present" already implies a clear distinction between past and future? What could you possibly mean by "the present", other than an assumed separation between memories of past, and anticipations of the future? Therefore your reference to "the present" seems to already imply a clear distinction between past and future.Metaphysician Undercover

    I understand the tenses to be closely related to modal distinctions made in relation to the present, but I don't deny the modal distinctions, nor the practical psychological distinction between past and future, or what McTaggart crudely referred to as the A series (is psychological time really a series?). But like McTaggart, I don't think the information content of the "A series" has any obvious relationship to the B series, which is all that the public theory of physics refers to, or to the broader physical conception of time that Wittgenstein occasionally referred to as "information time", which I think of as a "use-meaning" generalisation of McTaggart's B series that also includes the practice of time keeping (see Hintikka for more discussion on Wittgenstein's evolving views on the subject).


    Furthermore, you refer to "present observations", but this concept is logically flawed. There can be no such thing as present observations because "to observe" is to take note of what happens, and this implies that an observation, being what has been noticed, is necessarily in the past. It is this idea of "present observations" which is actually self-contradicting.Metaphysician Undercover

    The word "present" is only used to stress the distinction between the A and B series and the fact that observations are always in the present tense, even when they are used to evaluate past-contingent propositions (which are understood to be past-contingent in the sense of the B series, but not necessarily in the sense of the A series).

    So yes, observations are not of the present but they are always in relation to the present tense. Furthermore, if the B series isn't reducible to facts that are obtainable in the present tense, then the existence and usefulness of the B series can be doubted or denied, and at the very least cannot be reconciled with the present-tensed practice of physics.
  • Science seems to create, not discover, reality.
    Put predictions aside for a moment. How would you deal with possibilities in the sense of "it is possible for me to do X, and possible for me to do Y", when X and Y are mutually exclusive? If I act for Y, then X is made to be impossible, and if I act for X, then Y is made to be impossible. However, at the time when I am deciding, both are possible.

    How can we model this type of future in relation to this type of past, when both X and Y change from being equally possible in the future, to being one necessary, and one impossible in the past? What happens at "the present" to change the ontological status of these events?
    Metaphysician Undercover

    If I speculate that the past might change, then aren't I contradicting the very definition of what I mean by "the past"?

    And if I speculate that the future is already decided, then aren't I contradicting the very definition of what I mean by "the future"?

    I don't conceive of a clear distinction between the tenses and the modalities. I interpret both empirically within the context of the present, even if I don't consider their meanings to be empirically exhausted by present observations, memories, intentions, actions and so on.

    It doesn’t seem an apt analogy to me. At issue is the nature of the object in question and what it is that transforms it from a possibility to an actuality.Wayfarer

    Does it even make sense to consider the modalities (or tenses) to be the subject-matter of physics? For aren't the modalities the very essence of what is meant by an 'explanation', inevitably invoked when explaining any explanandum in any subject?

    Unless physics is willing to collapse the explanans/explanandum distinction by appealing to circular reasoning (which for many would defeat the purpose of an explanation), then I cannot see how the metaphysical concepts of modalities can be treated as first-order physical propositions that warrant physical explanation.

    From an instrumentalist perspective, scientific theories are conditional propositions that do not say how things are in themselves, but rather predict or describe the empirical consequences of performing a particular action or observation in a particular context. So according to this perspective, possibilities are what is directly expressed by scientific theories, but not what is represented or referred to by such theories.
  • Science seems to create, not discover, reality.
    That is true, but the nature of the object who's existence is only possible is not. And that is the point at issue in this context, as the putative object, a component of the atom, is supposed to be amongst the building blocks of material existence.Wayfarer

    If a weather-forecaster states that tomorrow's weather is possibly heavy showers, I interpret his sentence to be an empirical report regarding his model of the weather, and not literally to be a reference to tomorrow's unobserved weather. (In general, I don't consider predictions to be future-referring in a literal sense, for the very reason that it leads to conflating modalities with theory-content and facts.)

    Modalities only arise in conversation when a theory is used to make predictions. But the content of theories never mentions or appeals to modalities, e.g. neither the Bloch sphere describing the state-space of a qubit, nor the Born rule describing a weighted set of alternative experimental outcomes, appeals to the existence of modalities. Rather the converse is true. E.g. a set of alternative outcomes stated in a theory might be given possible world semantics, but the semantics isn't the empirical content of the theory and so does not ground the theory, in my empiricist opinion.
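    To make the point concrete, here is a minimal Python sketch (my own toy illustration, not drawn from any source quoted here): the Born rule assigns weights to alternative outcomes of a qubit measurement, and nothing in the calculation mentions or requires modal vocabulary.

```python
import numpy as np

# A qubit state on the Bloch sphere, parameterised by angles (theta, phi):
#   |psi> = cos(theta/2)|0> + e^{i phi} sin(theta/2)|1>
theta, phi = np.pi / 3, np.pi / 4
psi = np.array([np.cos(theta / 2),
                np.exp(1j * phi) * np.sin(theta / 2)])

# The Born rule: the probability of each outcome in the computational
# basis is just the squared modulus of the amplitude.
probs = np.abs(psi) ** 2

assert np.isclose(probs.sum(), 1.0)
print(probs)  # [0.75, 0.25]: bare weights over outcomes, no modal terms
```

    The theory's content here is exhausted by the state vector and the weights it induces; any talk of "possible outcomes" is supplied by the semantics we attach afterwards.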
  • Science seems to create, not discover, reality.
    Possibility is an empirical notion. In the case of QM, possibilities either refer to directly observable interference patterns, or they refer to statistical summaries of repeated trials. It is also a good idea not to conflate the empirical meaning of possibility with the epistemic notion referring to possible world semantics, which refers to how people use and think about theories.

    IMO, reifying possibility to the status of multiple actual worlds is a mistake born out of equivocating the various uses of the term.
  • Why is the Hard Problem of Consciousness so hard?
    A description is not the thing described.Wayfarer

    True, but the distinction is easily lost in communication.

    To see a robot as a mind is not to infer that the robot has a mind. By contrast, to see that the robot has sensors relaying information to Machine Learning algorithms is not to see the robot as having sensors and ML algorithms.

    The word "other" in "other-minds" is where the confusion lies, for it insinuates indirect realism with respect to the mental qualities that we directly project onto others.
  • Why is the Hard Problem of Consciousness so hard?
    Oddly enough, I believe it's correct.Wayfarer

    But presumably human cognition, emotion, awareness, and behavior are equally describable in terms of adaptive algorithms, data, environmental feedback and pattern-matching.

    In which case, how can disagreements over the sentience of chatbots, robots, non-human animals, and even disagreements regarding the sentience of other human beings, be regarded as disagreements over matters-of-fact?
  • Why is the Hard Problem of Consciousness so hard?
    Isn't the concept of other-minds reducible to the concept of empathy? In which case, the sentence "a rock doesn't have consciousness" isn't a proposition about the rock. Instead, it has the same meaning as "I cannot relate to a rock", implying that if the rock ever began to act like a human, then I would change my mind about the rock, and that my new opinion about the rock would not be in contradiction with my old opinion or with other people's contrary opinions.

    (If the public disagrees as to whether a chatbot is conscious, are they really disagreeing over facts about the chatbot?)
  • What are your favorite thought experiments?
    There is more to an ideal of reasoning than the ability to apply logic in a valid way. There is also the pattern recognition applied to diverse empirical observations that allow for recognition of false premises. For example the "training set" which is hugely important to the results yielded by modern AI.wonderer1

    Yes, very much so. The successes of Machine Learning generalisation are entirely the consequence of ML models evolving over time so as to fit the facts being modeled, as opposed to the generalisation performance of ML being the consequence of a priori and constructive mathematical reasoning, as if purely mathematical reasoning could predict in advance the unknown facts being modeled.

    And yet many popular textbooks on ML written around the turn of the millennium presented the subject as if successful generalisation performance could be mathematically justified in advance on the basis of a priori philosophical principles such as Occam's Razor, non-informative prior selection, Maximum Entropy and so on. Notably those books only very briefly mentioned, if at all, Wolpert's No-Free-Lunch theorems that put paid to the idea of ML being a theory of induction.
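    The core of the No-Free-Lunch result can actually be demonstrated in a few lines. The following is my own toy sketch: average any fixed learner's off-training-set accuracy uniformly over all possible boolean target functions on 2-bit inputs, and the result is exactly chance level.

```python
from itertools import product

inputs = list(product([0, 1], repeat=2))   # the four possible 2-bit inputs
train, test = inputs[:2], inputs[2:]       # a fixed train/test split

def learner(train_data, x):
    # An arbitrary but fixed learner: predict the majority training label,
    # ties broken towards 0. Any other fixed learner yields the same average.
    labels = [y for _, y in train_data]
    return int(sum(labels) > len(labels) / 2)

accs = []
for target in product([0, 1], repeat=4):   # all 16 boolean target functions
    f = dict(zip(inputs, target))
    train_data = [(x, f[x]) for x in train]
    correct = sum(learner(train_data, x) == f[x] for x in test)
    accs.append(correct / len(test))

print(sum(accs) / len(accs))  # 0.5: no better than chance off the training set
```

    The average is 0.5 for *any* learner, because the off-training labels are independent of the training labels when targets are weighted uniformly; only an empirically-fitted prior over targets breaks the symmetry.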
  • What are your favorite thought experiments?
    Anselm's ontological argument is mine, in spite of its theological pretenses, for it is an example of a logically valid constructive argument that is 'necessarily true' but nevertheless draws a false conclusion about the world outside of logic, in spite of the argument insisting that it is referring to the outside world!

    As I see it, the argument is but one of infinitely many examples of logically valid but unsound arguments, which present negative evidence with regards to the epistemological utility of constructive logic, and thus in turn negative evidence regarding the epistemological utility of a priori philosophical arguments, such as transcendental arguments. In other words, even ideal reasoners can be expected to draw rationally "correct" yet empirically false conclusions about the world. In which case, what is the point of AI and cognitive science?
  • Reasons for believing in the permanence of the soul?
    Does Permanence/Impermanence of the soul necessarily refer to a fact about souls, or might it refer to the grammar of the word "soul"? (Theology as grammar)

    For example, consider a presentist who considers the concept of change to only refer to objects but not to subjects (since he believes the present to be the only moment of time). Then he might assent to the sentence that "the soul is permanent", as a vulgar way of expressing his view that the word "impermanent" isn't applicable to subjects.
  • Why is the Hard Problem of Consciousness so hard?
    But what if the object of translation was not optical redness but brain states? It seems then that the context problem doesn't apply because Mary's perceptions are always present alongside her brainstates and correlate so much that many suspect that they are identical.Apustimelogist

    I'm not sure what neuropsychology means by 'brainstates' exactly - but then isn't that the point - that the types and tokens referred to by neuropsychology are sufficiently vague and flexible so as to accommodate the ad-hoc and informal judgements of its practitioners on a case-by-case basis, whilst conveying enough of the practically essential information?

    I'm also reminded of software-engineering, where the concepts of types, tokens and type/token identity are normative notions that only concern and describe the programming language being used, rather than being descriptive of the implemented application (that could be implemented in any number of languages that use different and incompatible type-systems).

    In my view, Physicalism takes types, tokens and identity relations too seriously, due to mistaking these normative linguistic concepts for propositions.
  • Why is the Hard Problem of Consciousness so hard?
    Physics could dissolve any particular "hard problem" of consciousness by simply expanding the rules of its language to accommodate any perception, in a bespoke, albeit practically unworkable, fashion.

    For example, take the colour scientist Mary from the knowledge argument, who "learns" about redness for the first time when leaving her black and white room. Suppose that upon leaving her black-and-white room and seeing red for the first time, the language of physics is augmented with a new term that specifically denotes Mary's red perceptual judgements. Call this new term maryred. There is one simple rule for this new term: whenever Mary perceives an object to be "red" then by definition the object is said to be maryred. So if another scientist is performing an optical experiment, say on a distant planet, and wants to know whether the result is maryred or not, then according to the definition of maryredness, there is nothing he can do other than to ask Mary after she has inspected the result for herself.

    Mary cannot explain the relation between optical redness and maryredness, and the augmented physical language doesn't specify theoretical rules for inter-translating the two, not even when additional context is provided. But why should this absence of translation rules be considered a problem for physics? Isn't it in fact a blessing that we might call "The Hard Feature of Physics"?

    For suppose that maryredness was theoretically correlated to optical redness (plus context). Then doesn't this imply that Mary needs to be present at every optical experiment performed anywhere in the world, including the ordinary optical experiments that aren't measuring maryredness? For how can it be argued that maryredness is theoretically reducible to optical redness + context, but not vice-versa? Theoretical translation must surely work in both directions. So wouldn't the meaning of optical redness become contingent upon the meaning of maryredness such that Mary's perceptual judgements became part of the theoretical foundation of optics? Clearly this isn't desirable, because we want physics to be a universally applicable language with a semantics that is independent of the perceptual judgements of particular observers. So it makes good sense for physics to decree optical redness and maryredness to be incommensurable by fiat.

    Hence in my opinion, those who believe in a "Hard Problem of Consciousness" misunderstand the purpose of science, and that this hard problem is better understood as being a "Hard Feature of applicable Physics"
  • The Mind-Created World
    Recall that Euclid's postulates weren't given in relation to a system of numbers; he took lines and points to be primitive concepts. Relative to his informal axiomatisation, the length of a hypotenuse is "real" in the sense that it is a constructible number, meaning that it can be drawn using the practical method of 'straightedge and compass', which is algebraically expressible in terms of a finite number of mathematical field operations and square-root extractions.
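    To illustrate what "algebraically expressible" amounts to, here is a small Python sketch of my own: the hypotenuse of a unit square lives exactly in the finite field extension Q(√2), represented as pairs a + b√2 with rational coefficients, so its "real length" needs no completed continuum.

```python
from fractions import Fraction

# Elements of Q(sqrt(2)), stored exactly as a + b*sqrt(2) with rational a, b.
# Straightedge-and-compass constructions only ever require field operations
# plus square roots, so constructible lengths live in such finite extensions.
class QSqrt2:
    def __init__(self, a, b):
        self.a, self.b = Fraction(a), Fraction(b)

    def __mul__(self, other):
        # (a + b sqrt2)(c + d sqrt2) = (ac + 2bd) + (ad + bc) sqrt2
        return QSqrt2(self.a * other.a + 2 * self.b * other.b,
                      self.a * other.b + self.b * other.a)

    def __eq__(self, other):
        return (self.a, self.b) == (other.a, other.b)

hypotenuse = QSqrt2(0, 1)                        # the exact length sqrt(2)
assert hypotenuse * hypotenuse == QSqrt2(2, 0)   # Pythagoras: 1^2 + 1^2 = 2
```

    The arithmetic is exact and finite throughout; irrationality only becomes problematic once one demands a single fixed decimal-style basis for all points at once.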

    When it is disputed that a hypotenuse has a "real length", it is when geometric postulates are used to interpret Euclidean space in relation to a fixed vector-space basis. The irrational points of a Euclidean space aren't extensionally interpretable unless the basis of the underlying vector space is rotated so as to transform those irrational points to rational values, which also causes previously rational-valued points to become irrational. So the problem of incommensurability is really about the fact that it isn't possible to represent all points finitely at the same time, which implies that Euclidean space cannot serve as a constructive logical foundation for geometry.

    The obvious alternative is to follow Alfred North Whitehead in 1919-1920, and abandon classical Euclidean topology for a 'point-free topology' that refers only to extensionally interpretable "blobs", namely open sets that have a definite non-zero volume, whose intersections approximate pointedness. Then it might be possible to extensionally interpret all such "blobs" in relation to a fixed basis of topological description in a more constructive fashion, meaning that extensional ambiguity is handled directly on the logical level of syntax, as opposed to on the semantic level of theory interpretation.
  • Free Will
    I think there are definitely problems with the main ways of defining probability, particularly frequentism, but I don't think circularity is one of them. https://plato.stanford.edu/entries/probability-interpret/ .Count Timothy von Icarus

    Probability Theory actually supports what I'm saying.

    First recall that Classical Probability Theory is said to speak of 'events' of Probability 1 that occur almost surely, and conversely of 'events' of Probability 0 that occur almost never. So although classical probability is sound in the sense of comprising an identifiable class of entities belonging to the universe of, say, ZFC Set Theory, its semantics is in contradiction with naive intuitions about chance.

    E.g. probability theory is interpreted as saying that a dart must land somewhere on an infinitely divisible dart-board, yet at a location that has probability 0. On the one hand, we want Pr(1) to mean surely, and Pr(0) to mean never, but this 'exacting' demand conflicts with our other demand that it is possible to choose any member of an infinite set. What probability theory is actually expressing is that our intuitions about chance, determinism and infinity are vague and contradictory and cannot be reconciled, let alone be formally represented in terms of a finite axiomatic definition.

    An obvious way out of the above impasse is to interpret almost surely and almost never as referring to limits of a sequence of random events, such as the dart's sequence of positions over time, where these limits aren't considered to represent probability-apt events in themselves. In which case, we restrict our interpretation of Probability Theory as only assigning meaningful probabilities to either incomplete trajectories of darts that haven't yet landed and whose eventual position is uncertain, or to landed darts whose position is vague and known only to within finite precision, among a set of positions whose probability is strictly greater than zero. In my view, this way out amounts to a philosophical rejection of an absolute distinction between determinism and chance.
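    The dart-board point is easy to see in simulation. The following is a toy Monte Carlo sketch of my own (the seed and interval width are arbitrary choices): an exact pre-specified landing point is almost never hit, while any finite-precision report of the landing position is an event of strictly positive probability.

```python
import random

random.seed(0)
N = 100_000
samples = [random.random() for _ in range(N)]   # the dart's landing positions

# The dart "almost never" lands exactly on a pre-specified point...
exact_hits = sum(x == 0.5 for x in samples)

# ...but a finite-precision report "x = 0.5 to within 0.01" is an event
# of strictly positive probability, matched by the observed frequency.
coarse_hits = sum(abs(x - 0.5) < 0.01 for x in samples)

print(exact_hits)        # almost surely 0
print(coarse_hits / N)   # roughly 0.02, the width of the interval
```

    Only the coarse-grained, finite-precision events behave the way naive intuitions about chance expect; the measure-zero "exact" events do not.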


    That's an interesting idea. Any tips on a place to read more?Count Timothy von Icarus

    Sadly I can't think of specific references off the top of my head, but in my view Category Theory is the right meta-language for relating physics, logic and philosophy, so Samuel Abramsky and Jean-Yves Girard would be my generally recommended authors, plus lots of nLab and SEP, of course.
  • Free Will
    It basically comes down to this; "If something is not determined by anything, in what way is it not random?"Count Timothy von Icarus

    I think that alternative interpretations of 'chance' are the key to non-classical compatibilism, where by "non-classical" I am referring to considerations from modern logic.

    Consider the fact that the definition of chance appears to be circular - ordinarily, chance is taken to mean "to not be determined", where to be "determined" is taken to mean "to not be subject to chance".

    One way out of this circularity is to consider determinism and chance to be relative to perspective, by taking inspiration from game-theory in which "chance nodes" are understood to refer to states of a game in which it isn't the player's turn to move, but someone else's.
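    The game-theoretic idea can be sketched in a few lines of Python (my own toy construction, not a standard formalism): the same game tree is evaluated differently depending on whose perspective is adopted, because a node is a "chance node" precisely when it is somebody else's turn to move.

```python
# A toy game tree. Each internal node records whose turn it is; leaves hold
# payoffs. From A's perspective, nodes where it is B's turn are "chance
# nodes" to be averaged over; from B's perspective the labelling of what
# counts as "chance" is reversed.
tree = ("B", [("A", [1.0, 0.0]), ("A", [0.0, 4.0])])

def value(node, perspective):
    if isinstance(node, float):
        return node
    mover, children = node
    vals = [value(c, perspective) for c in children]
    if mover == perspective:
        return max(vals)              # my own move: a free choice
    return sum(vals) / len(vals)      # someone else's move: chance

print(value(tree, "A"))   # 2.5: A averages over B's root move
print(value(tree, "B"))   # 2.0: B chooses at the root, averages over A
```

    Nothing in the tree itself is intrinsically "chancy"; the chance/choice labelling is entirely relative to the evaluating player, which is the relativity of determinism and chance being suggested above.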

    Non-classical compatibilism that is based on this logic, can take metaphysical "free choice" as an axiom that is true for every player of the game, whose actions impose constraints on both the possible futures and possible 'pasts' of every other player. This position can be regarded as "compatibilist" to the extent that it can successfully reduce the empirical observations of modern theoretical physics in terms of a set of laws, whose 'determinism' is considered to be relative to the frame of reference used.

    Transactional QM seems to be the closest theory in this regard.
  • Free Will
    A person can recognize that we are physically determined systems, and recognize that we are systems that develop probabilistic anticipations of future events. Furthermore, it's rather pragmatically valuable for machines like us to discuss such anticipations. (To get a job, to get married, to get to the moon, to end global warming, etc.)

    It seems to me there is a pragmatic value, for the sort of machines we are, to being able to communicate in simplistic terms of free will, and as we are able, modify what we mean by "free will" to be more accurate.
    wonderer1

    Yes, but if determinism is accepted by the compatibilist, then probabilities can only be given an epistemic interpretation, while teleological concepts such as "anticipating the future" can only be objectively interpreted as referring to present and past causes. In which case, your pragmatic compatibilist solution must surely collapse on further inspection into standard metaphysical determinism without "free will".

    Another possibility which comes to mind, is to deny that there is an absolute metaphysical distinction between determinism and free-will, by arguing that a definition of either is meaningless, by virtue of their definitions being circular. This is analogous to the arguments that Quine used to reject the analytic-synthetic distinction. However, since this is about denying the intelligibility of the determinism/free-will distinction, I can't see how this stance could be described as a "compatibilist" position. Furthermore, it entails re-conceiving the problem of free will as being at least partly grammatical in nature, as opposed to referring to a purely physical conjecture.
  • Free Will
    You said it like the compatibilist model of the world has retro causality, but I think instead it's more accurate to say that your model of compatibilism has retro causality.flannel jesus

    Retro-causality is a generally vague and controversial concept, to the point that it seems to rule very little in or out (recalling the fact that QM, which most physicists consider to be forwards-directed, has an innocuous retro-causal interpretation). Causal conventionalists like Hume for instance, even rule out retro-causality as a matter of tautology, which is why I didn't want to appeal to retro-causality as a hypothesis (which some might argue is formally meaningless), but to philosophical and empirical intuitions, naive if you like, that align with the idea.

    It might have been better if I had never used the term. What is of underlying importance to compatibilism in my view, isn't the existence of retro-causation (whatever it is supposed to mean), but the treatment of material implication as being symmetric, i.e. of the form A <--> B, which can be interpreted in a number of ways, including Bertrand Russell's directionless "no causality" view, super-determinism and circular causality. In these cases, it is accepted that there exists synchronisation between a so-called "cause" and a so-called "effect", but where the control between "cause" and "effect" is either considered to be bidirectional, directional but a matter of perspective, or directionless in both directions.

    I don't know the background motivation of the OP, but the problem that was presented is very reminiscent of the thought experiments that physicists use when selecting among interpretations of QM, which frequently give rise to debates over free-will in magazines such as Scientific American. In fact the OP's thought experiment is more or less identical to premises called "quantum conspiracies", namely the premise that nature has already decided on the properties that physicists will measure, such that physics experiments cannot reveal anything about nature's properties.
  • Free Will
    Why? Says who?flannel jesus

    Yours truly. Tell me how I've gone wrong.
  • Free Will
    I still have no clue why you think compatibilism and retro causality have anything to do with each otherflannel jesus

    Because according to the classical understanding of causality, the past is both fixed and exactly determines the future, which prevents the possibility of free choice for any agent who comes after the initial cause.

    Compatibilism doesn't make sense as a concept unless the past is in some way considered to be ontologically dependent upon the future. Being committed to the appearance of retrocausation isn't to be committed to retro-causation, and super-determinism might even be considered as appearing retro-causal.
  • Free Will
    but what does retro causality have to do with any of that? I know a lot of compatibilists, and they don't have a model of the world involving retro causality.flannel jesus

    Earlier I was mentioning models that might appear retro-causal, in the sense that the model considers facts about the past to be ontologically dependent on present and future observations, i.e. history is considered not to exist unless 'enabled' by the observations of 'future' observers, which isn't to imply that observers get to control the content of history.

    Roguelike video games come to mind, in which a computer game generates an on-the-fly history of a world around the player, in direct response to the player's actions. The player's possible actions, which he himself controls, are by definition considered to be "future directed" causal events relative to the player, e.g. opening a door, digging a hole, killing a monster etc. The information about the world that those actions reveal, but which those actions aren't said to "cause", is information that appears to be retrocausal if it is considered to be nature's on-the-fly response to the player's actions.
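    The roguelike picture is easy to sketch. Here is a toy Python example of my own (the seed and tile rule are arbitrary): the world's "history" is a pure function generated only on observation, so nothing about a tile exists until the player looks, yet once revealed it is stable and beyond the player's control.

```python
import hashlib

SEED = "world-42"   # a hypothetical world seed, chosen arbitrarily

def tile(x, y):
    # The tile's content is generated on demand, the first time the player
    # looks: a pure function of coordinates and seed, so the revealed
    # "history" is consistent even though it exists only once observed.
    h = hashlib.sha256(f"{SEED}:{x}:{y}".encode()).digest()
    return "wall" if h[0] % 4 == 0 else "floor"

# Nothing about location (10, 3) exists until the player walks there...
first_look = tile(10, 3)
# ...but once revealed, re-observation always agrees: the player's actions
# enable the history without controlling its content.
assert all(tile(10, 3) == first_look for _ in range(5))
print(first_look)
```

    The player's action of looking "enables" the fact without causing it, which is the sense of apparent retro-causality intended above.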
  • Free Will
    What does "compatibilist" mean in this sentence? It doesn't look like it means the usual free-will/determinism kind of compatibilism, but I'm stumped at what else it could mean.flannel jesus

    I was referring to the usual kind of compatibilism. The problem of compatibilism, at least as I understand it, is how to reconcile two seemingly contradictory premises:

    1) The principle of causal determinism - by which the future, when conditioned upon a hypothetical total knowledge of the past, is believed to consist of precisely one possible world.

    2) The metaphysical existence of choice and possibility for agents who interact with the world.

    In my opinion, many self-described compatibilists are in fact deniers of either 1 or 2, and so don't qualify as being "compatibilist". For example, they might hold to 1) but interpret possibilities to be epistemic rather than ontic. Or they might hold onto 2 whilst apparently forgetting their alleged commitment to 1, or they might simply fail to provide any reconciliation of their beliefs in 1 and 2.

    To actually commit to both 1 and 2 in a way that reconciles them requires a radical re-conception of time and causation along the lines of presentism, such that the logical implications of causal determinism can be either fully, or at least partly, recoverable from the interactive choice principles of 2.
  • Free Will
    Models of causality that are "compatibilist" are those which appear to be retro-causal due to rejecting the antecedent/consequent distinction. These models aren't built upon directed conditionals of the form A --> B, but upon bi-conditionals of the form A <--> B. In other words, these are models whose logic is called "circular reasoning".

    To give a simplified version of OP's thought experiment, let

    A := Alice's secret prediction on Monday, concerning what Bob will do on Tuesday

    B := Bob's actions on Tuesday, without Bob knowing about Alice's prediction, which he later learns about on Wednesday.

    Suppose that Bob believes from past experience that

    1) Alice's secret predictions about him are always true.

    2) Her predictions cannot be explained by a hidden confounding variable that influences both her prediction and what he does.

    As a result, Bob accepts a conditional of the form A --> B. If Bob is to be a compatibilist, then he must also argue for a "retrocausal" relation of the form B --> A. This means that Bob must assume that from his perspective, Alice's "prediction on Monday" actually occurs on 'Bob's Wednesday' when he learns about her prediction, which is after he dug the path. Thus B --> A refers to Bob's actions "causing" Alice's "earlier" prediction. Since Bob only observes Alice's prediction after the fact of his actions, this compatibilist interpretation is perfectly consistent.
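    The logical point can be checked mechanically. The following short Python sketch (my own, purely illustrative) confirms that accepting both A --> B and B --> A is exactly accepting the symmetric biconditional A <--> B: a synchronisation constraint with no built-in direction of control.

```python
from itertools import product

# Bob's model: he accepts A -> B (Alice's prediction entails his action)
# and, compatibilistically, also B -> A. Checking every valuation shows
# the pair is equivalent to the biconditional A <-> B.
for A, B in product([False, True], repeat=2):
    both_conditionals = ((not A) or B) and ((not B) or A)
    biconditional = (A == B)
    assert both_conditionals == biconditional

print("A -> B together with B -> A is equivalent to A <-> B")
```

    So nothing in Bob's model privileges either the prediction or the action as the "cause"; the direction of control is left to perspective, as argued above.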
  • Free Will
    It's a good example, and it is related to the idea that the direction of causality is relative.

    The classical conception of causality, which assumes that the causal order is independent of perspective, does not possess the notion of synchronized events, in which the existence of an event necessitates the existence or non-existence of another event, but without either event being alleged to influence the other.

    If the notion of causality is adjusted so as to include synchronised events, we automatically get
    1) The notion of non-local quantum entanglement.
    2) A reconciliation of Bertrand Russell's view that causality doesn't exist, with the interventionist view of causality as used in the sciences.

    So in your previous example, the man's path can be viewed as being synchronised with the independent observation that the rest of the field is white, even though neither event is the cause of the other.
  • Quantum Physics, Qualia and the Philosophy of Wittgenstein: How Do Ideas Compare or Contrast?
    I think you misrepresent Hawking's intentions sime. In "The Grand Design" he presents himself as having an adequate understanding of philosophical problems, and proposes model-dependent realism as a superior alternative to other philosophical theories. So he is not suggesting that philosophical problems are beside the point, he is proposing model-dependent realism as a solution to those philosophical problems.Metaphysician Undercover

    I think Hawking misrepresents his own intentions, judging by the wildly different interpretations I have seen of MDR, for I've witnessed both realists and idealists defending MDR.

    At the very least, its definition is too vague to distinguish MDR from other forms of instrumentalism, or to allow critical comparison against other epistemological stances such as positivism and Bayesian epistemology. Notably, MDR isn't mentioned even once in the SEP article discussing models in science.

    MDR doesn't explain the relationship of theoretical models to their observational semantics and truth-conditions, neither does it give any guidance as to how and when to select a model among "equally good" alternatives, let alone for deciding what is a good model, and neither does it serve as an explanation for theory-change.

    So what exactly does MDR solve?
  • Quantum Physics, Qualia and the Philosophy of Wittgenstein: How Do Ideas Compare or Contrast?
    Of course. "Mind-dependent reality" doesn't make sense except to idealists / antirealists (who tend to obey poison warning labels, not carry lightning rods in thunderstorms, purchase auto insurance and eat enough in order not to starve).180 Proof

    The most famous Western idealist, George Berkeley, wasn't a reality skeptic, precisely because his subjective idealism denied that the hidden variables of scientific theories have representational status. He was attacking representationalist materialism, and not the "model dependent realism" that is the narrow concern of a scientific theory under test.

    I think that when talking about idealism, especially in the context of the philosophy of science, it should be interpreted more technically as referring to a strong form of instrumentalism in which both

    1) The hidden variables of scientific theories are considered to be ontologically reducible to observation terms, even if the distinction between hidden variables and observation terms is considered to be epistemically necessary in practice.

    2) The meaning and existence of observation terms is considered to be ontologically dependent on perspective.

    Ironically, Samuel Johnson illustrated Berkeley's views when kicking a rock - for he demonstrated that his theoretical understanding of the rock's material qualities was ultimately in terms of his experiential perspective. Had he believed that the rock's theoretical properties were irreducible to his experiences, then he would not have believed that kicking a rock could prove its material properties.

    Metaphysical "propositions" are indeed propositions - but they are higher order propositions about theories, as opposed to being first-order propositions that are expressed by those theories.

    Hawking indicated his lack of interest in philosophy, which concerns these higher-order propositions, in favour of physics, where I interpret his "model dependent realism" to refer to his view that the remit of physics is strictly the analysis and testing of first-order physics propositions, as illustrated by his "world picture" analogy.

    So it remains very unclear to me what his philosophical views are, for his "model dependent realism" clearly wasn't meant to be a philosophical proposition, but only to express that as far as physics is concerned, philosophical questions are beside the point.
  • Quantum Physics, Qualia and the Philosophy of Wittgenstein: How Do Ideas Compare or Contrast?
    Better yet (since, for instance, planck-scale entities are not "empirical"): Hawking-Mlodinow's model-dependent realism.180 Proof

    Model Dependent Realism is a dubious metaphysical proposition in itself. For how does MDR avoid collapsing into either George Berkeley's instrumentalism or naive realism?

    When it comes to interpreting any theory T containing a set of observation variables O and a set of 'hidden' variables H, there are only two possibilities:

    1) T can eventually be rewritten as another theory T' that only contains O terms, such that the original variables H in T aren't considered to be hidden after all.

    or

    2) T can never be rewritten as a theory containing only O terms.

    MDR comes across as decrying the metaphysical speculation that is inevitable in the face of theoretical uncertainty, and disguising this attitude as epistemological substance.

    BTW, Planck-scale entities are empirical - for why else would they be proposed? Rather, their empirical meaning remains open and relative to context, as is also the case for "molecules", "optical redness", "cats", "cars" and my dear old mother.

    As the logical positivists discovered, there isn't a clear meta-theoretical distinction between O terms and H terms. For example, electrons are observation terms in the context of using an electron microscope, whereas in macroelectronics they are hidden terms in the sense of having only indirect phenomenal relevance.

    So did Hawking believe in mind-independent reality or not?
  • Quantum Physics, Qualia and the Philosophy of Wittgenstein: How Do Ideas Compare or Contrast?
    I suspect that Wittgenstein wasn't interested in QM because he considered it to have no metaphysical value. Remarkably, it has taken another fifty years of mathematics and physics research since Wittgenstein's death to "ordinarize" QM and bring it back down to earth, in the sense of showing how many of QM's so-called "non-classical" properties arise through our ordinary concepts of epistemic uncertainty and linearity with respect to partially observed states of affairs.

    As far as I am aware, the only truly "non-classical" property of QM is non-locality - but there are even shades of non-locality in problems of synchronised computation that aren't motivated by physical phenomena. For example, during the execution of an ordinary computer program, the OS recycles the memory addresses of old values for newly created values. From the perspective of the program, which has no direct control over memory allocation, the new values and old values might look as if they are non-locally entangled.
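
    The address-recycling scenario above can be sketched in a few lines. This is my own illustrative toy (CPython-specific, since CPython's id() happens to expose an object's memory address); whether the address is actually reused is up to the hidden allocator, which is the point.

```python
# Hedged sketch (CPython-specific): id() exposes an object's memory
# address, and the allocator recycles freed addresses. A program with
# no control over allocation can thereby observe a correlation between
# two causally unrelated values that it never asked for.
class Value:
    def __init__(self, payload):
        self.payload = payload

old = Value("old")
old_address = id(old)   # address of the old value
del old                 # free it; the allocator may recycle the address

new = Value("new")      # a new, causally unrelated value
# From inside the program, any reuse looks like a "spooky" correlation
# imposed by the hidden allocator rather than by the program's own logic.
recycled = (id(new) == old_address)
print("address recycled:", recycled)
```

    Nothing in the program's own text determines the value of `recycled`; it is fixed by the external allocator, much as entanglement correlations aren't fixed by local causes.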
  • Rhees on understanding others and Wittgenstein’s "strange" people
    The above can be summarized by saying that "other minds" is an oxymoron.

    Epistemic uncertainty is meaningful when identifying a behavioural disposition - for example, one can doubt whether or not another person's behaviour is receptive, competent or genuine - but to empathise with another person isn't to feel or think on their behalf, even if the practice of empathy lends insight with respect to their behavioural disposition.

    Nevertheless, one's beliefs concerning a person's behavioural disposition affect the course and extent of one's empathy towards that person.
  • Dualism and Interactionism
    The debate as to whether QM randomness is aleatoric or epistemic is presumably rendered moot by modern understanding of Quantum Contextuality; for in the case of entangled systems it isn't possible in principle to assign a complete probability distribution over the joint values of every quantum observable, as per the Kochen-Specker theorem.
  • Teleology and Instrumentality
    I feel like you are making a category error with respect to 'final cause'. What do you mean by that term and why does your 'initial cause' make that meaning superfluous?Leontiskos

    To use the SEP's description of Aristotle's causal ontology:

    The material cause or that which is given in reply to the question “What is it made out of?” What is singled out in the answer need not be material objects such as bricks, stones, or planks. By Aristotle’s lights, A and B are the material cause of the syllable BA.

    The formal cause or that which is given in reply to the question “What is it?”. What is singled out in the answer is the essence or the what-it-is-to-be something.

    The efficient cause or that which is given in reply to the question: “Where does change (or motion) come from?”. What is singled out in the answer is the whence of change (or motion).

    The final cause is that which is given in reply to the question: “What is its good?”. What is singled out in the answer is that for the sake of which something is done or takes place.

    Let us define a final cause to be reducible to the first three causes if there exists a causal model, consisting only of the first three causes applied to one another in an adaptive feedback loop, that reproduces the effects attributed to the final cause.

    I am asserting that all final causes are reducible in the above sense. This is equivalent to asserting the existence of a computer simulation of all phenomena attributed to final causes.
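
    The reducibility claim above can be given a toy demonstration. The following is my own minimal sketch, not anything from Aristotle or the thread: a hill-climbing loop in which every step is an efficient cause (blind mutation) plus selective feedback, yet the system behaves as if "drawn toward" a telos.

```python
import random

# Hedged toy sketch of a final cause reduced to an adaptive feedback
# loop: blind variation (efficient cause) plus selective retention
# (feedback) produce apparently goal-directed motion toward TARGET,
# with no "final cause" anywhere in the mechanism.
random.seed(0)

TARGET = 100  # the apparent "telos"; mechanically, just a fitness criterion

def fitness(x):
    return -abs(TARGET - x)   # closer to the target is fitter

x = 0
for _ in range(1000):
    mutant = x + random.choice([-1, 1])   # blind variation
    if fitness(mutant) >= fitness(x):     # selective retention (feedback)
        x = mutant

print("end state:", x)   # settles at TARGET and stays there
```

    The loop is a (crude) computer simulation of the phenomena attributed to a final cause, which is exactly what the reducibility claim asserts must exist.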
  • Teleology and Instrumentality
    I feel like you are making a category error with respect to 'final cause'. What do you mean by that term and why does your 'initial cause' make that meaning superfluous? Surely Aristotle was not "speaking informally about evolutionary feedback" when he used the term, given that he was not aware of Darwinian evolution.Leontiskos

    Had Aristotle known about evolution, then he could have explained the regularity of nature without appealing to final causes and only to adaptive feedback in the cycle of life. His arguments don't amount to a proof of the necessity of final causes, but to the insufficiency of causal models that don't take into account adaptive feedback.
  • Teleology and Instrumentality
    I'm not sure I understand your question. Let's take an example: the final cause of an acorn is an oak tree. Presumably you are positing that there is some "initial cause" which makes this final cause superfluous?Leontiskos

    Yes. To say that an oak tree is a "final cause" of an acorn is to speak informally about the evolutionary feedback that determined the chemistry of dendrology, which when applied to a given acorn refers only to a directed chain of causality whose conclusions are fully determined by initial conditions.
  • Freedom and Process
    Yes, although I would reject the concept of "self determination" as meaningless.

    IMO, "determination" is only meaningful when used in the sense of the verb "to determine" as a practical relational concept referring to the control of one process by another process when they are treated as autonomous entities, as embodied by the concept of multiplayer games.

    By that norm, J. R. R. Tolkien can be said to have "determined" the universe of Middle Earth, a fact that can only be understood by those of us who exist outside of the universe of Middle Earth. But from the perspective of the characters within Middle Earth, mythologized as a closed system, their lives can neither be said to be determined nor undetermined.
  • Artificial intelligence
    The concept called "Intelligence" - whether natural or otherwise - is largely an anthropocentric misconception of Cybernetics that is born out of Cartesian folk-psychology. I think the word "Intelligence" should be eliminated from the sciences.
  • Teleology and Instrumentality
    Can someone give me an example of a "final cause" that cannot be eliminated for initial causes?

    The examples that Aristotle provides, e.g. the functions of the human body, aren't such examples, because they can be eliminated for the initial causes of Darwinian evolution.
  • Absolute nothingness is only impossible from the perspective of something
    In type theory as used in computer programming languages, "nothing" is interpreted as a type that has no values but that can be eliminated for values of every type, which is in semantic alignment with your name "nothing implies everything". Practically speaking, this can be interpreted as referring to a function that allocates memory for the values of any input type.
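
    A minimal sketch of that eliminator (my own illustration, using Python type hints; in Haskell it would be the standard absurd : Void -> a):

```python
# Hedged sketch: the empty type ("nothing") and its eliminator.
# `Never` (Python 3.11+; `NoReturn` on older versions) has no values,
# so a function consuming it may claim to return a value of ANY type:
# "nothing implies everything", i.e. ex falso quodlibet.
from typing import TypeVar
try:
    from typing import Never           # Python 3.11+
except ImportError:
    from typing import NoReturn as Never

T = TypeVar("T")

def absurd(impossible: "Never") -> T:
    # Unreachable in a well-typed program: there is no value to pass in,
    # so the promise to return a T can never be tested, let alone broken.
    raise AssertionError("absurd was somehow given a value")
```

    The body never executes in well-typed code, which is why it may honestly advertise any return type whatsoever.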



    Likewise, for 'destroying' elements of a type, i.e. to free their memory, we have



    "Deallocate" can therefore be regarded as the 'negation' of "Deallocate" and vice-versa.

    In particular, can be taken to be , which apparently denotes useless instances of the above functions that do nothing (since we officially have no values for to read or write with). This is because the meaning of in our programming language is to denote the mysterious origin and destination of the values in our programs, i.e. the external compiler/OS that our programs depend on for resources, but whose resources cannot be talked about or reasoned with within our programming language.

    Yet suppose the following isomorphism is taken to be true and can be manipulated within our programming language.



    This equivalence is regarded as true in models of classical linear logic. This implies that it can be iteratively expanded as a potentially infinite list of ... values of some sort? No - for that possibility was already forbidden - rather, of memory addresses for storing values! In which case, if we allowed our programming language to manipulate memory addresses directly (e.g. as in C/C++), then we can interpret it within our programming language, and consequently give meaning to, and take control of, allocate/deallocate in the intuitive and practical way, for example by defining







    And paired with a respective definition for 'deallocate' that destroys these values by freeing their corresponding memory addresses and then returning them.

    So to summarise the above examples, "Nothing" initially referred to the mysterious effect of a destroyed value of a type and to the mysterious cause of an initialized value of a type within a programming language that could only talk about values. But by expanding the semantics of our language, we eliminated dead talk of "Nothingness" for talk about the interactive duality of "type values" and "memory addresses".

    I think the lesson is, "nothingness" denotes epistemic ignorance and/or semantic scoping and can be eliminated from empirical and rational discourse.
  • A very basic take on Godel's Incompleteness Theorem
    Listed as 402 pages. Godel's paper is 34 pages. Interestingly, I find only one place where (in my translation) Godel uses "true" or variants: section 3, "The following proposition is true: Theorem VII: Every recursive relation is arithmetical." In the rest it's "provable" or "decidable," and variants.tim wood

    Yes, that would be because Godel wanted to reinforce the point that his theorems have entirely constructive proofs that don't appeal to the law of excluded middle or to meta-mathematical assumptions. Unfortunately, that point was lost on Douglas Hofstadter, who only caused misunderstanding of the theorems by linking them to the meta-mathematical concept of self-reference, which isn't of any proof-theoretic relevance to them.

    Peter Smith's book is much longer because it also covers related topics, developed after Godel, that are of relevance to his now ancient theorems, even if the book is badly outdated in certain respects and unhelpfully assumes a background of classical logic, presumably for historical reasons.

    The usual locution I find is that G is undecidable, but because G says it's undecidable, it's true; this a so-called metamathematical proof being outside the system in which G is created.tim wood

    Well, syntactical incompleteness does at least imply that any system of mathematical logic is an unfinishable open system, such that any sufficiently expressive axiomatic system can be consistently extended in an infinite number of mutually inconsistent ways, leading to an infinite number of conflicting rival interpretations with respect to any theory of arithmetic. A problem with your meta-mathematical heuristic however, aside from the fact that it isn't relevant to Godel's constructive proof of the theorems, is that it assumes the existence of mathematical truth prior to system construction - which might or might not be epistemically appropriate in a given situation.

    Essentially, if one accepts Godel's completeness conjecture, namely the conjecture that first-order logic is sound and complete with respect to the theorems that are tautologies, i.e. the theorems that are satisfied by every model of the logic, then either a theorem is syntactically provable in such a logic or there must exist a model in which the theorem is false. So the heuristic you mention is a potentially useful heuristic for automated theorem proving - if the completeness conjecture is assumed, which is generally appropriate for classical logic, then a theorem can be disproved by merely showing that its negation is satisfiable in a model of the logic, meaning that one can "observe" the theorem to be false, without having to laboriously work "within the system" to produce a syntactical proof of the negated theorem.
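
    The countermodel heuristic is easy to demonstrate for propositional logic. The following is my own sketch (not from the post): to "disprove" a candidate theorem, brute-force search for a valuation that falsifies it, rather than hunting for a syntactic proof of its negation.

```python
from itertools import product

# Hedged sketch of the countermodel heuristic for propositional logic:
# a formula is a tautology iff no valuation falsifies it, so refuting a
# candidate theorem only requires exhibiting one falsifying "model".
def find_countermodel(formula, variables):
    """Return a valuation falsifying `formula`, or None if it is a tautology."""
    for values in product([False, True], repeat=len(variables)):
        valuation = dict(zip(variables, values))
        if not formula(valuation):
            return valuation   # observed to be false: no proof search needed
    return None

# Candidate "theorem": (p -> q) -> (q -> p). Refuted by a countermodel.
converse_error = lambda v: (not (not v["p"] or v["q"])) or (not v["q"] or v["p"])
print(find_countermodel(converse_error, ["p", "q"]))   # {'p': False, 'q': True}

# Genuine tautology: p or not p. No countermodel exists.
print(find_countermodel(lambda v: v["p"] or not v["p"], ["p"]))   # None
```

    For first-order logic the search space of models is infinite, so this only remains a heuristic there, which is the caveat in the paragraph above.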

    Syntactically however, G is merely the fixed point of an alternating fixed-point equation, whose iteration causes an increase in the number of negation signs at the front of the fixed point. It is structurally similar to a coalgebra representing a potentially infinite list of oscillating boolean values - objects that are normally called "impredicative" rather than "self-referencing" in the literal sense implied by Hofstadter.
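
    The "potentially infinite list of oscillating boolean values" can be sketched coinductively as a stream; here is my own illustration as a Python generator, where each unfolding adds one more "negation" in front.

```python
from itertools import islice

# Hedged sketch: a coalgebraic stream of oscillating booleans, analogous
# to iterating an alternating fixed-point equation. Each step unfolds one
# more layer: b, not b, not not b, ...
def oscillate(b=True):
    while True:
        yield b
        b = not b

print(list(islice(oscillate(), 6)))   # [True, False, True, False, True, False]
```

    The stream has no "self" to refer to; it is simply the greatest fixed point of one unfolding step, which is the impredicative reading mentioned above.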
  • A very basic take on Godel's Incompleteness Theorem
    The book "An Introduction to Godel's Theorems" by Peter Smith would be my recommendation for an eager beginner.

    Alternatively, Wikipedia's page on the Halting Problem conveys, in the simplest and quickest fashion, nearly all of the relevant computational and philosophical implications of Godelian incompleteness without the torture of Godel numbering.

    In fact, the popular misconception of Godel's theorem as amounting to saying that "a sentence is 'true', assuming that it is unprovable" is a more apt (but still ultimately misleading) conception of a weaker form of incompleteness that is directly implied by the undecidability of the Halting problem, namely that there cannot be a sound and complete axiomatization of the natural numbers, since that would imply the existence of a computation on the natural numbers that halts if it doesn't halt.
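
    The "halts if it doesn't halt" construction is the classic diagonal argument, which can be sketched directly (my own illustration, not from the post): given any claimed halting decider, build a program that does the opposite of whatever the decider predicts about it.

```python
# Hedged sketch of the diagonal argument behind the Halting Problem:
# given any claimed halting decider, construct a program that halts
# exactly when the decider says it doesn't. Hence no decider can be
# correct about every program.
def make_diagonal(claimed_halts):
    """Build a program that defeats the claimed decider."""
    def diagonal():
        if claimed_halts(diagonal):   # decider predicts: "it halts"
            while True:               # ...so loop forever instead
                pass
        # decider predicts "it loops", so halt immediately
        return "halted"
    return diagonal

# Any concrete guess the decider makes about `diagonal` comes out wrong:
pessimist = lambda prog: False        # guesses "never halts"
diag = make_diagonal(pessimist)
print(diag())                          # halts, refuting the guess
```

    No Godel numbering is needed; the contradiction is carried entirely by the program's ability to consult the decider about itself.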

    Godel's actual proof, that doesn't appeal to a model or intended interpretation of the axioms of Peano Arithmetic, is a stronger result since it directly proves syntactical incompleteness without appealing to any notion of soundness. His proof can be intuited geometrically and trivially, as showing that the forest of mathematical proofs that constitutes Peano Arithmetic doesn't describe the background that isn't part of the forest of Peano Arithmetic. Is that really an interesting and deep result?

    So the heuristic argument of there being a "true but unprovable statement" that the public use as a heuristic for understanding or remembering Godel's theorem is very misleading. Their catchphrase is more suitable as a moderately misleading heuristic for remembering the proof of the Halting problem (which also doesn't refer to models, but to external assumptions concerning halting conditions, that with much charity might be interpreted as comprising a weak model of something or other).

    Many mathematicians and logicians cannot themselves be bothered to master Godel's incompleteness proof, for there isn't any payoff for doing so, and will probably content themselves with a technical understanding of the weaker version I mentioned above that is straightforward to prove and remember, and loses practically nothing.
  • Kripke's skeptical challenge
    I am probably not understanding this at all correctly because its too technical for me but it sounds like its bolstering the Kripke's skepticism rather than really solving anything.Apustimelogist

    Alternative foundations for general mathematics and computing can't solve Kripkean skepticism in the sense of providing stronger foundations that rule out unorthodox rival interpretations of mathematical concepts - but they can partially dissolve the skepticism by

    1) Refactoring the principles of logic so as to accommodate finer-grained distinctions in mathematical logic, particularly with regards to a) distinguishing intensional vs extensional concepts, b) distinguishing between the process of constructing data and communicating it versus the process of receiving data and deconstructing it, and c) distinguishing between the various meanings of finitism that classical logic conflates.

    2) Weakening foundations so as to assume less to begin with. This replaces skepticism with semantic under-determination. E.g., if "plus" is considered to be a finitary concept that does not possess a definite a priori meaning to begin with, then Kripkean doubt about its meaning doesn't make as much sense.

    In summary, a good logic from the perspective of computer science describes the process of mathematical reasoning intuitively and practically in terms of a finite series of interactions between agents playing a partially understood multi-player game, in which no agent is the sole authority regarding the meaning and rules of the game, nor does any agent have omniscient knowledge regarding the eventual outcome of following a given strategy.
  • Kripke's skeptical challenge
    Kripke allows that mathematicians can adequately specify the rules of addition. That's not being called into question.frank

    That depends on the sense of adequacy you are referring to. The question is, how can an intensional definition of addition such as an inductive definition, that is finitely specified and only provides an inductive rule for performing a single step of computation, imply an unambiguous and extensionally infinite table of values? As Kripke himself pointed out (IIRC), as the numbers to be added get very large, there is increasing uncertainty as to what the meaning of a "correct" calculation is, for finite computers and human beings can only cognize truncations of the plus function. And a "gold standard" for the extensional meaning of addition up to a large enough arguments for most human purposes hasn't been physically defined by a convention as it has for the meaning of a second of time.