• No Self makes No Sense
    I don't think someone could be very functional having their self identity undermined as we see in cases of amnesia and dementia. It is useful to keep track of who you are and exhibit a consistent personality.Andrew4Handel

    In other words, the "self" is a useful idea with practical utility. But does that warrant the promotion of the "self" to the status of ontological primacy?
  • Evidence of Consciousness Surviving the Body
    Most everything you believe has come from the testimony of others, if you doubted most of it you would be reduced to silence. Professors, books, language, science was given to you by others, you probably had little to do with creating the information yourself.Sam26

    I'm specifically referring to the trustworthiness and reliability of the verbal reports of experimental subjects in psychological experiments where they are tasked with giving self-reports, possibly including explanations for their own behaviour. A subject's testimony taken at face value can be terribly misleading when it comes to understanding the actual underlying proximal and distal causes of the subject's verbal behaviour, for there is no reliable mapping between a person's use of sentences and their psychological state, and people don't possess introspective access to the causes of their own behaviour.
  • Evidence of Consciousness Surviving the Body
    I don't even trust personal testimonies when it comes to deciding the veracity of the humdrum theories of behavioural psychology, let alone for deciding the veracity of pseudo-scientific mystical hypotheses.

    That said, I have sympathy with the sentiments expressed by beliefs in "life after death"; not in the sense of their constituting an empirically contingent and testable scientific hypothesis, but because the opposite notion of 'eternal oblivion' is equally nonsensical.
  • The Law of Non-Contradiction as a theorem of Dialectical Logic
    These are very interesting remarks. Sadly, my knowledge of dynamic logics is sorely lacking at this point in time, but I think dynamic logics at best can only have partial applications; for there are many cases where we need to use a static logic. And it is in these scenarios that the Liar Sentence arises.Alvin Capello

    Certainly the semantic contradiction arises when the meaning of the liar sentence is analysed statically, but nothing necessitates the adoption of a tenseless logic in either the construction or analysis of liar sentences.

    Indeed the construction of all proofs is a dynamic process over time. In the case of the liar sentence, a typical verbal explanation of the paradox involves alternately saying "I am telling the truth about my lying, therefore I am lying about my lying, therefore I am telling the truth about my lying..." and so on. What is static in the construction of this paradox? Isn't the insistence that the liar sentence must be understood statically the source of the contradiction?
  • The Law of Non-Contradiction as a theorem of Dialectical Logic
    Surely some seeming contradictions can be resolved, but I don’t think this is true of all of them. For instance, I don’t think the Liar Sentence and other similar semantic paradoxes have any consistent solutions, so these are radically contradictory objects on my view.Alvin Capello

    Of course, liar paradoxes are only contradictions if their truth is considered to be atemporal; otherwise these contradictions are avoidable using a tensed logic in which every sentence of a proof is temporally indexed according to the moment of its creation, wherein the only distinction between premises and conclusions is that the latter are constructed after the former.

    In such a tensed logic, liar paradoxes of the form P(t) => ~P(t+1) are consistent and only the simultaneous derivation P(t) and ~P(t) is inconsistent.
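
    A toy sketch of such a tensed reading in Python (purely illustrative; the 'truth values' are just a flip-flop over time, so no index ever carries both P and ~P):

```python
# Toy model of a tensed liar sentence: the value assigned at step t+1
# is the negation of the value at step t, i.e. P(t) => ~P(t+1).
# No single index t ever receives both P(t) and ~P(t).
def liar_trace(initial, steps):
    values = [initial]
    for _ in range(steps):
        values.append(not values[-1])
    return values

print(liar_trace(True, 4))  # [True, False, True, False, True]
```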
  • What are the First Principles of Philosophy?
    There's a difference between using words to denote objects or relationships between objects in the world, and the objects and relationships between objects in themselves that those words represent. The Law of Non Contradiction is thought to be violated only because it can be shown that a contradiction in terms of the relationships between the symbols (i.e. words) that point the objects, can be true. This results for the false equivalence that the symbols that represent objects and the objects themselves are the same, or rather, have the same logical form which they do not. In an actualized sense, nothing can ever exist and not exist at the same and in the same respect. However, in a state of potentiality, the actualized possibility of x and -x exist at the same time and in the same respect, according to my philosophy anyways.TheGreatArcanum

    I agree that contradictions are properties of sentences rather than of matters-of-fact, for I cannot understand what could be meant by contradictory matters of fact. I would also say the same about truth, for I cannot fathom a false matter-of-fact. The principle of non-contradiction is certainly critical to the practice of science, but I see neither justification nor practice of non-contradiction when it comes to philosophy.
  • What are the First Principles of Philosophy?
    Its not about logical atomistic consistency. Wittgenstein should not be in your list sime.Gregory

    I wasn't specifically thinking of logical atomism; I was referring to his consciously self-refuting Tractatus, as well as the later Wittgenstein's Philosophical Investigations, which isn't logically consistent. For example, his apparent reliance on the imagination to refute the idea of a private language. This isn't a criticism, it's just a general feature of philosophical arguments. For many other examples see Graham Priest's "Beyond the Limits of Thought".
  • What are the First Principles of Philosophy?
    PNC is either rejected or violated in the works of many philosophers, e.g. Heraclitus, Kant, Hegel, Wittgenstein... There isn't much evidence to support the logical consistency of philosophy, especially in epistemology. If philosophy is considered to be primarily a normative activity, this doesn't matter. The loss of PNC isn't a great blow, it just means that philosophers are unstable hypocrites with alternating beliefs.
  • The Reality of Time
    If we are constantly changing our opinions as to the facts of the past on the basis of new information, then why should we believe that the past is real and immutable?

    Suppose that in 2030 society obtains decisive historical evidence concerning the identity of Jack The Ripper in 1888, whereby historians thereafter claim that the riddle regarding Jack the Ripper's identity was solved in 2030. Why should we believe that the actual facts regarding the identity of Jack the Ripper in 1888 existed before 2030? What does this assertion add to our calendar-indexed observations?
  • The Diagonal or Staircase Paradox
    The deeper problem concerns the fact that in Cartesian co-ordinates the notion of vertical and horizontal Euclidean lengths is incommensurate with the notion of diagonal Euclidean lengths; hence the reason why a hypotenuse that is diagonal to the Cartesian axis is assigned an irrational number such as Sqrt(2), which of course isn't a quantity but a non-terminating algorithm for generating a Cauchy sequence.
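
    To illustrate the point that Sqrt(2) names a generating procedure rather than a completed quantity, here is a minimal sketch in Python using exact rational arithmetic; the Newton iteration is merely my choice of algorithm, not the only one:

```python
from fractions import Fraction

def sqrt2_sequence(n_terms):
    """Newton iteration x <- (x + 2/x)/2, producing a Cauchy sequence
    of rationals whose squares approach 2; no term of the sequence is
    ever an exact square root, since no rational squares to 2."""
    x = Fraction(1)
    terms = []
    for _ in range(n_terms):
        x = (x + 2 / x) / 2
        terms.append(x)
    return terms

terms = sqrt2_sequence(5)
print(float(terms[-1]) ** 2)  # close to 2, but every term is only an approximation
```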
  • Belief in nothing?
    As 180proof mentioned, if atheism is identified with the absence of belief, then it avoids the 'truth-by-correspondence' problem concerning beliefs that have non-existent referents. However, this is arguably not the case for theism, which your argument can be turned around to defend, as follows:

    Premise 1 : All beliefs have referents.
    Premise 2: Theism is a belief.

    Conclusion: The referent of Theism exists, and therefore theism is true.

    Personally, I find this argument acceptable, because the idea that a non-existent object can cause belief-behaviour is scientifically unacceptable, leading me to the conclusion that all beliefs are vacuously true in the epistemological sense of truth-by-correspondence of language to something. Hence any substantial notion of truth cannot be in terms of "truth by correspondence" of language to reality, but in terms of ethics and cultural convention.

    A mistake of atheists is to assume that the object of theistic beliefs is universal, for there are many potential physical causes of religious behaviour and speech.
  • Disproving game theory.
    Well obviously the reason why people enjoy playing Chess is that its outcomes are uncertain due to players' bounded rationality and tendencies to make mistakes, assuming that the skill difference between opponents is roughly even. This is especially the case for the variant Chess960, which, in being randomly initialised, diminishes the role of opening theory. The Chess community is well aware that the rules of Chess have to evolve if Chess is to remain an interesting non-predetermined spectacle. Perhaps the game will continue to fragment into more and more alternatives. Personally, I think there are more interesting board games to professionalise.
  • The Notion of Subject/Object
    Irrespective of dualism, it isn't clear in any case what is meant by physical interaction, due to conflicting opinions as to the metaphysics and existence of causality. If one goes so far as to deny the literal existence of counterfactuals then interaction isn't even a substantive concept. Therefore ontological dualism and more generally, ontological pluralism, don't necessarily imply interaction problems, but only that different descriptions of the world cannot be inter-translated.
  • Entropy can be reset to a previous or to an initial state
    All working physicists informally appeal to "directness" whenever they make an inference, even though Physics possesses no theory of directness. For otherwise a physicist could not claim to learn anything from an experiment, nor for that matter could he find the sentences of physics intelligible. So although directness/indirectness aren't themselves defined in terms of physical criteria, the converse is true.

    Exactly the same issue applies to language in general, for we are taught the meaning of words either through ostensive definition, or by verbal definitions that implicitly appeal to earlier ostensive definitions for their intelligibility. And yet we have no linguistic criteria for translating verbal definitions into ostensive definitions and vice versa, for languages are only publicly defined up to verbal criteria.
    In line with language in general, the semantics of Physics is both under-determined and redundant; one Physicist's "natural" object is another Physicist's "metaphysical garbage", because they might each understand physics using different semantic foundations that are rooted in different ostensive definitions.
  • Everything true vs. nothing true
    Everything is 'true' is a position I independently arrived at, without knowing that this epistemological position already existed under the banner of Trivialism.

    Essentially, trivialism (at least as I am using the term) says that every belief is seen to be true once the object of the belief is identified with its immediate causes. Trivialism is a corollary of semantic deflationism and presentism, which denies that a prediction can actually refer to a future event, since the future, being a mere indexical, does not exist in a literal sense.

    For example, suppose that Alice becomes convinced that she will win the lottery and buys a ticket. According to the causal theory of reference, her belief that she will win the lottery is nothing other than a report referring to her immediate situation. If in fact she doesn't win the lottery, then according to trivialism she is only said to be "wrong" by reinterpreting the object of her belief to refer to the results of the lottery via a post-hoc revision of linguistic convention.
  • Do colors exist?
    The collective use of language constitutes an inconsistent convention, for everybody uses the first-person pronoun to refer to a different subject. This is the central oversight in debates over idealism and realism that entirely ignore who is making an ontological commitment, such as the existence of colour.

    Ordinarily, if I assert "I am seeing a red apple", the meaning of the sentence cannot be decomposed into two independent assertions, namely one of a subject and another of an object, as is the case when I assert that someone else is seeing a red apple. As far as I'm concerned, red, i.e. my red, exists independently of other people's perceptions of my red, and they cannot possibly know this fact, for whenever they talk about red they are referring to their red. And the situation isn't improved by talking only about "objective" optical properties.

    Therefore consider the irrealist alternative; namely that ontological disagreements are partly the result of our collectively inconsistent use of language.
  • Do colors exist?
    'Phenomenal red' is an estimator of 'optical red' in common situations. No necessary relationship between phenomenal colour and optical qualities can be defined nor established, due to the impossibility of exhaustively specifying and testing their relationship.
  • Is counterfactual reasoning always faulty?
    First start with the notion of material-implication:

    Classically, A=>B means that if A is true then B is true, and is equivalent to NOT A is true OR B is true.
    Constructively, A=>B only means that a proof of B can be derived from a proof of A, and says nothing about the actual truth or provability of A or B.

    The classical interpretation of material implication would say that you cannot be a lizard because you don't like flies, which shouldn't be problematic to assert, assuming that we live in a closed world containing a finite number of lizards that we can count in order to check their taste for flies.

    But in the event we live in an open world containing a potentially infinite number of lizards, the classical interpretation runs into a problem in that the truth of A=>B can never be verified, implying that A=>B can never be asserted. And yet we do use conditionals without assuming that we live in finite closed worlds, which indicates our actual use of material implication is constructive rather than classical. For example, our definition as to what a lizard is includes the fact it eats flies, and therefore A=>B becomes somewhat tautologous.
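
    As a sketch of the closed-world reading in Python, with a made-up finite domain (the creatures and their attributes are illustrative inventions):

```python
# Closed-world check of "if x is a lizard then x likes flies":
# with a finite, fully listed domain, the classical conditional
# NOT A OR B can be verified by exhaustive inspection.
def implies(a, b):
    return (not a) or b  # classical material implication

domain = [
    {"lizard": True,  "likes_flies": True},
    {"lizard": True,  "likes_flies": True},
    {"lizard": False, "likes_flies": False},  # not a lizard: vacuously fine
]

claim_holds = all(implies(x["lizard"], x["likes_flies"]) for x in domain)
print(claim_holds)  # True
```

    In an open world no such exhaustive sweep is available, which is exactly where the classical reading stalls.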
  • It's time we clarify about what infinity is.
    Because I have not seen any resolution to these questions, I would not say that a "rule" has any existence at all.Metaphysician Undercover

    I would say that the laws of Mathematics and Logic are normative principles that regulate conduct so as to make the world easier to describe and manipulate.

    These normative principles cannot be given a logical justification on pain of circularity, rather their justification stands or falls with their general overall usefulness.
  • Against Fideism
    In your view, what is a belief, and how should the object of a person's belief be ascertained?

    Should the object of a person's belief be identified with the physical causes of their belief, in which case every belief is seen to be necessarily true when it is physically understood, or should the object of a belief be decided impersonally by linguistic convention so as to reflect the normative values of the person's community?
  • It's time we clarify about what infinity is.
    Wait so you just made that up? It's not a real thing? You had me convinced. Why not mod out the reals by the trivial ultrafilter and see what you get? What do you get?

    Why are there so many die-hard constructivists on this forum? If you go to any serious math forum, the subject never comes up, unless one is specifically discussing constructive math. You never see constructivists claiming that their alternative definitions are right and standard math is wrong. Only here. It's a puzzler.
    fishfry

    Well obviously from a pure mathematics perspective, every proof in ZFC is considered a construction, in contrast to Computer Science, which has traditionally had a more natural affinity with ZF for obvious reasons, and there is a long historical precedent for using classical logic and mathematics. As a language, there is of course nothing that classical logic cannot express, in virtue of being a "superset" of intuitionistic logic; but classical mathematics founded upon classical set theory IS a problem, because it is less useful, is intuitively confusing, false or contradictory, lacks clarity and encourages software bugs.

    In my opinion, constructive mathematics founded upon intuitionistic logic is going to become mainstream, thanks to its relatively recent exposition by Errett Bishop and the Russian school of recursive mathematics. Constructive mathematics is practically more useful and less confusing for students in the long term. Consider the fact that the standard 'fiction' of classical real analysis doesn't prepare an engineering student for working in industry, where he must work with numerical computing and deal with numerical underflow.

    The original programme of Intuitionism on the other hand (which considers choice-sequences created by the free-willed subject to be the foundation of logic, rather than vice versa) doesn't seem to have developed at the same rate as the constructive programme it inspired. However, it's philosophically interesting imo, and might eventually find an applied niche somewhere, perhaps in communication theory or game theory.

    BTW, I'm not actually a constructivist in the philosophical sense, since the constructive notion of a logical quantifier is too restrictive. In a real computer program, the witness to a logical quantifier isn't always an internally constructed object, but an external event the program receives on a port on which it is listening. What's really needed is a logic with game semantics. Linear logic, which subsumes intuitionistic and classical logic, is the clearest system I know of for expressing their distinction and their relation to games.

    As for a trivial ultrafilter, it's an interesting question. Perhaps a natural equivalence class of Turing Machine 'numbers' is in terms of their relative halting times. Although we already know that whatever reals we construct, they will be countable from "outside" the model, and will appear uncountable from "inside" the model.
  • It's time we clarify about what infinity is.
    Now that's something I've never run across. Both too big and too small at the same time. But it takes a weak form of the axiom of choice to have a nonprincipal ultrafilter, which is needed to construct the hyperreals. Do constructivists allow that?fishfry

    emmm......... Nope :) for the reason you've just mentioned. For where is the algorithm of construction? Of course, the trivial principal ultrafilter is permitted, which then produces a countable model.

    By "constructive hyperreal" i was merely colloquially referring to using functions such as f(n)=1/n as numbers according to some constructive term-oriented method that didn't involve assuming or using cauchy limits.
  • It's time we clarify about what infinity is.
    Now obviously, any countable list of Provably Cauchy-Convergent Total Functions (PCCTFs) is unfinished, in the sense that a further PCCTF that is not already in the list can be built via a diagonal argument. No problem: we just shuffle along the existing enumeration to add the new function into the existing list. But then doesn't this contradict the notion that our previous list was complete?
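
    A finite sketch in Python of the diagonal-and-shuffle step, representing binary sequences as functions from indices to digits (the particular sequences below are arbitrary examples of my own):

```python
# Diagonalise a (prefix of a) list of binary sequences, each given as
# a function from index to digit, then splice the new sequence back
# into the list -- "shuffling along" the enumeration.
def diagonal(seqs):
    return lambda n: 1 - seqs[n](n)  # differs from seqs[n] at index n

seqs = [lambda n, b=b: (n + b) % 2 for b in range(4)]
d = diagonal(seqs)
print([d(i) != seqs[i](i) for i in range(4)])  # all True: d escapes the list
extended = [d] + seqs  # the enumeration absorbs the new sequence
```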

    There seems to be an ambiguity between two definitions of completeness. If Dedekind completeness is understood to be an axiom of construction, then it is trivially satisfiable, in the sense that the axiom itself can be used to assist in the generation of a real from an existing list of real numbers. After all, if there wasn't a countable model of the axioms of the reals, then they would be inconsistent, since second-order quantification can always be interpreted as referring only to the sets constructively definable in first-order logic.

    On the other hand, if completeness is understood to refer to a finished list of PCCTFs, our list is not complete in that sense.

    So it seems to me that countable models of the reals, both first and second order, are especially useful (not to mention the only models we use in practice) for clarifying the relationship between Dedekind completion, Cantor's theorem and ordered fields.

    If one abandons the second-order completeness axiom, and possibly Cauchy convergence, then there are fewer constraints in the construction process, allowing one to define a potentially larger field of computable numbers that includes infinitesimals, as is done with the (constructive) hyperreals, and one can even include computable 'numbers' that aren't provably total. In that case one's countable list is now finished, but there are no more numbers to be added, because the diagonal argument can no longer be used to construct a new number, in virtue of one's list including non-numbers that aren't guaranteed to halt on their inputs.

    So I hope this has led to a satisfactory conclusion.
  • It's time we clarify about what infinity is.
    The computable numbers are countable. That's because the set of Turing machines is countable. Over a countable alphabet there are countably many TMs of length 1, countably many of length 2, etc.; and the union of countable sets is countable. QE Freaking D.fishfry

    ?? Perhaps I should have been clearer from the beginning, but I took for granted everyone's understanding that a computable number refers (in some way) to a computable total function. Apologies if that wasn't clear. For surely you appreciate that the computable total functions aren't countable?

    The computable total functions are a proper subset of the computable functions, which also contain partial functions, i.e. functions that do not halt on some inputs.

    It is true to say that the whole set of computable functions is countable, for the reasons you've sketched. It is not true to say that the set of computable total functions is countable, for we cannot solve the halting problem. Hence the reason why we say the computable numbers are sub-countable: the only way we could 'effectively' enumerate the computable numbers is to simulate every Turing machine and wait forever, meaning that any 'candidate enumeration' of our computable numbers constructed after waiting a finite time is also going to contain computable functions that aren't total and hence are not numbers.
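
    The brute-force simulation just described can be sketched in Python; since real halting behaviour is undecidable, the 'machines' are modelled by made-up step budgets (None meaning the machine never halts):

```python
# Dovetailing sketch: at round t, run the first t "machines" for t
# steps each and record which have halted so far. Any finite stage
# yields only a provisional candidate enumeration.
def dovetail(budgets, rounds):
    halted = set()
    for t in range(1, rounds + 1):
        for i, budget in enumerate(budgets[:t]):
            if budget is not None and budget <= t:
                halted.add(i)
    return halted

print(dovetail([3, None, 1, 5], 4))  # {0, 2}: machine 3 only shows up later
```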

    For the constructivist, this "subcountability" is all that 'uncountability' means. It simply means that we can never construct a total surjective function from the natural numbers onto the computable numbers. It doesn't mean in any literal sense that we have more computable real numbers than natural numbers.

    The sequence of n-th truncations of the binary expansion of Chaitin's number is a Cauchy sequence that does not converge to a computable real. End of story. Then you say, "Oh but that sequence isn't computable," and I say, "So freaking what?" and this goes on till I get tired of talking to yet another disingenuous faux-constructivist.fishfry

    We have to be careful there. We can run every Turing Machine and at any given time create a bar-chart of the ones which have halted, and this histogram comprises a sequence of computable functions whose limit isn't a computable function. To my understanding this sequence of functions isn't Cauchy convergent, for we cannot construct a bound on the distance between successive histograms. Let's not forget that there are an infinite number of computer programs of every size.

    Compare this situation to a computable total function f(n) representing the "values" of Goldbach's Conjecture; let's say that f(n) = 0 if every even number less than n is the sum of two primes, and otherwise f(n) = 1. Here we can also compute the individual digits in finite time. If GC is decidable, i.e. GC OR ~GC, then f(n) is Cauchy convergent to either 0 or 1. But if GC isn't decidable, then, as with Chaitin's constant, f(n) doesn't have a Cauchy convergent limit, even though f(n) is a computable total function.
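
    A minimal sketch of this f in Python, with a naive trial-division primality test (the helper names are my own, and I restrict attention to even numbers from 4 upward, since Goldbach's conjecture is stated for even numbers greater than 2):

```python
def is_prime(k):
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

def sum_of_two_primes(e):
    # Is the even number e a sum of two primes?
    return any(is_prime(p) and is_prime(e - p) for p in range(2, e - 1))

def f(n):
    """0 if every even number 4 <= e < n is a sum of two primes, else 1."""
    return 0 if all(sum_of_two_primes(e) for e in range(4, n, 2)) else 1

print([f(n) for n in (10, 100, 1000)])  # [0, 0, 0]: no counterexample yet
```

    Each f(n) is computed in finite time, yet nothing computed so far settles whether the sequence stays at 0 forever.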

    Therefore, in order to know that one has constructed a complete and ordered field of computable numbers, one must only use a set of provably Cauchy-convergent computable total functions, for which every Cauchy-convergent sequence of these functions is also provably Cauchy-convergent.
  • It's time we clarify about what infinity is.
    Too few, clearly. There are only countably many of them.
    ...
    And no countable ordered field can be complete. It's a theorem.
    fishfry

    The computable total functions are sub-countable. An enumeration of all and only the constructively convergent Cauchy sequences isn't possible, as this is equivalent to deciding every mathematical proposition. Nevertheless we can construct a countable enumeration of a proper subset of the computable total functions, namely the provably convergent Cauchy sequences with locatable limits, which collectively constitute a complete and ordered field, where by "complete" we mean with respect to a constructive least-upper-bound principle.
  • It's time we clarify about what infinity is.
    The constructive reals aren't complete because there are too few of them, only countably manyfishfry

    Too few... or too many? The subset of computable total functions that correspond to the provably convergent Cauchy sequences forms a countable and complete ordered field, which is a proper subset of the provably total functions.
  • It's time we clarify about what infinity is.
    After all this discussion, I'm starting to reject my claim that Zeno's paradox can be solved by our inability to count and measure things. I'll think about this more at a later time.Michael Lee

    Zeno's paradox is best solved by observing how one would practically demonstrate it. Such a demonstration requires one to repeatedly move an object along the same path, but ending the motion at the half-way point of the previously travelled distance and exclaiming "the object must have earlier travelled through this point".

    In other words, a demonstration of Zeno's paradox can only explain what an object's position is by destroying the object's motion; the demonstration shows that the construction of a position is incompatible with the construction of a motion, and hence is an intuitive demonstration of the Heisenberg Uncertainty Principle.

    In my opinion, Zeno was close to discovering this principle characteristic of Quantum Mechanics, purely from ordinary phenomenological arguments.
  • It's time we clarify about what infinity is.
    Well in my constructive understanding:

    The 'second-order' reals (as described via second-order logic) are also 'unique' from a constructivist perspective; for if the Axiom of Choice is rejected, then second-order quantification over the sets of reals is strictly interpreted as quantifying over the constructible sets of reals. Consequently, what we then have is a first-order countable model of the reals in 'second-order' disguise. The reason why the real number field is unique in this interpretation is because we are actually still working within first-order logic; and since the Ultrafilter Lemma isn't constructively acceptable, the Löwenheim–Skolem theorem for first-order logic that depends upon it fails. Therefore constructive first-order theories of the reals only possess models of countable cardinality. Consequently, there cannot exist models of the constructive reals that are "non-standard", thanks to Tennenbaum's theorem, which denies the existence of non-standard countable models that are recursive.

    From this constructive perspective, the semantic intuition behind CH is trivially correct: there are no subsets of R whose size is greater than N but less than R, simply because the real numbers are encodings of natural numbers (via Gödel numbering of the underlying computable total functions) and are therefore equinumerous with them. But alas, there only exists an effective algorithm for deciding the provably total functions, i.e. the provable real numbers, and hence there is no constructive proof that the number of provably constructive real numbers equals the number of constructible real numbers.
  • What is art?
    What about art semi-automatically generated by a neural network that in effect produces novel images with high artistic potential by interpolating the patterns that exist within large databases of artistic, natural and cultural images? Or that transfers the statistical qualities of an artist's style onto an arbitrary image to produce a novel 'painting' in that artist's style?

    Who is the artist here, and who owns the results?
  • It's time we clarify about what infinity is.
    In computer programs infinity is used as a free-variable that might later become bound to a random finite number. So it is a logical concept rather than a concrete number.
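
    For instance, in Python, math.inf commonly plays exactly this role of an initial bound that a later finite value may replace:

```python
import math

# `inf` acting as a free upper bound that any finite number can
# later undercut -- e.g. the usual initial value of a running minimum.
def running_min(xs):
    best = math.inf
    for x in xs:
        if x < best:
            best = x
    return best

print(running_min([7.2, 3.5, 9.1]))  # 3.5
```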
  • An hypothesis is falsifiable if some observation might show it to be false.
    Can you show that idea with practical example?Zelebg

    Dark Energy hypotheses in physics are currently the most fashionable example of why falsification isn't used in practice. Rather than considering the Hubble data on the speed of receding galaxies as refuting General Relativity, physicists instead 'fix' GR as being true by proposing new and (individually) untestable auxiliary hypotheses so that GR still 'works' in combination. In fact, to my understanding Dark Energy isn't even at the stage of being a well-defined 'hypothesis'.

    Auxiliary information also includes the trivial and taken-for-granted assumptions that your instrumentation is in 'full working order', that the laws of physics haven't changed since you began the experiment, that you aren't hallucinating, etc. etc. In short, no hypothesis is ever tested in isolation, and the auxiliary assumptions upon which the credibility of experiments rests aren't even exhaustively stateable, let alone formally stateable. Hence the reason why falsification isn't a good model of science or epistemic judgements in general. It's rooted in the archaic notion of logical Atomism - the idea that language has legible denotational semantics where the truth of a proposition stands or falls in isolation of the truth of every other proposition. But this is only true in toy-world scenarios described in an artificial language.
  • An hypothesis is falsifiable if some observation might show it to be false.
    It's a complex, ill-posed and frankly outdated assertion. Firstly, an observation O can only materially entail the contradiction of a hypothesis H in a closed finite world. For in an open world, the meaning of the material implication O => ~H isn't empirically reducible to observations, and is instead an auxiliary hypothesis, A, which isn't itself entailed by some other observation, on pain of infinite regress. So in an open world we have A => (O => ~H), and hence O => (~A OR ~H).
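
    The final step is a propositional equivalence, which can be checked mechanically with a throwaway truth-table sweep in Python:

```python
from itertools import product

def implies(p, q):
    return (not p) or q  # classical material implication

# Exhaustively verify that A => (O => ~H) is equivalent to
# O => (~A OR ~H) over all eight truth assignments.
agree = all(
    implies(a, implies(o, not h)) == implies(o, (not a) or (not h))
    for a, o, h in product([False, True], repeat=3)
)
print(agree)  # True
```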
  • My own (personal) beef with the real numbers
    - We know that we cannot enumerate all halting Turing machines, so for every supposedly complete list of halting Turing machines I can find another halting Turing machine that is not in the list.
    Does that mean that the set of halting Turing machines in uncountable? No! It only means that there is no way to enumerate that list!.
    Mephist

    Yet saying that there is no way to enumerate the total computable functions is somewhat ambiguous, for as previously mentioned we can use brute force to simulate every TM on every input and enumerate on-the-fly the algorithms that have so far halted on all their inputs. Furthermore, the classical logician with realist intuitions will go further and argue that there is a definite matter-of-fact as to the set of computable functions, and will therefore believe in the independent existence of a 'finished' enumeration, interpreting the limitations of finite constructive arguments to produce such an enumeration as epistemic limitations rather than metaphysical limitations. They will simply appeal to the Axiom of Choice to claim the independent existence of completed enumerations of the computable total functions.

    As we said earlier, this full enumeration cannot be used if the diagonal function d(x) is to be both computable and total; if the realist diagonalizes the hypothetical full enumeration of computable total functions, then d(x) cannot be computable, for the enumeration ensures that d(x) is total. That d isn't computable is obvious, since it involves running nearly every Turing Machine for an infinite amount of time and then diagonalizing, meaning that its Gödel number g would be infinitely long, and that evaluating d at its own Gödel number yields the absurdity d(g) = d(g) + 1.
  • My own (personal) beef with the real numbers
    We know that Cantor's Theorem concerning the cardinality of the power-set of the integers isn't a constructive proof, for, due to the Halting Problem, we cannot enumerate and diagonalise only the Turing machines representing the recursively enumerable sets; we must instead enumerate the larger set of all TMs.

    And yet, the entire countable set of TMs can be diagonalised to "prove" that the set of countable TMs is "uncountable", by dynamically enumerating the halting TMs so as to ensure the termination of the diagonal TM on each of its inputs; but in fact all that my proof of "uncountability" amounts to, with respect to Turing Machines, is the construction of an enumeration of Turing Machines arranged so that the diagonal Turing machine cannot be part of the enumeration. This is analogous to enumerating the odd numbers and then diagonalising them to construct an even number.
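    The odd-number analogy can be made concrete in a few lines of Python (a minimal sketch; the function names are mine):

```python
# Enumerate the odds, then "diagonalise" by adding 1 to the n-th entry.
# The result is always even, so it is guaranteed to be absent from the
# enumeration -- yet nobody concludes that the odd numbers are uncountable.
def odd(n):
    return 2 * n + 1

def diag(n):
    return odd(n) + 1  # differs from the n-th odd number, and is even

sample = [diag(n) for n in range(5)]
assert all(d % 2 == 0 for d in sample)           # never an odd number
assert all(diag(n) != odd(n) for n in range(5))  # differs along the diagonal
print(sample)  # [2, 4, 6, 8, 10]
```

The diagonal output escapes this particular enumeration by construction, not because the underlying set is too big to enumerate.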

    To spell out the difference, in the case of Cantor's Theorem the constructed enumeration of the sets of natural numbers is considered to be prior to the construction of the diagonal set, but in the case of my method, the enumeration of TMs was constructed via the construction of the diagonal function. In other words, selectively constructing a non-exhaustive infinite enumeration via a diagonal procedure isn't a proof that a bijection with the natural numbers doesn't exist under a different enumeration. And in the case of Turing Machines we know that such a bijection does exist.

    But this raises doubts about Cantor's original diagonal argument, for I might have been lucky enough with my original enumeration of TMs to produce a diagonal function without requiring any shuffling of the enumeration. Therefore Cantor's original argument isn't proof enough that the power-set of N is literally larger than N.
  • My own (personal) beef with the real numbers
    A proof as to why the real numbers are absolutely countable, in spite of all pretences to the contrary:

    1. Enumerate the undecidable set of total functions within the entire set of enumerable Turing Machines of one argument {f1(x),f2(x),f3(x),..}, by running every Turing Machine in parallel on each input x=1,x=2,..., and shuffling their enumeration over time as necessary, so as to ensure that fn is defined when run on input x=n.

    2. Define the Turing-computable total function g(n) = fn(n) + 1.

    Congratulations, you've "proved" that the countable set of Turing machines is "larger" than the countable set of Turing machines.
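    The two steps can be sketched as follows, with a hypothetical finite list of total functions standing in for the enumeration f1, f2, f3, ... (this is an illustration of the diagonal step only; nothing here actually enumerates Turing machines):

```python
# A made-up finite stand-in for the enumeration of total functions
# (0-indexed here for convenience).
fs = [lambda x: x, lambda x: x * x, lambda x: 2 * x + 1]

def g(n):
    """The diagonal function g(n) = f_n(n) + 1."""
    return fs[n](n) + 1

# g differs from every f_n at the argument n, so g cannot appear anywhere
# in the list -- even though everything in sight is countable.
for n in range(len(fs)):
    assert g(n) != fs[n](n)
```

This is the sense in which the "proof" above manufactures a function outside the enumeration without thereby showing anything about cardinality.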
  • Sleep Paralysis and Apparitional Experiences
    Firstly, there is the problem of confabulation, especially when subjects are pressed for more detail: are people accurately reporting their earlier experience, or are they improvising novel content over a vague memory? With memories, it's never clear which parts are authentic.

    Also, what are the necessary descriptive features of an apparition? Or is an apparition more of a hypothesis about experience?

    My dozen or so successful experiments with day-time and wake-induced sleep paralysis were personally interesting, but I have no idea how to report most of them; they were bizarre and my memory of them is vague. None of them involved aliens or apparitions. My favourite experience wasn't even visual, but more of a blind ecstatic trance accompanied by a feeling of moving at lightning speed and a whooshing sound like a jet engine. Ironically, the dullest experience was a high-definition visual OBE that lasted for about a minute and felt so real and ordinary that I briefly wondered whether I was actually awake and physically in my kitchen, but then realised I must have been asleep because the kitchen clock told the wrong time, plus there was a dizzy sensation of being pulled around on that mythical elastic cord that I never actually saw...
  • My own (personal) beef with the real numbers
    An important question is the relationship of the Axiom of Choice (AC) to the Law of Excluded Middle (LEM), for Classical Logic is normally distinguished from Intuitionistic Logic on the basis of the latter axiom rather than the former axiom. Furthermore, intuitionists often claim that AC is constructively acceptable by interpreting AC to refer to the very construction of a function, for intuitionists do not accept the existence of non-constructive functions. This is very confusing, because AC's natural role is to refer to an unspecified function for which we do not possess a constructive description. This situation arises all the time in computing when a program points to an externally provided input data-stream that the programmer cannot further describe.

    Now according to the SEP's article on the Axiom of Choice, AC implies LEM in the presence of two further axioms, namely Predicative Comprehension (PC) and Extensionality of Functions (EF). The former says that the image of every predicate applied to individuals is a set, whereas the latter says that every extensionally equivalent pair of sets has the same image under every Set Function.

    https://plato.stanford.edu/entries/axiom-choice/#AxiChoLog

    The author of the article proves that PC & EF => (AC => LEM), but then argues that whereas PC is constructively valid, EF isn't. See the article for the argument; I don't find his case against EF persuasive, at least as I understand it.

    In my current view, EF is also constructively admissible, implying that the precise difference between classical logic and intuitionistic logic is AC as much as it is LEM, which then cements the view that classical logic describes a game between two players, whereas intuitionistic logic describes solitaire.

    A further motive for my view (and indeed the most natural motive) is that classical logic involves sequents of the form (a AND b) => (x OR y), where it isn't known which of x or y is true, and where the negation of one implies the truth of the other. On the other hand, intuitionistic logic only involves sequents with a single conclusion, of the form (a AND b) => x. Thus there is indeterminism in the case of classical reasoning, but not in the case of intuitionistic reasoning.
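    For concreteness, the contrast can be written in standard sequent notation, where a comma to the right of the turnstile reads disjunctively (this is textbook sequent-calculus notation, not anything specific to the argument above):

```latex
% Classical sequents may carry several conclusions; the comma on the
% right-hand side of the turnstile behaves like OR, so neither disjunct
% need be individually determined:
a \land b \vdash x, y

% Intuitionistic sequents carry exactly one conclusion:
a \land b \vdash x
```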
  • My own (personal) beef with the real numbers
    I interpret the Axiom of Choice to be a 'prayer to nature' to send me the desired object ready-made. The axiom refers to my opponent's choices, which are not modelled in the formalism I am using.

    If I have no internal strategy for constructing a basis from my formalism of a vector space, then I am reliant upon nature sending me a basis, which I have no control over. But suppose nature never sends me a basis?

    Arguments between constructivists and classical logicians are caused by a fundamental disagreement about the nature of proof. The former equates proofs with fully-determined algorithms under the control of the mathematician, whereas the latter allows proofs to interact with nature in an empirically contingent and indeterminate fashion.

    Unfortunately, classical logicians are usually in denial about what they are actually doing. Instead of admitting that their notion of proof is empirically contingent and not internal, they insist their notion of proof is internally constructive in a transcendental platonic realm.
  • My own (personal) beef with the real numbers
    A non-computable real number r refers to a truly random infinite process, and yet the distinction between a truly random infinite process and a pseudo-random infinite process isn't finitely testable, since any finite prefix of r is computable. Since r cannot be finished, at any given time r can be equally interpreted as referring to an under-determined pseudo-random process. Yet any process we specify ourselves is fully determined. Therefore r can only be interpreted as referring to a process of nature that we are observing but that we ourselves haven't specified and have only incomplete knowledge of and control over.

    Therefore when a physicist makes the observation x = 0.14 +/- 0.0001, he could equally be described as stating an interval of rational numbers or as stating an interval of real numbers. If this sounds wrong, "because the real numbers are uncountable, whereas the rational numbers are countable", recall Skolem's Paradox: the set of real numbers possesses a model in which it is countable. The only important thing to know is whether the physicist fixed the result or measured it, for constructing a certain number is different from measuring an uncertain number - a difference that isn't easy to express in either classical or constructive mathematics.
  • Why we don't live in a simulation
    Unless 'Living within a simulation' is defined in such a way as to be falsifiable, the hypothesis is meaningless. And yet it is imaginable that the falsification of any particular simulation hypothesis can be simulated under an alternative simulation hypothesis. Therefore it doesn't make sense to ask the question in a general absolute sense.