• Epistemology versus computability
    So, if I understand you aright, you claim that I can be certain I can flip an omelet, and yet neither believe nor disbelieve the statement: "I can flip an omelet".Banno

    Let me flip the question around on you. Can you state all that you do when flipping an omelet?
  • A Regressive Fine Tuning Argument
    Sure. Paradigm shifts illustrate that even our most "certain" beliefs are subject to revision. So you may be "certain" that the universe wasn't created because of...well, science, I presume. As I said I don't see any overt contradictions there. You still haven't broached that one.Pantagruel

    To my understanding, paradigm shifts occur when privileged statements and techniques at the centre of a research paradigm get revised. It is rare that a statement or technique is part of the core of a scientific research program, never mind things as far removed from core scientific theory as creation myths. Let's focus on statements, as they are what is relevant here. The revision of a privileged statement seems to occur when the range of phenomena over which that statement is accurate is limited by some discovery which cannot be explained well by other means.

    I see little to no relation between the privileged statements at the core of scientific research programs and creator hypotheses. A relation would require that creator hypotheses have content amenable to scientific study, and I do not believe they do. In that regard they are consistent with the claim, but they also give me no reason to believe it. If it were stipulated that creator hypotheses were indeed related to the privileged statements at the core of scientific research programs, and were in conflict with them, this would be evidence of the falsehood of the creator hypotheses. In general I would not grant even plausibility to any creator hypothesis if it were treated with the pedantry required of scientific claims.

    A counterexample to the claim that "X is consistent with Y" entails "X may reasonably be believed given Y" is a pair of irrelevant statements, "apples are sometimes green" and "pigs are sometimes quite pink"; I am not in the business of believing apples are sometimes green because pigs are sometimes quite pink. Consistency alone won't do for justification.

    What I did note was Popper's (correct) position that you can't account for novel hypotheses by evidence, because you would be in an infinite regress.Pantagruel

    The most common example of a paradigm shift that motivates this intuition is the shift from Newton's model of gravitation to Einstein's; the two theories are quite far apart conceptually, and it would be implausible to find sufficient hints toward general relativity in Newton's model. What happened here (per Rovelli's book The Order of Time) is that Einstein noticed that special relativity falls out of a peculiarity of Maxwell's equations, which break down when attached to a moving coordinate system; this motivated the relativity principle of special relativity, which was then localised in general relativity and related, through the equivalence principle, to massive objects.

    Per the example, hypothesis generation occurs through a consideration of both theory and experiment. Theory suggests networks of coherent statements (perhaps under some motivating assumptions and framing devices); new theoretical research brings out new implications or reveals blind spots in the network, while experiments provide interesting measurements. New experiments test theories, provide phenomena for theory to account for at a later date, and constrain which theoretical implications are plausible given the experiment. The two can be done in tandem and interweave (as is typical in "series of experiments" theory-generating papers in psychology).

    The majority of (declarative) knowledge does not work like science as regards theory construction and experiment, of course. For this reason, the stipulated properties of scientific methodology are likely not as relevant for non-scientific knowledge; we don't constantly run research programs in the first person, or in our friend groups. What remains, for me at least, is a network of items of (declarative) knowledge which are linked by argument and evidence in a broader sense.

    In that regard, I find it implausible that the universe was created by beings with inconsistent properties, or with properties that have sufficient tension with the above scientific or more informal networks, or elaborations/relations using the two. Creator hypotheses lie in that hinterland of (the conceptual consequences of) scientific and non-scientific claims taken together.

    A being "outside the universe" would not exist, as the universe is all that is; a transcendent being would similarly be outside the universe; and a transcendent and immanent being is inconsistent (but wait, it just has transcendent and immanent descriptions without having parts, blah blah...). The stories of holy books have a terrible habit of getting dates wrong, are wrong on the geology, and so on; their creation myths should be treated as simply allegorical, nice fiction, or plainly false when we consider what it would mean for there to be an agent at the start of the universe and/or outside the universe (assuming time has a first point, even though it radiates out as part of spacetime and we know that "when something first happened" is frame dependent).

    What remains after all that is no reason to believe in any creation hypothesis, and much reason to believe that almost all are false. The claim simply doesn't fit with what is known and what can be reasonably inferred.

    It was a general statement of the fact that not every belief in life is scientific, and it is possible to have a reasonable belief which is nevertheless false.Pantagruel

    Yes, scientific knowledge does not exhaust knowledge, and all knowledge is fallible.

    Look at children. They develop "superstitious" beliefs about things, but in light of their limited knowledge those beliefs can be seen as "reasonable." You consistently return to the specific case about the created universe, but it's about the general case of believing and I think I have already restated that cogently in several ways now.Pantagruel

    We can find the reasons someone believes a claim to be understandable even when they are not reasonable.
  • A Regressive Fine Tuning Argument
    I kind of see where the "nitpicky logic" tone that a lot of the threads degenerate into comes from now though. Top down I think.Pantagruel

    I have a nitpicky logical tone because I wanted to get a clear statement of your position. I apologise if it made you uncomfortable.

    The argument was about the reasonability of believing in anything that isn't contradicted by existing evidencePantagruel

    Yes. I understood that was what you were claiming. What I did not see was an argument linking it to your simultaneous reference to falsifiability, paradigm shifts and the fragmentary and limited nature of knowledge. Can you please explain to me how falsifiability, paradigm shifts and the fragmentary nature of knowledge establish (or should convince me) that a belief is reasonably held when it is not contradicted by anything known?
  • A Regressive Fine Tuning Argument
    I have no evidence for or against the proposition (initially)Devans99

    Can you justify this? It seems to me that I can use:

    Your question is of the form 'Is X a Y?' where there are an almost infinite number of different types of Y. So the answer space is clearly not evenly distributed between Yes and No.Devans99

    "Is this universe a created universe?"

    Because there are an almost infinite number of different types of Y. So the answer space is clearly not evenly distributed between Yes and No.
  • A Regressive Fine Tuning Argument
    You weren't able to follow the line of reasoning about the origin of hypotheses, contingent and limited character of knowledge, and the possibility of paradigm shifting?Pantagruel

    I imagined the details linking your statements to each other. You didn't spell them out.

    (Hypothesis generation is arbitrary) & (Knowledge is fallible and contingent) & (Paradigm shifts are possible) => (It is reasonable to believe in some creation hypothesis), why?

    Maybe the universe was created. How would the statement "The universe was created" in any way contradict anything else that we know about the universe? Think about it.Pantagruel

    X is consistent with Y is not sufficient grounds for belief in either X or Y.

    Your question is of the form 'Is X a Y?' where there are an almost infinite number of different types of Y. So the answer space is clearly not evenly distributed between Yes and No.Devans99

    "Is this universe a created universe?" same statement form.

    "Is the universe a creation?' however has no skewed underlying answer space.Devans99

    Demonstrate this.
  • A Regressive Fine Tuning Argument
    But it is not normally distributed. We know the universe could be a handbag, a truck, a meat clever, etc... so there are many non-chicken things the universe could be. So it is a boolean question that comes loaded with evidence that the answer is 'no'.Devans99

    The negation of "the universe is an egg" has probability 0. Why would I consider things of probability 0?

    (before taking evidence one way or the other).Devans99

    There is no evidence that the universe is not an egg. Literally none. 0 probability.

    However to think that our current belief-system is somehow "more adequate" than any that has gone before is naive, don't you think?Pantagruel

    "Therefore it's reasonable to believe the universe was created"? How does this possibly follow?
  • A Regressive Fine Tuning Argument


    And what do paradigm shifts and the falsifiability criterion have to say about creation hypotheses again?

    But there are more possibilities than 'the universe being an egg' that you have not allowed for. The universe could be a radio, a chicken, etc... So this is not a boolean sample space.Devans99

    It consists of two outcomes, the empty set and the claim that the universe is an egg. The empty set has probability 0, the universe is an egg has probability 1. Therefore the universe is an egg with probability 1.
  • A Regressive Fine Tuning Argument
    You are being glib. Popper points out that it doesn't matter where hypotheses come from. You can't require that a hypothesis be evidentially based, you end up in an infinite regress: what is the evidence for the evidence when you don't already know the law. Why didn't anyone figure out the theory of gravity before Newton? Some people perceive things that others do not.Pantagruel

    You can't require that a hypothesis be evidentially based

    Tell that to any funding body, ethics committee or practicing scientist.
  • A Regressive Fine Tuning Argument
    But evidence is built into the question 'is the universe an egg?'. We know eggs are generally small, universes are big etc... So 50%/50% is not appropriate in this case.Devans99

    50/50 is impossible in that case. The only consistent assignment of probabilities to that set which satisfies the probability axioms assigns all probability to "the universe is an egg". Therefore, the universe is an egg with probability 1.
  • A Regressive Fine Tuning Argument
    By showing that the 2 possibilities are not symmetrical, you are introducing evidence for/against the proposition. I was assessing the proposition as 50%/50% - before introducing evidence for/against (as a separate step in the probability calculation).Devans99

    Sample space: {empty set, The universe is an egg}, probability of the universe being an egg, 1! Logic! Mathematics! Probability!
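    The claim that the axioms leave only one assignment can be checked mechanically. A toy sketch (mine; it reads the "empty set" outcome as the empty event over a one-outcome sample space, and searches a coarse grid of candidate probabilities, including the 50%/50% split):

    ```python
    from itertools import product

    # Toy model of the sample space implied above: the only outcome is
    # "the universe is an egg"; the empty set is an event, not an outcome.
    outcomes = ["egg"]
    empty, whole = frozenset(), frozenset(outcomes)

    def satisfies_axioms(p):
        """Check the finite Kolmogorov axioms for an assignment p: event -> prob."""
        if p[empty] < 0 or p[whole] < 0:   # non-negativity
            return False
        if p[whole] != 1:                  # normalisation: P(sample space) = 1
            return False
        # finite additivity: empty and whole are disjoint, and their union is whole
        return p[empty] + p[whole] == p[whole]

    # Search candidate assignments on a coarse grid, including the 50/50 split.
    grid = [0, 0.25, 0.5, 0.75, 1]
    valid = [(a, b) for a, b in product(grid, repeat=2)
             if satisfies_axioms({empty: a, whole: b})]

    print(valid)  # [(0, 1)] -- P(empty) = 0, P("the universe is an egg") = 1
    ```

    Only the assignment giving the egg probability 1 survives; the 50/50 split fails normalisation and additivity.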
  • A Regressive Fine Tuning Argument
    That isn't what I said. I explained my positive hypothesis.Pantagruel

    I suggest that there are types of regularities that perhaps are not evident to trivial observation, that perhaps do become evident through sometimes infrequent idiosyncratic experiences which not everyone has or pays attention to. In that case, it is entirely reasonable that people could find themselves possessed of valid "reasons for believing" in almost anything...anything within "the pale of possibility" shall we say.Pantagruel

    Someone posits something arbitrary, is otherwise logical, and makes a sensible inference; garbage in, garbage out.

    But what if the universe is an egg? Therefore the universe has a shell.

    There are certain regularities that only become evident through infrequent idiosyncratic experiences which not everyone has or pays attention to. In that case, it is entirely reasonable that people could find themselves possessed of valid reasons for believing that the universe is an egg from almost anything... anything within the pale of possibility, shall we say.
  • A Regressive Fine Tuning Argument
    Here I'd have to disagree. There is barely a consensus as to what knowledge is. However one thing that science has established quite satisfactorily is that there is more that is unknown than known. Moreover science has likewise established its own approximate and ever-evolving nature. Look at historical paradigm shifts.Pantagruel

    That is sort of difficult to achieve 14 billion years later. But I imagine God worked out the requirements for a life supporting universe and then built some sort of device (IE bomb) that would result in a life supporting universe. The Big Bang is the remaining evidence of this.Devans99

    The fallible and incomplete nature of knowledge is not evidence for any hypothesis of creation.
  • A Regressive Fine Tuning Argument
    What % probability do you assign to the unknown boolean question 'is there a creator' (before hearing the evidence). Is it:Devans99

    If pushed, almost 0%; it would be very surprising for me. It necessitates a lot of hypotheses with vaguely specified mechanisms relying upon incredible contingencies, with no reason to believe them over natural explanations. For me to make any sense of it, I'd want there to be at least a description of the creation mechanism before I even felt comfortable assigning any probability to that outcome whatsoever, never mind quantifying over creation mechanisms as this would require. Consider: you are assigning a probability to a proposition rather than an event; I wanna know more about the proposed events of creation.

    You're also using the principle of indifference to bracket knowledge we have, rather than assigning it after analysing what knowledge we have and concluding a total absence.
  • A Regressive Fine Tuning Argument
    - A 50% chance that the universe was created for lifeDevans99

    "Devan is wrong about all conclusions he tries to derive from maths" or "Devan is right about some conclusions he tries to derive from maths", it's 50-50, boolean outcomes...
  • A Regressive Fine Tuning Argument


    P (Is there a creator | inconsistency of creator concept ) = 0

    There, equal validity to everything you've said. Therefore there's no creator.
  • Views on the transgender movement
    I will try to be more clear in the futuresarah young

    Always a good idea. Advice I could do with following.

    I meant that it is not mutilationsarah young

    Thanks for clarifying.
  • A Regressive Fine Tuning Argument
    I have eggs or bacon on my bed.
    50% chance of eggs, 50% chance of bacon.
    I look at my bed.
    No eggs, no bacon.
    What went wrong?
    Maybe I had salad or marmite sandwiches instead...
    I guess that makes the probability of eggs and bacon and marmite sandwiches 1/16 now!
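    The satire above can be made concrete. A sketch (mine) of what happens when each of the four mutually exclusive breakfasts is treated as its own 50/50 boolean, as in the indifference reasoning being parodied:

    ```python
    # Each breakfast option treated as an independent yes/no question at 50/50.
    foods = ["eggs", "bacon", "salad", "marmite sandwiches"]
    marginal = {food: 0.5 for food in foods}

    # If the options are mutually exclusive, their probabilities must sum to at most 1.
    total = sum(marginal.values())
    print(total)  # 2.0 -- "50/50 per boolean" is not a probability assignment at all

    # Treating them as independent booleans instead gives the 1/16 of the joke:
    joint_all_four = 0.5 ** len(foods)
    print(joint_all_four)  # 0.0625
    ```

    Either way, the principle of indifference applied boolean-by-boolean produces incoherent numbers.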
  • Views on the transgender movement


    It has been two years since the last post in this thread, so the original participants are less likely to respond.

    I may be biased but I do not believe that hormone treatment and sex reassignment surgery is not mutilation and is not sketchy either; to get these things I had to talk to several licensed medical professionals and psychiatrist that specialized in the field I actually felt MORE safe there than I did and do with normal doctors.sarah young

    I don't understand the double negative distributed over that clause. Do you mean that:

    (1) You don't believe that sex reassignment surgery is not mutilation; leaving open the possibility that you do believe that it is mutilation and the possibility that you have no standing at all on the matter.
    (2) You do believe it is mutilation.
    (3) You believe it is not mutilation.

    Or something else entirely?
  • Epistemology versus computability
    I don't think I disagree with any of what you said; but I'm not sure, 'cause I'm not sure what it is you are arguing.Banno

    My overall argument is for the claim that it is possible to be certain of things that we do not believe. "Do not believe" as in "lack belief in" rather than "believe the negation of".

    For this, I introduced a distinction between certainty and belief. Belief applies only to statements; and can thus only be a component of declarative knowledge. Certainty applies to statements and competences; and thus a certainty can be a component of declarative or procedural knowledge.


    I assumed that declarative knowledge consists solely of statements, regardless of how those statements are produced. Procedural knowledge consists of competences; abilities to do activities reliably in appropriate contexts.

    For a given item of procedural knowledge, call it a competence to do a task. I assume that every task consists of subtasks, and that all subtasks of a task must be able to be done competently (and reliably in appropriate circumstances) by an agent in order for the agent to know how to do the task. In a formula, task competence entails subtask competence for every subtask of the task.
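    The formula in the last sentence can be written out explicitly; the notation K (competence) and Sub (subtask set) is mine:

    ```latex
    % K(x, t): agent x is competent at task t; Sub(t): the set of subtasks of t.
    % Task competence entails subtask competence:
    \[
      K(x, t) \implies \forall s \in \mathrm{Sub}(t)\,.\; K(x, s)
    \]
    ```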

    For a given task, we can label it; knowing how to ride a bike, knowing how to recognise a starling by its song. But we need not be able to label or state every subtask that goes into the task in order to be competent at the task; to know how to do it. Knowing how to recognise the birdsong of a starling is a network of interplaying competences which may be grouped into the general descriptor "knowing how to recognise birdsong", and in that regard we may make items of declarative knowledge about the know how of recognising birdsong; whenever we may aggregate subtasks of the task into task components. Like, say, splitting up baking a cake into an ingredient mixing phase and a phase involving an oven.

    We can talk about recognising a starling from birdsong. We may be able to state some subtasks of recognising a starling from birdsong, but this is not required to have the competence. In this regard we can talk about knowing how to recognise birdsong without being able to state every subtask it entails.

    Items of declarative knowledge need to be statable, as statements need to be able to be stated to count as statements. Items of procedural knowledge need not be statable, as the statability of the subtasks of a task is not required to know how to do that task.***

    If something is believed, it is a statement. If something is not statable, it cannot be believed.

    As I've stipulated, certainty can be applied to all items of procedural knowledge; I am certain I know how to bake a carrot cake, and I am certain I know how to do all that it entails. I also stipulate that in all cases certainty of a task distributes over its subtasks; being certain that one knows how to bake a carrot cake entails being certain that one knows how to do every activity (subtask) it entails.

    This was what I had in my head so far and was trying to reason towards.

    ____________________________________________________________________________________________

    I would like to add the following.

    Given all this, it may still be the case that we may only be certain of items of procedural knowledge that we can associate with an item of declarative knowledge; that is, we can only be certain of procedural knowledge items that we can state that we know how to do. For this, I think the paragraph marked with *** would need to be strengthened to:

    Items of declarative knowledge need to be enumerable, as statements need to be constructible using the rules of a language to count as statements of that language. The subtasks of a task which are procedural knowledge are not in general enumerable, as the enumerability of the subtasks of a task is not required to know how to do that task.

    If it is possible to associate an item of declarative knowledge with the procedural knowledge of the subtasks of every task, then the items of procedural knowledge are necessarily enumerable, which goes against the previous paragraph. (Though I do believe it is possible to associate an item of declarative knowledge with the larger tasks themselves; as task labels.) An analogy here is taking the integer part of a real number; you can't pair off the integer parts with the reals uniquely. The subtasks play the part of real numbers, the integers play the part of tasks. There is still a sense of equivalence in play; two real numbers (subtasks) are equivalent when they have the same integer part (are part of the same task or subtask aggregate). This is where I was going with the numerical identity weakened to qualitative equivalence stuff; there are too many variations of activity within a competence to state (numerically un-identical activities within the task), but we can label the collection of variations as the competence (qualitatively equivalent activities insofar as they all form part of the same task).

    (Read, I claim that there are task components that we cannot state explicitly but can still aggregate, quantify over or incompletely summarise in a manner that does not completely determine each subtask by expressing its propositional content in a statement. "I know how to bake a cake" is true iff I know how to bake a cake, and I know all that baking a cake entails, but each subtask would require an "I know how..." statement, and there are too many.)

    In that regard, we can be certain of things (items of procedural knowledge) that we do not believe.

    As an intuition pump; characterise belief as a propositional attitude, which is a disposition (towards a statement). We lack dispositions towards most subtask components as they are done without impinging upon access consciousness. So we lack propositional attitudes towards some subtasks, so in particular we do not believe them. Intuitively, we're so immersed in them we don't even need to believe them.
  • Epistemology versus computability


    It seems to me that your standard for someone demonstrating that there are components of know-how which cannot be stated is to state them; that they must be constructed as an example. I think a more appropriate standard would be that the know-how components which cannot be stated are labelled as a class, where only some elements of the class can be stated.

    Consider:

    (1) Recognising sizzling noises.
    (2) Knowing how to crack an egg gently into a bowl.

    I can't split up (1) further, especially not exhaustively; maybe I make a sizzling noise or use an onomatopoeia, do I need to be able to do either to recognise sizzling noises? Nah. I can write a rough description of (2), but exhaustively detailing what it means to crack an egg gently in a way where every subtask has an associated item of declarable knowledge... No.

    A person's ability to describe how to do something is quite different from both their ability to do it and how they do it. Why should we expect, in these circumstances, that every component of know-how has an item of declarative knowledge associated with it, such that "X knows how to do procedural knowledge component Y of task Z" implies "X can state f( Y )"? To my mind, know-how descriptions are much coarser grained; for an arbitrary task and agent, only some of the components of the overall task are such that the agent can make statements about them.
  • Epistemology versus computability


    EG, here's a cooking guide for sunny side up eggs:

    Heat oil in an 8-inch nonstick skillet over medium-low. Gently crack eggs into pan. You shouldn't hear a hiss, and the eggs should lie flat and still. If you hear sizzling or the whites flutter or bubble at all, turn down the heat. Cook 3 minutes or until the whites are mostly set, with some still-runny whites near the yolks. Tilt pan toward you so oil pools on the bottom edge; dip a spoon in the oil, and gently baste the uncooked patches of white until they're set. Be careful not to baste the yolks, or they'll cloud over like cataracts. Sprinkle with pepper and salt. Remove eggs from pan, leaving excess oil behind.

    This makes sense. I know how to gently crack an egg into a pan. But if I were to try and list what goes into that knowing how, I'd write lots of vague things like:

    Make sure you don't crack the surface with too much force.
    Make sure you don't elevate the eggs too high above the pan.
    Gently tap the egg against a hard surface to weaken the shell.

    I also know how to recognise an egg sizzling in a pan. But how could I describe the components of hearing sizzling? An undulating high-pitched noise that sounds like a series of slaps? If I were to train someone to recognise sizzling noises, I would need examples; ie, knowing how to use the word "sizzling" piggybacks off a competence of recognising sizzling noises; the components of which I cannot state.
  • Epistemology versus computability
    I'm not going to talk about things we can't talk about. I suggest you don't, either. It's a very common problem for philosophers, easily remedied.Banno

    Oh I have no compunctions about talking about things which are not numerically identical to language items. All that matters is that we can treat what we're talking about consistently; a qualitative identity. I saw it raining today, that doesn't mean my perception was rain, or that "I saw it raining today" the statement was rain, all that matters is an equivalence between them.

    Consider the link between a list of rules for baking a cake and the know-how of baking a cake, say, and the discrepancies between the list and the know-how. Do you find it inconceivable that X knowing how to bake a cake includes X having procedural knowledge that they could not state?
  • Mathematicist Genesis


    Regarding the sense of necessity thing, maybe this helps spell out my suspicions that it doesn't tell us much.

    Define that a given statement x is possible with respect to a given axiomatic system z if and only if (the theory of z & x) is consistent.
    Define that a given statement x is necessary with respect to a given axiomatic system z if and only if (the theory of z & not(x)) is inconsistent.
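    For propositional logic, where consistency is just satisfiability, the two definitions can be sketched directly; this truth-table version (the search procedure and atom names are mine) treats formulas as functions from truth assignments to booleans:

    ```python
    from itertools import product

    # Atoms of a tiny propositional language; formulas are callables on a
    # truth assignment (a dict from atom name to bool).
    ATOMS = ["p", "q"]

    def satisfiable(*formulas):
        """True if some truth assignment makes every formula true."""
        for values in product([True, False], repeat=len(ATOMS)):
            v = dict(zip(ATOMS, values))
            if all(f(v) for f in formulas):
                return True
        return False

    def possible(x, axioms):
        # x is possible w.r.t. the system: (axioms & x) is consistent
        return satisfiable(x, *axioms)

    def necessary(x, axioms):
        # x is necessary w.r.t. the system: (axioms & not(x)) is inconsistent
        return not satisfiable(lambda v: not x(v), *axioms)

    axioms = [lambda v: v["p"]]                           # a one-axiom "system": p
    print(possible(lambda v: v["q"], axioms))             # True: q is consistent with p
    print(necessary(lambda v: v["q"], axioms))            # False: not(q) is also consistent with p
    print(necessary(lambda v: v["p"] or v["q"], axioms))  # True: p entails (p or q)
    ```

    Varying the list of axiom systems then varies which statements come out necessary, as in the paragraphs that follow.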

    If we have a list of axiomatic systems as the set of possible worlds, with the same underlying logic, and we fix a theory like arithmetic, then arithmetic will be possible for axiomatic systems that don't make any of its theorems false (IE, such possible worlds look like arithmetic augmented with rules consistent with all of its own, or arithmetic weakened by deleting rules or replacing them with weaker versions), and arithmetic will be necessary for axiomatic systems that make all of the theorems of arithmetic true (IE, the axiomatic system under consideration contains all theorems of arithmetic in its theory, like set theory or Russell's type theory).

    If we have a list of possible worlds that contained all satisfiable combinations of well formed formulae of the logic, the only statements true in all possible worlds would be those that we can derive from any world; the tautologies of the underlying logic.

    Are the theorems concerning natural numbers necessary in the above sense? Well no, for example the rational numbers and how to manipulate fractions are in the above list of possible worlds; and for the fractions, it's false that every fraction is the successor of some other fraction (under their usual ordering).

    (It's false that every element of the structure is the successor of some other element, but it is true for its sub-structure of naturals; in the arithmetic of fractions there will be a sub-arithmetic of naturals that has the successor function behaving normally, but this weakens the successor axiom from a statement about all numbers to a statement restricted to the naturals. In some sense they're the same statement, since they're about the same objects, but the universality of the successor property still breaks for the larger structure of all fractions.)

    If we go back one layer and allow the rules of logic to vary (over the set of well formed formulae); the only necessities would be the shared tautologies of every logic under consideration.

    If we vary the set of logics to include just one logic which has, say, a theory containing both not(not(x)) and not(x) as satisfiable, not even all the consequences of classical propositional logic would be necessary (because there would exist a satisfiable theory/possible world which is inconsistent with double negation elimination).
  • Epistemology versus computability
    "It's certain, but I do not believe it" is a performative contradiction.Banno

    I agree! "X is certain about P => X believes that P". This doesn't address whether we can be certain of things that are not numerically identical to things we can state. Like perceptual events, or what makes "I am certain that I can ride a bike" true.

    Then, by JTB, it is believed, but we do not believe that we believe it. The scope of each belief statement differs.Banno

    Yes! I imagine that we can be certain of procedural and perceptual knowledge, in some cases we can state that we have such knowledge: "I know how to ride a bike" and "I can recognise a starling when I hear its song", but it seems to me the collection of procedural and perceptual knowledge I am certain of is much larger than the procedural and perceptual knowledge that I could declare that I am certain of (and thereby believe, from the above).

    Under the account that "X believes that P" entails "X can declare that "I believe that P"", these items of certain procedural and perceptual knowledge would not be believed by X (modus tollens) since they could not be declared by X.
  • Mathematicist Genesis
    None of the theorems are themselves necessarily true, but it’s necessarily true that they are implied by their axioms.Pfhorrest

    Only once you've fixed the underlying logic, I think. I'm not too happy with bringing in an exterior sense of modality to the theorems. If we're in a context sufficiently abstract to start playing around with the rules of logic, then necessity and possibility ideas might be perturbed too.

    Edit: though, again, I generally agree with what you've said. I might just be being pedantic here.
  • Mathematicist Genesis
    As I understand it, we’re really saying “all objects with this structure have these properties”, but that’s technically true whether or not there “really” are any objects with that structure at all. All bachelors are unmarried, even if there are no bachelors.Pfhorrest

    I think this is about right. Though it's clearly true that not every first order theory has the empty domain as a model; eg "There exists an x such that x+1 = 2" is not true in the empty domain, but it is true in the naturals with addition.

    Something that gives me that Pascal feeling of infinity vertigo is that we can say things like:

    "If you interpret the Peano axioms in the usual way, then..."

    Which conjures up a meta language to talk about an implication from a meta-language item to an object language item. It seems the formalism always has to stop at some point, but the reason (and conceptual content) doesn't.
  • Epistemology versus computability


    Maybe a logical reframing of this might be that it's possible to be certain of things that we do not believe that we believe?

    IE: Possibly[(X is certain that P) and not (X believes that X believes that P)]

    If belief as a modality collapses, this is equivalent to:

    Possibly[ (X is certain that P) and not (X believes that P)]

    A certainty might be an unknown known, or the truth-maker to the truth-bearing proposition. We believe only in truth bearers; statements; not their truth makers. What a statement is about and what makes it true is not the statement itself, it is merely equivalent to the statement content insofar as it is expressible.

    Edit: the explanatory paragraph might make certainty range over more than statements; over perceptual events and environmental behaviour, and would require an account of the connection between believing that P and the certainty acting on the truth-maker of P.
  • My own (personal) beef with the real numbers
    I think Voevodsky's name gets used way too much in vain in these types of discussions. It's a perfectly commonplace observation that isomorphism can be taken as identity in most contexts. The univalence axiom formalizes it but informally it's part of the folklore or unwritten understandings of math.fishfry

    The bit about the univalence axiom was in one of @Mephist's posts; no idea what happened there.
  • My own (personal) beef with the real numbers
    In this mode, the stuff of the proof itself is the medium of thoughtmask

    In my experience, formal intuition works more like an open neighbourhood around a proof than like the proof itself. There are essential details and inessential details for the understanding of a structure. The essential details are what enable you to generate expectations of how the structure behaves; to envisage theorems and ask questions about it.

    EG: "We can label set elements however we like, so functions can be interpreted as ways of permuting the labels of the elements on a set... I wonder if every collection of functions on a set behaves just like a set of permutations on that set?"

    It seems you can write something like mathematical pseudo-code to suggest an intuition and play about with it, an example for the above:

    Let's take a function like f(x) = x + 1 on the natural numbers, and envisage it as a list of pairs:

    (1,2)
    (2,3)
    (3,4)

    and so on.

    And you can read that as "relabel 1 with 2, relabel 2 with 3, relabel 3 with 4" and so on. If you are familiar with disjoint cycle notation for permutation groups, you might think "those look a lot like permutations", and that the whole thing looks like the permutation (1 2 3 4 5 6 ...) in cycle notation, which is the original function in another representation. The procedure to generate this intuition didn't seem to depend on much besides the choice of set.
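    The relabelling game above can itself be written as a few lines of Python pseudo-code made runnable (I truncate f(x) = x + 1 to a finite cycle so it stays a bijection; the helper name is mine):

```python
def as_relabelling(f, domain):
    """Read a bijection f on a finite set as a permutation: x is relabelled f(x)."""
    return {x: f(x) for x in domain}

n = 6
# f(x) = x + 1 mod n is the cycle (0 1 2 3 4 5) in cycle notation.
perm = as_relabelling(lambda x: (x + 1) % n, range(n))
print(perm)  # {0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 0}
```

    The dictionary of pairs is exactly the "list of pairs" picture, and reading it row by row recovers the cycle.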

    There are a lot of syntactic details that facilitate every step of this "pseudo-code", sometimes (often) I can get them wrong and that blocks the intuition from working. Furthermore, the "essential details" as interpreted by me might not (usually don't) generate all the behaviour of the structure- IE enable me to expect every provable theorem as provable. (They also often make me expect things are provable when they are not.)

    So the essentials of a structure look like "necessary highlights", in a sense they cover the structure in question for the purpose with all relevant detail. These "highlights" I think, ideally, map onto your anchor points for formal intuition. In such a case it seems to me that someone would understand the conceptual content of a mathematical structure (insofar as it is relevant to the concerns) well. If you have mastery over the domain, I imagine the essential details allow you to generate expectations for many provable theorems from the structure, and allow you to easily see if something is inconsistent with it.
  • Mathematicist Genesis


    But yes, thinking about it again, what you've said is accurate. I've not written down what sets are, what natural numbers are, what rational numbers are, etc; but I've gone some way toward showing a system in which you can talk about them formally.
  • Mathematicist Genesis
    Would it be fair to say that thus far, we have only discussed the language by which we would reason about any kinds of objects, but (other than for examples) we have not yet formally introduced any objects about which to reason?Pfhorrest

    IMHO you can start wherever you like; the "existence" of any object that satisfies some property is really only relative to some other object. Like "the naturals exist inside the reals" or "the continuous functions exist inside the differentiable functions". "Existence" is just a question of whether one object exists inside another.

    You can "introduce" an object without it existing: "imagine a signature with these rules, then this follows". This is like the deduction theorem (with a big abuse of notation):

    If you have A ⊢ B, then you have ⊢ (A → B).

    There is an object with the structure, so it has these properties.
    vs
    If there was an object with this structure, it would have these properties

    In this regard, the reason I've waited to talk about set theory is that the model of set theory contains sub objects which model all the structures we've talked about. If you want to start from "something fundamental", this "contained object" or "the theory of ZFC is a supertheory of the theory of natural numbers", say, is a good intuition for why ZFC is in some sense "fundamental".
  • My own (personal) beef with the real numbers
    The operation on the group was 'really' functional composition, which is why groups weren't automatically commutative.mask

    I agreed very hard on this in my heart. Tutorials and seminars in abstract algebra were a mix of people who preferred algebra and people who preferred analysis; it's a shock to the intuition whenever an algebra lover presents the group operation "the other way around", and vice versa. Seeing groups as transformations was how I imagined them; but as for imagining (ℝ, +) as a group in the same sense, it didn't work; those intuitions were tied to quantities and magnitudes, but they happened to correspond to translations along the "real line" of a given length, and that intuition could be passed up to vector spaces of low dimension (magnitude + direction, parallelogram rule).

    For someone who insists on math being beautiful, it has to sing for the intuition. For example, when learning group theory I really liked thinking of groups of permutations. Those were the anchor for my intuitionmask

    In general I have found that working over formalisms is one necessary part of developing understanding for a topic; don't just read it, fight it. Follow enough syllogisms allowed by the syntax and you end up with a decent intuition of how to prove things in a structure; what a structure can do and how to visualise it. Those syllogisms aren't the whole story, the visualisation matters.

    What I want to pick a bone with, though perhaps this is a misreading on my part or a difference in emphasis, is whether such intuition development (associating a mental image or a shorthand for forming expectations regarding a structure) is merely aesthetic. We're quite well trained to think of mathematical objects as formal objects, symbol pushing, or as physically rooted (or obversely grounding reality in mathematical abstraction), but what of the required insight to, as you put it, anchor the intuitions of a structure?

    Developing such anchors and being able to describe them seems a necessary part of learning mathematics in general; physical or Platonic grounding deflates this idea by replacing our ideas with actuality or actuality with our ideas respectively. In either case, this leaves the stipulated content of the actual to express the conceptual content of mathematics. This elides consideration of how the practice of mathematics is grounded in people who use mathematics; and whether that grounding has any conceptual structure; how is actual mathematics understood by actual people and does that have any necessary structure? Put another way; what is the structure of the conceptual content of mathematics?

    Whitehead alludes to something similar regarding philosophical projects:

    Every philosophy is tinged with the colouring of some secret imaginative background, which never emerges explicitly into its train of reasoning.

    Why should this background of mathematics remain a secret? And is it merely aesthetic in nature (a consideration of mathematical beauty alone)?



    This looks cool, the statistician in me likes including uncertainty into the operations of arithmetic, but dislikes characterising uncertainty as the range of a set.
  • My own (personal) beef with the real numbers
    I know you don't like technical stuff but I'm pointing out that I don't need set theory to build a square root of 2.fishfry

    Especially considering the algebraic approach that you presented. I find it intuitively satisfying without considering set theoretic foundations. And the Dedekind cut is satisfying if one can admit sets of rational numbers (intuitively self-supporting, IMO).mask

    Alternatively, the notion of a real number from abstract algebra is one of a complete ordered field. Ultimately, it is the same concept. The properties are the same, except that the approach to the investigation is leaning more heavily towards non-constructivism. Which is fine, because this is what abstract algebra is all about. In fact, in some sense, the abstract definition is the proper definition, and the constructive one serves as an illustration. The latter is pedagogically necessary, but once understood, is not essential anymore.simeonz

    :up:

    Great discussion. I don't really know if this contributes much to it, but I want to throw it among people I'm interested in reading talk about maths.

    Something I find very interesting about these structures (and maybe this is part of what you were alluding to with "non-constructivism" @simeonz?) is that they need not be derived from more fundamental stuff (like set theory) in order to be understood in much the same way as if they were constructed from a more foundational object. Nevertheless, how you stipulate or construct the object lends a particular perspective on what it means; even when all the stipulations or constructions are formally equivalent.

    I remember studying abstract algebra at university, and seeing the isomorphism theorems for groups and rings and the rules for quotient spaces in linear algebra, and thinking "this is much the same thing going on, but the structures involved differ quite a lot". One of my friends who had studied some universal algebra informed me that from a certain perspective, they were the same theorem; sub-cases of the isomorphism theorems between the objects in universal algebra. The proofs looked very similar too; and they all resembled the universal algebra version, if memory serves.

    Regarding that "nevertheless": despite being "the same thing", the understandings consistent with each of them can be quite different. For example, if you "quotient off" the kernel (null space) of a linear transformation from a vector space, you end up with something isomorphic to the image of the linear transformation. It makes sense to visualise this as collapsing every vector in the kernel down to the 0 vector in the space and leaving every other vector (in the space) unchanged. But when you imagine cosets for groups, you don't have recourse to any 0s of another operation to collapse everything down to (the "0" in a group, the identity, can't zero off other elements); so the exercise of visualisation produces a good intuition for quotient vector spaces, and the universal algebra theorem works for both cases, but the visualisation does not produce a good intuition for quotient groups.

    If you want to restore the intuition, you need to move to the more general context of homomorphisms between algebraic structures; in which case the linear maps play the role in vector spaces, and the group homomorphisms play the role in group theory. "Collapsing to zero" in the vector space becomes "mapping to the identity" in both contexts.
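    A minimal sketch of that "collapse to the identity" picture, using a homomorphism of my own choosing, h(x) = x mod 3 from Z6 to Z3:

```python
# The kernel of h is everything that "maps to the identity" (0 in Z3),
# and the cosets of the kernel are exactly the preimages of each element of Z3.
Z6 = range(6)
h = lambda x: x % 3

kernel = [x for x in Z6 if h(x) == 0]
cosets = {r: [x for x in Z6 if h(x) == r] for r in range(3)}
print(kernel)  # [0, 3]
print(cosets)  # {0: [0, 3], 1: [1, 4], 2: [2, 5]}
```

    The quotient Z6/kernel has one element per coset, which is the isomorphism with the image Z3 made visible.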

    There's a peculiar transformation of intuition that occurs when analogising two structures, and it appears distinct from approaching it from a much more general setting that subsumes them both.

    Perhaps the same can be said for thinking of real numbers in terms of Dedekind cuts (holes removed in the rationals by describing the holes) or as Cauchy sequences (holes removed in the rationals by describing the gap fillers), or as the unique complete ordered field up to isomorphism.
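    For what it's worth, the two pictures of sqrt(2) can each be sketched computationally (a hedged Python sanity check, not a construction; the function names are mine):

```python
from fractions import Fraction

def in_lower_cut(q):
    """Dedekind picture: is the rational q in the lower cut for sqrt(2)?"""
    return q < 0 or q * q < 2

def babylonian(n):
    """Cauchy picture: n steps of x -> (x + 2/x)/2, a Cauchy sequence of rationals."""
    x = Fraction(2)
    for _ in range(n):
        x = (x + 2 / x) / 2
    return x

print(in_lower_cut(Fraction(14, 10)))  # True: 1.4 is below sqrt(2)
print(in_lower_cut(Fraction(15, 10)))  # False: 1.5 is above
print(float(babylonian(5)))            # roughly 1.4142135623...
```

    The cut is a membership test describing the hole; the sequence is a gap filler converging on it; both stay entirely inside the rationals.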
  • Got a service offer pm
    Thanks, banned.
  • Mathematicist Genesis
    In case it is not obvious why functions should be defined as having the property "if two inputs to the function are the same, the output should be the same" and allowed as term expressions:

    (Technical reason) Without that restriction, a term could transform into a collection of terms, and a statement like "for all x, x < f(x)" would both require a lot more effort to make sense of (what would it mean for a number to be less than a collection of numbers, say?) and "for all x, x < (some set)(x)" is equivalent (I think) to quantifying over a predicate (a collection of domain items dependent on x) - it breaks the first order-ness of the theory.

    (Expressive reason) It means that you have a way of describing ways of transforming a term into another term. To reiterate: you can describe properties and relations of ways of transforming terms into other terms. These are incredibly important ideas, functions let you talk about change and transformation, in addition to relation and property. The concepts this allows the logic to express are much broader.

    Signatures also let us stipulate properties (strictly: well formed formulae that can be true involving their constituent elements - domain elements, functions, predicates). EG, if we have (ℕ, +, ×) (have in mind "the natural numbers when you can add them and multiply them and nothing else"), we can stipulate that + and × obey:

    x × (y + z) = (x × y) + (x × z), which is the (left) distributive law.

    Stipulating the distributive law lets you derive this as a theorem:

    (a + b) × (c + d) = (a × (c + d)) + (b × (c + d)) = (a × c) + (a × d) + (b × c) + (b × d)

    The first and second equalities follow from the distributive law. This is, hopefully, the familiar FOIL (firsts, outsides, insides, lasts) or "smiley face" method of multiplying binomials together from high school algebra.

    This can be written formally as

    (a + b) × (c + d) = (a × c) + (a × d) + (b × c) + (b × d)

    when + and × are understood as obeying the distributive law.
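    The FOIL identity can be brute-force checked over a small range of integers (a sanity check in Python, not a derivation):

```python
from itertools import product

# Check (a + b) * (c + d) == a*c + a*d + b*c + b*d for all small integer tuples.
foil_holds = all(
    (a + b) * (c + d) == a * c + a * d + b * c + b * d
    for a, b, c, d in product(range(-3, 4), repeat=4)
)
print(foil_holds)  # True
```

    Of course this only samples one model of the signature; the point of the derivation above is that the identity holds in every model obeying the distributive law.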

    Usually, just writing down the signature is intended to mean the signature together with the axioms which define the properties and relations of its predicates and functions. This is a notational convenience, and it does no harm. We could introduce an extra list of axioms for the operations and add them to the signature (analogous to building a system of inference over well formed formula production rules), but usually in other areas of mathematics this is omitted.

    When you leave this regime of mathematical logic it is extremely convenient to just assume that whenever we write down a signature, we implicitly give it its intended interpretation. This specifies the structure of interest for mathematical investigations/theorem proving in terms of what can be proved and what we hold to be true, but it does not block people from analysing other models; called non-standard models; of the same signature.

    Because of the waning relevance of the distinction between syntax and semantics for our purposes, I am not going to go into all the formal details of linking an arbitrary signature to an arbitrary interpretation; but I will state some general things about models to conclude, and end our discussion of them with a mathematical justification for why set theory is in some sense the "most fundamental" part of (lots of) mathematics.

    (1) Signatures do not necessarily, and usually do not, have unique models. More than one object can satisfy the stipulated properties of a signature. This applies as much for things like arithmetic as it does for my contrived examples.

    (2) Models can be different, but not be interestingly different. If we decided to write all the natural numbers upside down, we would obtain a different set that satisfies all the usual properties of the natural numbers; a formally distinct set anyway. If we started calling what we mean by "1" what we mean by "2" and vice versa, we would have a formally distinct model. But these models are entirely the same for our purposes. In this regard, models are equivalent when each truth of one obtains just when an equivalent corresponding truth obtains in the other. EG, if we relabelled the numbers 1, 2, 3 by a, b, c, we would have:

    "a < b" holds in our new labelling when and only when "1 < 2" holds in the usual labelling.

    When this property holds between models, they are called isomorphic.
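    The relabelling can be made concrete in Python, taking the finite model {1, 2, 3} with < and the relabelling 1 -> a, 2 -> b, 3 -> c:

```python
# An order isomorphism: every true "<" fact survives the relabelling.
relabel = {1: 'a', 2: 'b', 3: 'c'}
true_in_old = {(x, y) for x in relabel for y in relabel if x < y}
true_in_new = {(relabel[x], relabel[y]) for (x, y) in true_in_old}
print(sorted(true_in_new))  # [('a', 'b'), ('a', 'c'), ('b', 'c')]
```

    The map is a bijection and it carries the truths of one model exactly onto the truths of the other, which is all "isomorphic" demands here.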

    (3) Models are extremely informative about the syntax of a formal system. For example, you can prove that the truth value of a statement is independent of a system of axioms (neither it nor its negation are derivable) by finding a model of that system where the statement is true and finding a model of that system where the statement is false. Alternatively, one can assume the statement along with the formal system under study and show that it entails no contradiction (IE that it is consistent), and then assume the negation of the statement along with the formal system under study and show that this too entails no contradiction. This proof method was what showed that the continuum hypothesis was independent of the axioms of set theory ZFC; a result partially proved by Gödel and completed by Cohen.
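    A finite analogue of the two-models method (my choice of example): the sentence "for all x, y: x∗y = y∗x" is independent of the group axioms, because there is a group where it is true and a group where it is false:

```python
from itertools import permutations

def compose(p, q):
    """Compose permutations given as tuples: (p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

# Model 1: addition mod 4 - a group in which the sentence is true.
z4_commutes = all((x + y) % 4 == (y + x) % 4 for x in range(4) for y in range(4))

# Model 2: S3, permutations of {0, 1, 2} under composition - a group
# in which the sentence is false.
s3 = list(permutations(range(3)))
s3_commutes = all(compose(p, q) == compose(q, p) for p in s3 for q in s3)

print(z4_commutes, s3_commutes)  # True False
```

    Neither commutativity nor its negation can be a theorem of the bare group axioms, since each would be falsified by one of the two models.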

    Lastly, you have no doubt noticed (if you've been paying attention) that the discussion of first order logic required us to talk about collections of objects; eg for the domains, for the definition of functions and predicates, for how the quantifiers worked... It showed up pretty much everywhere. If it were somehow possible to make a bunch of axioms of a privileged few symbols and bind them together with a signature that captured how collections of objects behave, whatever modelled such a theory would be an extremely general object. It would contain things like natural numbers, groups in algebra, functions in analysis... It would contain pretty much everything... And in that regard, the theory that expressed such a structure would serve as a foundation for mathematics.

    This happened. It's called ZFC, for Zermelo Fraenkel set theory with the axiom of choice. So that's what we'll discuss (very briefly, considering the complexity of the topic and my more limited knowledge of it) next.
  • Epistemology versus computability
    Verifying the justification's paperwork is a procedure. If there is no procedure possible for that, then the justification is unusable.alcontali

    I think you're equivocating between:

    (1) If someone knows something, they obtained that knowledge through a process they can (at least) partially describe unambiguously in natural language. The description here might be called a procedure.
    (2) If someone completely describes an effective procedure in natural language, it can be implemented in a suitable programming language. (Church Turing Thesis)
    (3) If someone writes a proof (formal justification in a formal system), it can be represented as a computer program in a model of computer programs and vice versa (Curry Howard Correspondence).

    If you accept (2) and (3), it follows that if someone describes an effective procedure in natural language, it can be represented as a proof in a formal language. But they don't have any relevance to (1). IE The claim "knowledge consists only of effective procedures" is completely independent of (2) and (3).

    "A process that someone obtains knowledge from that they can at least partially and unambiguously describe" is in no way "a completely described effective procedure" even if you accept (2) and (3).
  • Epistemology versus computability


    It's difficult to tell whether you're making an argument or just making a series of unconnected statements about formal languages; statements which say nothing about the reduction of epistemology to formal languages, never mind the reduction of epistemology to effective procedures.

    For someone who touts the central role formal systems play in justification, your posts don't read like a tightly constructed syllogism.

    Maybe, maybe not.alcontali

    Demonstration that meaningful discussions of mathematical concepts can occur solely in natural language:

    Alice: "I don't like the Dedekind cut construction of the real numbers from the rationals because it doesn't make completeness of the reals as obvious as the Cauchy sequence construction"
    Bob: "Are you sure? The Dedekind cut construction explicitly axiomatises the holes in the real line that the rationals leave."
    Alice: "But it leaves the intuitive connection to sequences by the wayside for that purpose, considering that we're teaching sequences and convergence to undergraduates before teaching them about the formal construction of the reals, surely it's better to leverage knowledge we can assume the students have?"
    Bob: "The construction of Dedekind cuts only requires that students have intuitions about intervals of rational numbers, not sequences, in essence the knowledge is more elementary..."
    Alice: "I guess we can agree how intuitive each is depends on the strengths of the background knowledge of each student."

    Broader point: formal systems don't just have syntactic rules, don't just have formal semantics, they also have conceptual content. The conceptual content of mathematical objects and systems is what unites them over the varying degrees of formality of their presentation.
  • Epistemology versus computability


    Just look at the quote. "Reasoning within the formal system is much different to reasoning about the formal system itself". You don't even need a formal meta-language to consider differences in axiomatic systems, natural language suffices. This much more general context of natural language and human behaviour is the context in which epistemology resides, not the much more restricted context of formal languages.

    There are justifications for choosing some formal systems over others in some circumstances. Given a choice between two formal systems, the only thing which can facilitate a choice between them is embedding them both in a system of comparison exterior to both; be that system informal (as with natural language), formal, or natural language talking about formal systems both formally (in a formal meta-language) and informally (using exterior considerations like intuition and relevance to guide the meta-language principles and the desirable properties of the object language).

    Most of the history of mathematics, science and engineering proceeded without the idea of a formal system and the arbitrariness of its axioms and inference rules... One wonders how it could possibly be so central in all respects yet arrive so late in that history.
  • Epistemology versus computability
    The way in which most humans generally come to conclusions amount to stirring in a pile of total bullshit.alcontali

    Well, no. I do not even care if a formal system is useful or meaningful.alcontali

    So, we shouldn't trust you to know when a formal system is relevant for epistemology or not...
  • Epistemology versus computability


    Ah yes, the MU puzzle, something which entirely resembles how humans come to conclusions using evidence and argument...