• Anti-Vaxxers, Creationists, 9/11 Truthers, Climate Deniers, Flat-Earthers
    If we can't rationally pick between them, something irrational has to decide. It's personal bias, isn't it?frank

    Yes, that's basically what I've been saying. We choose the theories which best fit our favoured social narrative. If we're reasonable people we'll discard anything which is overwhelmed by evidence to the contrary (even at the expense of our favoured narrative if need be). But anything supported by a genuine expert without obvious conflicts of interest automatically qualifies as not being overwhelmed by evidence to the contrary.

    That's the scientism. Science where there is no Church.frank

    I don't believe in such a thing. There's always a 'church'.
  • Anti-Vaxxers, Creationists, 9/11 Truthers, Climate Deniers, Flat-Earthers
    Science doesn't tell you what you ought to do. It just tells you what is.Olivier5

    I see. 'Follow' doesn't only mean to take instruction from.

    At the very least, I am trying not to undermine trust by my own behavior.Olivier5

    Yes. That was the bit I wanted you to explain the rationale for. How does blindly doing what they say repair the trust of those for whom it has been lost? I can't see what process you imagine taking place.

    I am suggesting that public trust is the only thing that binds us together in societies. Protecting it, when and where it exists, is important to avoid chaosOlivier5

    I agree. So again, how does blindly doing as you're told protect this trust? You already trust your government; it's other people who don't, and they have good reason not to. So how is your doing as you're told helping to restore their trust? Did the problems in the DRC reside in Kabila or the populace?
  • Anti-Vaxxers, Creationists, 9/11 Truthers, Climate Deniers, Flat-Earthers


    Incidentally, if you're interested, Johns Hopkins have published a few essays on the subject. The broad conclusion... compassion, investment in healthcare, education, dealing with inequality... just about everything that is being avoided in discussion here in preference for just pillorying people who don't take vaccines.
    ...
  • Anti-Vaxxers, Creationists, 9/11 Truthers, Climate Deniers, Flat-Earthers
    There are (historical) case studies regarding pandemic protocols. I recall coming across some out there. They tend to informjorndoe

    Ah, so there are possibly some historical cases which you can't fully recall but which might have tended to show something about responses to pandemics?

    Well, what reasonable person could maintain an alternative position under the weight of that kind of evidence? I concede.
  • Anti-Vaxxers, Creationists, 9/11 Truthers, Climate Deniers, Flat-Earthers
    science never ever told anyone where to go next, reason for which it would be impossible to "follow it", as you wrongly assumeOlivier5

    Where have I assumed this?

    In my country, public policy generally pursues the public good.Olivier5

    ...

    it might damage the public good here or thereOlivier5

    cases where a policy is crafted to benefit or protect private interest, which is badOlivier5

    token policies, i.e. policies that are not really meant to be implemented, but mere gesticulation. In this case the policy is dishonestOlivier5

    ... So public policy pursues the public good, except when it doesn't. Difficult to disagree with that.

    I need to take a shot of a vaccine, the usefulness of which might not be totally established, in order to protect or rebuild public trust, I will personally do so.Olivier5

    What an odd sentiment. If people don't trust public policy to be in their interests, how does blindly following it regardless help to restore that trust? Surely if trust in public institutions has been eroded that's a problem the public institutions in question need to solve. Are you suggesting the problems in the DRC would have been solved if people would only have just unquestioningly done what Kabila told them?
  • Some remarks on Wittgenstein's private language argument (PLA)
    Okay, but you're no longer talking about the sensation of pain, like Wittgenstein is.Luke

    True, but in my first case (the social construction of natural kinds like 'pain') I am talking about the sensation of pain Wittgenstein is referring to. It is neither something we 'learn' of ourselves, nor is it something which it would make no sense to doubt. If it is something we construct, then we can doubt the appropriateness and/or utility of the construction. We can't doubt the triggering sensations, but they were not (in Wittgenstein's use) 'pain' in the first place, they're just physiological activity.

    Imagine you have six physiological signals (a, b, c, d, e, and f) and you generally model any combination of four or more as 'pain' (by 'model' I mean things like a tendency to use the word 'pain', a tendency to say 'ouch', a tendency to withdraw from the perceived source... etc). The six signals are obviously not themselves 'pain' (again, in the way Wittgenstein is using the term), so it must be the model. But if it's the model, we can doubt it, because those six triggering physiological signals overlap with some of the triggering physiological signals for other state/emotion models. Just as we might say "I wasn't hungry, I was just nervous" (misinterpreting the overlapping signals from the digestive system in those two models), we might be able to say "I wasn't in pain, I was just cold and cross". That we don't actually say that is not necessarily a reflection of what is the case so much as a cultural artefact of the belief that things like emotions and pains are natural kinds (a belief I believe modern cognitive science shows to be unfounded).
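    The thought experiment above can be sketched in code. This is a minimal sketch of the hypothetical four-of-six rule; the signal names and the rival 'cold'/'cross' sets are all invented for illustration:

```python
# Toy sketch of the hypothetical six-signal 'pain' model described above.
# All signal names and model rules are invented for the thought experiment.

PAIN = {"a", "b", "c", "d", "e", "f"}   # the six triggering signals
COLD = {"c", "d"}                        # a rival model sharing two signals
CROSS = {"e", "f"}                       # another rival sharing two signals

def models_as_pain(active: set) -> bool:
    """Stipulated rule: four or more of the six signals model as 'pain'."""
    return len(active & PAIN) >= 4

active = {"c", "d", "e", "f"}            # one possible pattern of firing

print(models_as_pain(active))            # True -- fits the 'pain' model
print(active == COLD | CROSS)            # True -- equally fits 'cold' plus 'cross'
# The doubt attaches to which model we fitted, not to the signals themselves.
```

    The point being that the same pattern of firing satisfies two different models, so the classification, unlike the signals, is the sort of thing that can be questioned.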
  • Anti-Vaxxers, Creationists, 9/11 Truthers, Climate Deniers, Flat-Earthers
    Public policy is (in this case at least...Olivier5

    ...is exactly the issue in question.

    If you can't make a case that one should always follow public policy (which no one in their right mind would), then the fact that some advice is public policy has no bearing on whether one should follow it does it? If it's good advice, follow it; if it's bad advice, don't.

    If you care for the people around you, you should follow public policy.Olivier5

    Only if public policy does, in fact, on this occasion, pursue social goods based on sound science.

    You've provided no mechanism by which we can distinguish the occasions when public policy is based on science and pursues public goods from the occasions when it is not, rendering the advice to follow public policy completely useless.

    So it's not entirely rational to adhere to the prevailing scientific view.frank

    Given underdetermination it is rational both to adhere to the prevailing scientific view and to adhere to dissenting scientific views. So long as the views meet the threshold for rational views (things like evidence, lack of COI, peer review etc) then it's rational to adhere to them. It's not rational to adhere to the view of Alex Jones or Donald Trump because they're not experts, and have obvious ideological conflicts of interest. Like...

    conservative people (in the best sense of the word) want a conservative opinionfrank

    Yes, so there's definitely an ideological bias in favour of theories which support the status quo, but there's also the draw of the 'maverick genius'... I'm not seeing the virtue though, you might have to explain that one.

    If you agree with all of these and also agree that in a community facing an emergency it is a moral imperative that everyone should play their part, just as they are expected to in a military campaign, then what reason could you have for refusing to be vaccinated?Janus

    I understand the link, but I went through my reasons for not getting vaccinated quite exhaustively in the other Coronavirus threads. With only a last-minute exception they received nothing but vitriol and cliché, both unrelated to the actual arguments put forth (standard fare nowadays, unfortunately). There's only so much of that it is worth my while enduring on any given topic. I find everyone's responses very interesting, but not such as to be worth just any price.

    So here, I'm really just interested in this idea that majorities are more likely to be right (in certain cohorts).

    Sorry to cut your line of questioning short. Briefly (if it helps), the answer to all your first questions is no, mainly because the questions are too broad in a complex situation to give a 'yes'.

    I wrote a long rambling response about the American culture war, but I'm replacing it with this:Srap Tasmaner

    Possibly wise, though I'm sure it would have been interesting.

    Yes, orthodoxy is both dangerous and repugnant. I don't cotton to it.Srap Tasmaner

    That's basically what I'm saying here. There are (quite rightly) social norms which set thresholds for the sorts of beliefs it's acceptable to have and act on; beyond those, diversity should be the aim, not the enemy.
  • Some remarks on Wittgenstein's private language argument (PLA)


    Briefly, as I have a meeting to get to.

    a public confirmation... is absent with pain.Banno

    fMRI scans. What I'm saying is that the lack of a public shared referent for pain is a consequence of our technology, not a restriction on the way the world is. As we've discussed before, I think, different sub-cultures require languages in which to talk about their particular models. We're not 'wrong' for using language that way, only context-specific.

    Notwithstanding that, I think constructed experience models do provide us with a public confirmation of pain in the way I described earlier. If I go about saying I'm 'in pain' in response to some particular set of physiological signals and it doesn't have the effect I expect it to have, I might well conclude that I'm wrong; maybe I'm not in pain after all, maybe this is something else. As I said to @Luke above, 'pain' is a socially constructed model, and the physiology causing it is overlapping and non-exhaustive, so there is definitely an element of 'deciding' you're in pain. There's disagreement about whether any part of that decision is conscious: if none, then your position would be right; but if any, then your position would be wrong. It hinges on the psychological facts of the matter.

    You're not actually talking about pain.Luke

    If I use the word 'pain' and I'm understood by a reasonable community of language users, then I am 'actually' talking about pain. There's no objective definition; that's why I brought up the standard one, to show its ambiguity, not to set it up as gospel.
  • Some remarks on Wittgenstein's private language argument (PLA)
    This reasoning would commit you to saying that patients under anaesthetic are in pain (or equivalent), whereas the entire reason for anaesthetic is to eliminate pain.Luke

    No, because anaesthetic acts differently. It might reduce conscious awareness of pain, reduce memory of pain (amnesiac effects), reduce the signalling of pain at the nerve ending, or reduce the transmission of those signals at the brain stem or thalamus. At each point in this chain we can sensibly talk about 'the pain' and be perfectly well understood. If I say, "the pain reaches the thalamus but the drug interferes with the communication between neurons from there on", no-one says "what do you mean, 'the pain'? The patient isn't in any pain because they're anaesthetised; I've no idea what you're talking about". It's perfectly clear what I'm talking about.

    What are you claiming Wittgenstein is wrong about? He says (at PI 246): "it makes sense to say about other people that they doubt whether I am in pain; but not to say it about myself."Luke

    Exactly as I explained previously. There is no such thing as 'pain' (the experienced sensation) in physiological terms. It simply doesn't exist. It's a constructed experience, we interpret interocepted signals using socially constructed models of the meaning of those signals, one of which is 'pain'. Most of that modelling is done subconsciously (which it makes no sense to doubt, since we've no access to it), but the evidence is (evidence which Wittgenstein obviously didn't have access to) that a very small part of that model-fitting is done consciously milliseconds after a shift in attention. The cortico-limbic-striatal circuits use the descending pain modulatory system to modulate pain signalling in line with models of pain at higher cortical hierarchies. You literally decide if you're in pain.
  • Anti-Vaxxers, Creationists, 9/11 Truthers, Climate Deniers, Flat-Earthers
    Second opinions, corroboration of witnesses, replication of experiments, etcXtrix

    Yes, but you've given no evidence at all that the theories supported by the majority of scientists have a greater quantity of these properties than theories supported only by a minority.

    I'm just correcting Xtrix's first error mistaking variance in a population for variance in a stratified cohort. — Isaac


    In fact that’s exactly what you’re doing, which I pointed out several posts ago.
    Xtrix

    Explain. In what way have I mistaken variance in a population for variance in a stratified cohort?
  • Anti-Vaxxers, Creationists, 9/11 Truthers, Climate Deniers, Flat-Earthers
    If it turned out that 97% had ties to the fossil fuel industry would it still make sense to go with the majority? — Isaac


    Of course not.
    Xtrix

    Then you agree that other factors (like conflict of interest) are more important than majority support. Now you have to show what mechanisms exist to make it impossible (or less likely) that a majority on any one question could be the result of any of these other factors, of which conflict of interest is just one.

    This isn’t a ridiculous contortion— people simply go with one or the other “expert,” for many reasonsXtrix

    Then it is a contortion to say that they have no other information. How can they use "a number of reasons" yet also have "no other information"?

    I’d say the WHO, the CDC, the AMA, etc, represent a majority of experts. This is all most laypeople know. So is it right to trust the CDC?Xtrix

    No. Not necessarily.

    1. For a start you listed three organisations there so when they disagree (as on the issue of boosters, for example), why pick the CDC?

    2. None of these institutions is free from political, corporate and ideological influence. Whilst that's very unlikely to lead them to say something false, it's well within reason (in fact demonstrable historically) that it leads them to choose one strategy over another even if both are equally viable.

    3. All of these institutions produce strategies; they are not publications of science. Journals publish science; institutions interpret it and formulate strategies based on it. Their strategy is not science: it is not subject to peer review, its statistical methods are not scrutinised and it is never experimentally falsified. The rational incentive to prefer science over guesswork does not apply to the strategies of these institutions; it applies to the science on which they base those strategies. That science is not all in agreement... which leads to...

    4. Only one strategy can be advocated by any given institution. They have to decide, even if the science is 51/49 in favour of it. Public health policy is a very blunt instrument; it must appeal to the lowest common denominator and achieve its goal despite a heterogeneous, often recalcitrant, often downright idiotic population. Again, public health policy is not science. Following it does not have the same logical imperative as following science would have.

    the overwhelming evidence that supports one theory (which is usually why there is such a consensus) over others (e.g., evolution vs creationism).Xtrix

    So you're saying that when there are two competing theories, there is always overwhelming evidence in favour of one? You're essentially denying underdetermination?

    So, if you deny that underdetermination is possible, then the next question, I suppose, is what do you think is happening to the minority of scientists who dissent? Take Peter Doshi, for example - he dissented from the view that the vaccine should have been given full FDA approval. He's a fully qualified professor of medicine and editor of the world's leading medical journal, so there can be no question about his status as expert. So what happened to make his view wrong (or more likely to be wrong)?

    Did he make a mistake in reasoning? - No, people argued their counter case and he maintained his disagreement, so any error in reasoning would have been obvious at that point.

    Did he miss some evidence? - No; likewise, the counter-argument would have contained any missing evidence and he would have corrected his position accordingly. He didn't.

    Was he ideologically, politically or financially motivated? - Undoubtedly, yes. But how much more so than those holding the majority view? All of them have political affiliations, all have employers, funders and future consultancy work to think about, and all have a belief system which might bias their interpretation of evidence.

    Did he just lack intelligence or some 'spark' which the majority have? - Possibly, but again, why would the majority have this property in greater quantity than any given minority?

    So. If you reject underdetermination (which to me, and most philosophers of science, is the most obvious explanation), then what is your alternative explanation?
  • Anti-Vaxxers, Creationists, 9/11 Truthers, Climate Deniers, Flat-Earthers
    I get that. It's an interesting point, a reasonable point, but what kind of point is it?Srap Tasmaner

    Fair question. It comes down to this...

    a shocking failure of citizenshipSrap Tasmaner

    I could give a statistical argument about hedging against uncertainty, but the stats don't seem to be going down well, so maybe an appeal to the gut. Does it not strike you as seriously wrong to place a rational or ethical imperative on a population to all believe one single thing, all follow one single solution (that of the majority of experts)? Putting aside all arguments about the greater long-term pay-offs of hedged bets, just as a gut response, you don't find something icky about that?

    Despite the repeated attempts to conflate minority expert opinion with dissenting opinion in general (like Alex Jones is an expert in anything!), we're not talking about a lack of constraint on solutions - they have to meet the threshold of being reasonable, well-supported, evidence-based, peer reviewed etc. But once that threshold has been met, to demand that the range of options is further narrowed down until only one 'most-supported' solution remains which everyone has a duty to believe on pain of being held immoral/irrational... Well, if it's just me who finds that quite repulsive, then maybe I should start again with the statistical arguments.
  • Some remarks on Wittgenstein's private language argument (PLA)
    You would say that they were in pain even if they had no "unpleasant experience"?Luke

    I might not use the expression 'in pain'. It sounds messy: "they're in pain but they don't know it". But something like "their body is being wracked by pains but they're unaware due to a malfunction of the thalamus" seems to make sense to me. At least, I don't think I would be met with baffled failure to understand if I were to describe a person in those terms. There's a condition which increases the availability of 5-HT at the 5-HT3 receptors at a nerve ending; this results in a sensation of pain (or discomfort), but the rest of the pain pathway is absent. Some talk about this as not being 'real pain'. Personally, I should stress, I disagree with that use of language; I think it undermines the felt pain of people who suffer from such a condition. But the point - as far as this discussion goes - is that people know what they mean; my disagreement is a psychological one, not a failure to understand what they mean.

    I think Wittgenstein's point is that having a pain (or other sensation) is not something that one can come to know or to learn of, and so it does not constitute knowledge.Luke

    Yes, I agree. But Wittgenstein was not privy to modern understandings of cognitive psychology, so whilst I'm completely on board with the idea that if something could not 'come to be known' there'd be no sense in doubting it (the insight Wittgenstein is qualified to espouse), he's wrong in his examples of those somethings, simply because he didn't know then what we know now about how we come to judge the causes of our sensations, including interocepted ones like the activity of nociceptors.
  • Some remarks on Wittgenstein's private language argument (PLA)
    In that millisecond you are supposedly making a judgement - "Does that count as a pain?"

    But do you want to go further and doubt that?


    Where that is an act of pointing.
    Banno

    I'm not entirely clear what you mean here. If by 'that' you mean the sensation itself, then no, I don't think it makes sense to say "I doubt I had that sensation", as 'sensation' is a term which covers pretty much anything such a triggering event might be.

    In a sense it's just putting it on the same footing as our other senses. We might doubt that we've seen an oasis ("is it just a mirage?"), but not that we've seen something. It's nothing but pragmatism, but I just think there's no good cause to go about thinking that nothing at all causes all these sensations we have. I do, however, find it useful to have a language in which I can talk about the difference between causes. The thing I'd say about pain is that it's usually associated with tissue damage, or some negative thought (in the case of emotional pain), so it makes sense, in the absence of either, to ask "have I got this right?".

    When we name stuff we're not just labelling; we're assigning a social role, and a set of subsequent behaviours on our part, and expectations of others, follow from the naming. If these don't work out as we expected them to, we need to change something about the model, and that usually means changing the name too. This makes everything with a name open to doubt, that doubt being "naming it such-and-such didn't work out as I expected, maybe I should try another".
  • What is a Fact?
    How is this confirmed by observation?Banno

    Looking it up in a maths textbook?
  • Anti-Vaxxers, Creationists, 9/11 Truthers, Climate Deniers, Flat-Earthers
    We should rather start with this simple truth and work outward to understand why it’s true— not deny it’s truth altogether, as if consensus means nothing and science means nothing.Xtrix

    Yes, we should start with the conclusion we like and then keep changing our reasoning until we justify it regardless of any mathematics, evidence, or line of reasoning to the contrary - what a brilliant way to go about thinking over a topic. I couldn't have written a better explanation of exactly the process I was describing in theory selection.

    You see this in much discourse these days. When a QAnon supporter is confronted with facts...Xtrix

    Yes, of course. Happens all the time. QAnon are constantly trying to support their views by explaining the effect of stratification over a variable on the predictive power of that variable within the stratified class, they never shut up about it.

    I think Trump/QAnon is becoming the new Godwin's law
  • Anti-Vaxxers, Creationists, 9/11 Truthers, Climate Deniers, Flat-Earthers
    To show that the layman (assuming he's interested in being right) — Isaac


    Quite an assumption to be made.

    I would think the layman would simply choose the option that fits the closest to his or her Worldview in general. There being two or more opposing views means that the issue isn't a simple tautology and for the layman to hear about opposing views means that either the issue isn't settled or there is a sustained campaign to fight the so-called scientific truth for some reason.
    ssu

    Yes. We haven't even gotten to that question yet. I'm just correcting @Xtrix's first error mistaking variance in a population for variance in a stratified cohort. Once that's fixed (which seemed like a simple explanation of the way stratification affects the variable the stratification is over - but apparently not), the more interesting discussion is over who chooses which option and why.
  • Anti-Vaxxers, Creationists, 9/11 Truthers, Climate Deniers, Flat-Earthers
    You can't answer that simple questionXtrix

    I can, it's just not relevant. I'd go with the 97%, largely for the reasons you later give:

    it turns out that most of the 3% of dissenters have ties to the fossil fuel industry,Xtrix

    If it turned out that 97% had ties to the fossil fuel industry would it still make sense to go with the majority? No. Because the variable 'degree of conflict of interest' is more significant than the variable 'degree of support within an expert cohort'.

    it's been stated from the beginning that there is no other information that the layman has beyond the majority.Xtrix

    Then you too are engaging in "ridiculous contortions". We have access to tons of information other than degree of support. In fact, I'd say we have access to other variables to a greater extent. Do you really claim to have the data on how many epidemiologists/virologists support mass vaccination vs those that don't? Of course you don't; you have access to the general impression of that number from the media, but not the actual number. We do, on the other hand, have access to data such as political affiliation, source of funding, lobbying power, social media trends, employment security, consultancy offers, openness of data, willingness to pre-print... We know all of these variables quite accurately, so it's just nonsense to invoke this hypothetical where the only information we have access to is degree of support; it's one of the variables we have least access to.

    it does correlate. How do we know? For the same reasons that greater experimental confirmation increases likelihood of accuracy. Not only is there historical data, but we know from predictive accuracy as well.Xtrix

    Struggling to even work out what this could mean. What do we "know from predictive accuracy" and "historical data"? That the most well-supported theories turned out to be the ones that were true? You can see, surely, that this is obviously wrong? All theories that we currently consider to be true started out as theories supported only by a minority. The predictive power of majority support depends entirely on where a theory is in its arc of acceptance. Notwithstanding that, you've not provided any counter-argument to Duhem-Quine, so at best this principle would yield a set of theories (plural) that are more likely to be true, not just a single theory. It is a statistical impossibility for a majority to support more than one theory, so by definition, one of the perfectly accepted-as-true theories must nonetheless be supported only by a minority.

    When there is overwhelming evidence that supports a theory, the experts (as experts) will be familiar with this, the consensus will change and often reflect the level of confidence in a theory.Xtrix

    Why only the consensus? Why do the minority not also change their confidence in a theory in the face of this overwhelming evidence? Maybe they're corrupted by bias? So it's possible for an expert to modulate their confidence in a theory because of bias. So why only the minority?

    If you aren't able to answer in the affirmative, then you're simply wrong, because that's the correct answer.Xtrix

    Classic.

    If you're arguing it isn't correct, then you're essentially saying that a laymen ISN'T better off going with the overwhelming consensus, and in fact cannot know either way -- perhaps it's 50/50, etc. Which is an absurdity, as demonstrated by the facts.Xtrix

    What facts?
  • Anti-Vaxxers, Creationists, 9/11 Truthers, Climate Deniers, Flat-Earthers
    I've just started reading Plato again -- been a very long time -- and it's practically the founding claim of philosophy: we don't care what the majority thinks.

    Except it isn't, because that's only half the point. Not everyone in town is a horse-breeder; if you want to know about horses, ask the expert. Not everyone in town is a physician; if you want to know about health, ask the expert. The situation with wisdom is apparently no different:
    Srap Tasmaner

    Absolutely. There's a very strong distinction between the cause of variance within the whole population and the cause of variance within a particular class of a stratified population. The variance in support for a theory among the entire population will probably correlate quite well with education level (the cleverer believing the more plausible theories, in general), but if we stratify the wider population by that very variable (education level), we know almost for a fact that the variance within that stratum will be less well correlated with that variable, because it's the variable we used to carry out the stratification; its relative effect will just inevitably be smaller.

    In other words, yes, we should trust experts, and the more expert the better; but within experts of the same education level (i.e. we limit education level as a variable by stratifying our sample over it), other variables are going to be much more significant, simply because we've eliminated the most significant one by stratification. We haven't made education no longer the most significant factor in likelihood of being right; we've just manipulated our sample to limit its effect.

    I think one of the things getting mixed up here is the difference between the questions "should we trust expert opinion?" (the answer is yes) and "should we trust the majority of experts over a minority of experts of the same education level?" (the answer is no). By specifying that they're of the same education level we've removed (or severely limited) the one variable which had a link to 'rightness' (education level), so the remaining variables responsible for the within-class variance may or may not be linked to 'rightness'.
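    The stratification point lends itself to a quick simulation. This is a sketch under a deliberately simple assumed model, in which the propensity to be right is just education level plus lumped-together noise; nothing here comes from real data:

```python
import random

random.seed(0)

# Assumed toy model: propensity to be right = education + everything else.
population = []
for _ in range(100_000):
    education = random.randint(0, 10)    # the stratifying variable
    other = random.gauss(0, 1)           # all other factors, lumped together
    population.append((education, education + other))

def corr(pairs):
    """Pearson correlation between the two columns of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    sx = (sum((x - mx) ** 2 for x, _ in pairs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for _, y in pairs) / n) ** 0.5
    return cov / (sx * sy)

# Across the whole population, education dominates the variance in rightness...
print(round(corr(population), 2))        # high, around 0.95

# ...but within the stratum of the most educated, its range is squeezed and
# the 'other' factors account for far more of the remaining variance.
stratum = [(e, s) for e, s in population if e >= 8]
print(round(corr(stratum), 2))           # much lower, around 0.6
```

    Education is still doing its work in this toy model; stratifying on it simply means it can no longer explain much of the variance left inside the stratum, which is the point about within-class variables above.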

    Here's my question: is expertise the same issue for us that it was for Athens? Or has something changed?Srap Tasmaner

    Interesting question. Yes, I think something has changed. Taking my model above, what matters most when deciding between experts are those other variables (since we've removed the most important variable - degree of expertise in the field - by specifying that we're only consulting experts). That most important variable was no different in Athens than it is today: one studies, uses rules of inference to draw conclusions, and others check that the rules have been applied correctly. Over time mistakes are minimised, evidence is multiplied, and good theories develop (note the plural - I'm not dismissing underdetermination).

    But those within-class variables have all changed - independence, financial incentive, political affiliation (maybe not so much), publication metrics, tenure, social media outrage, lucrative consultancy gigs, access to data, open pre-print servers, corporate lobbying, increasing specialisation (particularly in statistics)... I don't think the people of Athens had to contend with many (if any) of these when choosing between their experts. Just as I'm sure you find in baseball, the more variables in play, the less clear the cause of any trend.
  • Some remarks on Wittgenstein's private language argument (PLA)
    Could you be in pain and not know it?Luke

    Technically, maybe. The International Association for the Study of Pain defines pain as: “an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage.” If a person were shown to have sufficient excitation of nociceptor fibres to elicit a report of pain in most humans, but for some reason they were oblivious to that state, I don't think it would be nonsensical to describe the situation as their being in pain without knowing it.

    The changes brought about by a greater understanding of how the brain works are where I think this gets interesting. The above might have sounded nonsensical 20 years ago, but not so now.

    When Wittgenstein rhetorically asks what it would even mean to doubt that here is one hand, I don't think he's claiming to have discovered a fact about the world, but rather a fact about our culture. That "I doubt I'm in pain" has no meaning is a cultural artefact: it has no meaning to us, not in general. As our culture changes (with things like advances in neuroscience), expressions which previously had no meaning may start to acquire one.
  • Anti-Vaxxers, Creationists, 9/11 Truthers, Climate Deniers, Flat-Earthers
    OK, so what's the alternative? Given our group of experts, the variance among whom we know is caused by a wide variety of factors, reasoning error being very low on that list (if present at all).

    How do we then talk about that variance in a non-lame way? — Isaac


    By pointing out that the problem at hand is a complex problem and that solving it requires decisions that are based on priorities (which cannot be established scientifically).
    baker

    True, but this is another level of analysis from the one @Srap Tasmaner and I were talking about. It's something I mentioned way back though, that much of the 'expert opinion' we're referring to in this situation is actually the 'opinion of experts' - a different beast entirely. I'm an expert in psychology, and I'm asked for my 'expert opinion' as part of my job, but if I provide an opinion about investment in mental health services, or sentencing guidelines for criminals with mitigating psychological circumstances, I'm providing the 'opinion of an expert' which (unlike my expert opinion) will include a whole set of assumptions about values which are totally outside my area of expertise (like economics or jurisprudence).

    One of the problems with the analysis in this thread is that even where it might apply to 'expert opinion' (say in very well established principles like those of physics or chemistry), it does not apply to the 'opinion of experts', which is what we're dealing with when it comes to "you should take the vaccine".
  • Anti-Vaxxers, Creationists, 9/11 Truthers, Climate Deniers, Flat-Earthers
    I'll pose this again:

    Should laypeople go with the 97% consensus on climate change? Why or why not?
    Xtrix

    At the risk of flogging the dead horse of statistical misunderstanding, I'll try another explanation. Remove climate change and replace it with issue X. On issue X the facts are such that two possible theories can both be held without being falsified by them (you're familiar with the underdetermination of theories?). Theory X1 is favoured by experts with green eyes, theory X2 is favoured by experts with blue eyes. 97% of experts have green eyes. Now, does it benefit the layman in any way to go with the 97%?

    Of course it doesn't. Because the variance in the variable {numbers of experts supporting} for each theory is caused by the distribution of the variable {eye colour} whereas the variable the layman is interested in is {rightness/accuracy/utility}, the correlation of which to the variable {numbers of experts supporting} is unknown.

    To show that the layman (assuming they're interested in being right) is better off pinning their flag to theory X1, you'd have to show that the variance in support for each theory is caused by (or at least correlated with) the variable {rightness/accuracy/utility}; otherwise the fact that theory X1 has a high score in the variable {numbers of experts supporting} has no bearing at all on the variable of interest.

    I've provided a long list of variables other than 'rightness' which correlate better with the degree of support a theory gets, and I've given a detailed account of why 'rightness' does not even account for much of the variance once the opinions we're discussing are honed down to those of experts (mainly underdetermination and the availability of informal peer review at early stages).

    If you want to further this discussion you'd have to dispute my list of variables which correlate with degree of support more strongly than 'rightness' and you'd have to provide an argument which undermines the underdetermination described by Duhem and Quine. Without either you've provided no argument to link 'rightness' with 'degree of support' among a range of expert opinion.

    You could, of course, make an entirely academic argument that if we know of no other variables, then a possible weak link between 'degree of support' and 'rightness' might be all we have to go on, but that assumes we've no priors at all which outweigh such a weak correlation, and of course we do have such priors.
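    The eye-colour example can be made concrete with a short simulation (a deliberately artificial setup, mirroring the toy numbers above rather than any real cohort): which theory is right is a fair coin on each issue, while which theory each expert backs is fixed entirely by eye colour.

```python
import random

random.seed(1)

# Toy version of the eye-colour example. On each issue, which of two
# underdetermined theories is right is a fair coin; but each expert's
# support is fixed by eye colour, an irrelevant variable: 97 green-eyed
# experts back X1, 3 blue-eyed experts back X2, every time.
trials = 10_000
majority_right = 0
for _ in range(trials):
    right_theory = random.choice(["X1", "X2"])  # unknown to everyone
    votes = ["X1"] * 97 + ["X2"] * 3            # driven by eye colour
    majority_pick = max(set(votes), key=votes.count)
    if majority_pick == right_theory:
        majority_right += 1

rate = majority_right / trials
print(rate)  # ~0.5: the 97% majority is no better than a coin flip
```

    The 97% figure looks impressive, but because the variance in support is caused by eye colour rather than rightness, following the majority does no better than guessing.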
  • Some remarks on Wittgenstein's private language argument (PLA)
    Okay, give an example of what you're talking about so we can compare (in terms of doubting one's pain). Are you referring to something like phantom limb pains?Sam26

    Not necessarily (phantom pain is still a pain). I'm talking about the fine line between what we can reflect on having happened in our minds and what we know actually happened, or what we can 'just catch' actually happening when our attention is drawn to it. Interpretation of pain is like this. When we reflect on what we're feeling, we'll generally say we're in pain, no question. But when our attention is drawn to the assessment process, we can often catch a point where it's not clear (in fact, what we're catching here is the action of the descending pain modulatory system, particularly the involvement of c-fibre signalling). Just as when I see an aberrant object in my field of view I might 'double take', first doubting that I did indeed see such an oddity, so with pain I can sometimes catch myself doing the same. You've had the experience of 'forgetting' you're in pain whilst distracted, yes? Consider what happens just after that moment - the few milliseconds where you return to feeling the pain after having 'forgotten' about it. Focusing on that moment, I wager, will reveal a conscious 'doubt' that you're in pain.

    The grammar of it (I think) is that being 'in pain' is still a public category; we can't simply declare ourselves to be 'in pain' in response to any old feeling. So there's a form of the question "am I in pain?" which makes perfect sense - it's "does this sensation I now have meet the public criteria for the category 'pain'?". Most of that questioning is done by subconscious models informed by our experiences to date (and whether that counts as 'doubt' I suppose is debatable), but some of it is conscious - even if only barely - and so excluding it from the definition of 'doubt' would seem question-begging.
  • Free spirited or God's institutionalize slave?
    What society makes of that is another matter.Wayfarer

    If you're claiming that Jesus was just a man (ie he didn't spend only a tiny fraction of his existence in pain, the vast majority of it ruling over all mankind), then what society makes of it is the only matter. Other than that - some bloke got crucified. So did thousands. Nothing more than a statistic.

    But I don't do religious threads...have at it to your heart's content. I shan't interject again.
  • Some remarks on Wittgenstein's private language argument (PLA)
    try doubting the pain you're havingSam26

    It's perfectly possible to doubt the pain you're having. Nociception is regulated by a descending pain modulatory system which in turn is regulated by cortico-limbic-striatal circuits dealing with attention, emotional response, cognitive appraisal and behaviour. These can not only alter the autonomic response, but, via the descending pain modulatory system can even use the inflammatory mediators to 'switch off' nociceptor neurons.

    In all, it's perfectly possible to doubt nociceptive sensations in no different a way from the way one doubts retinal sensations. One can question the level, location and type of pain and, via that re-modelling, alter the nature of both the pain perception and the root signals producing it.

    The problem with a lot of these discussions (qualia being the worst culprit) is that they confuse an interesting discussion about grammar with a discussion about the object of that grammar.
  • Free spirited or God's institutionalize slave?
    Jesus didn’t come out of the experience as an all-conquering emperor.Wayfarer

    On the third day he rose again from the dead. He ascended into heaven and sits on the right hand of God the Father Almighty. From thence he shall come to judge the quick and the dead.

    A statue of him in every village in the Western world, worshipped at once a week, if not more. All who didn't worship at said statues in the New World summarily beaten, hanged or put to the sword by his followers.

    In what way exactly did he not come out of the experience as an all-conquering emperor?

    Edit - this should be in the 'God 'n that' category which I usually have switched off to avoid this very response (but it's done now).
  • Anti-Vaxxers, Creationists, 9/11 Truthers, Climate Deniers, Flat-Earthers
    This is a ridiculous argument.Xtrix

    A couple of possibilities we'd want to reject off the bat...Isaac

    If all you're going to do is skim through my posts for your little triggers then don't bother replying.

    If you have a substantive counter-argument beyond simply re-asserting what you believe to be the case in opposition to any understanding of statistics or of the way expert discourse works, then I'll be glad to consider it.
  • Anti-Vaxxers, Creationists, 9/11 Truthers, Climate Deniers, Flat-Earthers
    It's a simple point really: a chess player is a cumulative person. When you play an opening, your moves have been vetted by generations before you -- and sometimes they turn out to be wrong. Top players preparing for big matches have a team that helps them come up with new ideas in the opening. Computers have changed a lot of this. (There were still adjournments when I was a young player; you and a buddy would analyze the position and then at the appointed hour, you'd play relying on that analysis. Chess has a lot of non-obvious communal elements.)Srap Tasmaner

    Of course. How naive of me to have had the impression that Kasparov just rocks up to the tournament, takes a seat and then thinks "now, what's all this about?". I get the comparison now. But the 'blunders' to which you refer - are these moves which this team of prior analysts have come up with? Are you suggesting that the grandmaster comes up with a patently wrong move, discusses it with his colleagues, his analyst team, computer software, etc., all of which say it's a good move, when in fact it's a bad one (and clearly so)? That's the equivalent we're talking about here. Before an expert voices an opinion there's a small army of people they can run it past to check for obvious errors - the actual opinion they're about to voice. In chess, people might well have potential moves vetted beforehand, but the actual move they're about to make is a choice made at the time, without consultation, and so the chance of error is increased. That's really all I meant - the fact that grandmasters make blunders is a feature of independent decision-making in real time, not a feature experts have to contend with when voicing opinions.

    Two roles to play in two different storylines, am I playing the master negotiator, or the dispassionate calculator of moves... — Isaac


    And the second isn't really optional, not even for Tal.
    Srap Tasmaner

    It is as a role. The archetype doesn't have to be achievable, that's why heroes are all greater than it's ever possible to be. It's a direction, not a destination.

    Still I think there are clear reasons to consider some narratives as unwanted intruders. Which of these two candidates is the better engineer? Your personal race narrative can help you make a better racist decision, but not a better engineering decision.Srap Tasmaner

    Yes, indeed, it's the reason we have stories (by which I mean actual storybook stories).

    If we're forced to say stuff is purpose-relative, that'll work, but it feels lame to say that all the time, hand-wavy pragmatism.Srap Tasmaner

    OK, so what's the alternative? Given our group of experts, the variance among whom we know is caused by a wide variety of factors, reasoning error being very low on that list (if present at all). How do we then talk about that variance in a non-lame way? Should we pretend that reasoning errors are mostly to blame and discuss the differences ad infinitum in the full knowledge that we'll never resolve them because they're not reasoning-based differences in the first place? That seems like a pointless bit of self-flagellation.
  • Anti-Vaxxers, Creationists, 9/11 Truthers, Climate Deniers, Flat-Earthers
    In chess one doesn't have the benefit of peer review, or even a chat with a couple of colleagues — Isaac


    And all that's basically wrong, but I don't know that it matters.
    Srap Tasmaner

    Intriguing. I meant really that when one makes a move in chess one cannot check with colleagues that it makes sense first, as one can do with an expert opinion. Is that wrong? Or am I missing the point?

    expert opinion in the public domain is largely (if not wholly) past the stage of blunders in basic reasoning — Isaac


    Probably?! But the blunder idea is not the main point anyway.
    Srap Tasmaner

    I see it as being quite essential to the idea that following the majority is a safer bet. If the variance is not caused by blunders (because we're past that) then how is cohort agreement predicting truth (fewest blunders was the original mechanism proposed)?

    I'm saying they were so caught up in this negotiation and deciding what sort of game each felt like playing under the circumstances that they essentially forgot these are also actual moves on the board.Srap Tasmaner

    Ah, I see. Cool insight. Still a storyline though. 'Negotiations' vs 'rule-based game'. Two roles to play in two different storylines, am I playing the master negotiator, or the dispassionate calculator of moves...

    Narratives can help...or they can get in the way of analysisSrap Tasmaner

    I don't see any evidence of analysis existing outside of a narrative, nor can I see any cognitive mechanism whereby it could be done. All conscious activity is interpreted within a model of its role in the wider context; it's quite fundamental to how hierarchical neural networks work. I don't think the issue is narratives getting in the way; the issue is a poor choice of narrative getting in the way.
  • Anti-Vaxxers, Creationists, 9/11 Truthers, Climate Deniers, Flat-Earthers
    I used to be a tournament chess player.Srap Tasmaner

    I thought I recalled something along those lines...the analogy was tailored.

    It is a fact that grandmasters make blunders.Srap Tasmaner

    True, but that's just my failure to produce a suitable analogy. In chess one doesn't have the benefit of peer review, or even a chat with a couple of colleagues over the canteen table at which one's embarrassingly amateur oversight can be pointed out before it progresses beyond the university walls (speaking from experience!). So whilst I agree with you about the process, I still think it's true to say that expert opinion in the public domain is largely (if not wholly) past the stage of blunders in basic reasoning.

    The notion of a majority of experts being a safer bet (if what we're after is the truth), relies on the variance in opinion being caused by factors related to proximity to truth (soundness of reasoning, exhaustiveness of evidence...), and on those factors being randomly distributed, such that a distribution mean will approach it*. I'm not arguing that those factors aren't crucially important in approaching truth - they are - I'm arguing that they're not a significant cause of the variance in expert opinion and so we can't use the mean of the sample of opinions as a proxy for truth. If the variance of the sample is unrelated to it, then the mean has no more predictive power for it than the upper quartile, or the second standard deviation.

    *the alternative is that it's not randomly distributed, but then there's no reason to think the mean approaches it and not, say, the upper or lower quartile (except we'd never know which).

    Neither player even got to the point of dismissing the possibility of blunder on reputation grounds (and there are stories of that); they just didn't see the position for what it was. Looking over the shoulders of amateurs, they would have though.Srap Tasmaner

    Sort of turns the notion on its head, does it not? This is a really interesting example. Here the social narrative (grandmasters playing tournament chess) ruled out a storyline which might have worked better. I'm going to risk your polite wrath by suggesting that the 'fact of the matter' (whether the wrong move was a 'blunder') is itself a socially constructed post hoc story. We don't have real-time access to the actual mental activity which precipitated the decision to make the bad move; we tell ourselves a story about it after the fact. Maybe it was originally part of a genius master plan which was forgotten moments after the move was made and so never followed up on...

    ...but I do see what you mean.
  • Anti-Vaxxers, Creationists, 9/11 Truthers, Climate Deniers, Flat-Earthers
    I haven't read Noise yet. Have you looked at it?Srap Tasmaner

    Briefly, I was given a copy so it's on my reading shelf. As I said before I don't entirely see eye to eye with Kahneman, but I've always liked his style. Oddly, where I've disagreed with him has been on almost exactly the point we are now discussing - the extent to which awareness of these biases is part of a real solution, or just another post hoc justificatory tool. Anyway...

    Here's one example I recall: umpires are, as a group, somewhat reluctant to make game-deciding strike calls. That is, when a called strike would decide the outcome of the game, then and there, umpires are slightly more likely to call a ball a pitch they would usually call a strike.Srap Tasmaner

    Nice example.

    I always get the impression that you think there is no such process being interfered with, that all there is is my myth versus your image, that you can only reduce the influence of one myth by replacing it with another, that it's all noise and bias all the time and nothing else. Say it ain't so, Joe.Srap Tasmaner

    It's an assumption about the audience, that's all. Once one has reached a reasonable level of academic skill, say PhD (or the equivalent, for those who've not had the opportunities, or took an alternative route), the variance in the application of reason is very small (there's a limited number of sanctioned 'moves' and all PhD level theorists will know all of them very well). The issue of analysing flaws in reasoning simply drops down the list in most cases.

    It's a bit like analysing the moves of a chess grandmaster... if they make a suboptimal move, the first theory as to why might be that it's part of some new strategy, the second maybe tiredness or distraction, the third forgetfulness...maybe they want to lose...maybe it's game fixing...The very last possibility anyone considers is that they just didn't know about the en passant rule.

    Likewise with experts in a field. If there's persistent disagreement (lasting beyond peer review error correction, or updated data), we'd be silly to jump to a failure to properly apply one of the basic rules of inference. It's far and away more likely to be one of the other factors, just like with the chess grandmaster, so that's where the interesting analysis lies.

    Proper reasoning is just the qualifying round, not the playoff. Sure, we can still disqualify contenders at the first round, but almost every serious contender has cleared that stage with ease. The role in a social narrative is the playing field on which the finals take place.
  • Anti-Vaxxers, Creationists, 9/11 Truthers, Climate Deniers, Flat-Earthers
    My question is, how do you know that what's left definitely isn't reasoning?Srap Tasmaner

    Let me start by just clearing up that reasoning (by which I mean a set of thinking methods well known to preserve truth or approach truer conclusions) is not redundant as an explanation of the differences between various people's conclusions. It's a necessary but not sufficient factor in the explanation. The problem is that reasoning is either not that hard (PhD-level experts should all be perfectly capable of it), or it becomes something clandestine and ephemeral - some property we can't quite explicate - which means we can't then demonstrate where it has, and has not, been used.

    Consider the problem the other way around. Let's assume, as our default, that all decisions are arrived at by reason alone. We then want to explain the problem of disagreement among epistemic peers.

    A couple of possibilities we'd want to reject off the bat, as they undermine the whole project, are:

    a) that one peer simply 'gets it' where another doesn't - if we start explaining reasoning in terms of some occult property of some brains then we lose any warrant to claim the process gets us any closer to the truth. We need to be consciously aware of the process, so we can show where it has and has not been used.

    b) that one peer is more intelligent than another (ie they're not quite peers) - accepting this leads either, again, to the notion that there's some property our measurements (PhD qualifications and the like) don't capture, so we don't want to go there; or alternatively, as @Yohan pointed out earlier, it undermines the idea of majority consensus. If rightness is just linearly related to intellect then the majority are almost certainly wrong, as they don't represent the most intelligent cohort. The group that is right will be one of the minorities, but we won't be able to judge which (are they the most intelligent, or the most stupid?) because we won't understand the arguments.

    In order for it to play the part you want, we need the rational method of thinking (reasoning) to be a series of explicable steps working by agreed rules of inference which any suitably qualified person can 'get'.

    So what's left for reasoning?

    Perhaps one peer made a mistake - missed a step, or some evidence - simply by oversight. But this is either a) easily rectified by simply pointing it out (yet in cases of persistent disagreement among epistemic peers this has already happened); or b) not so easily pointed out - a step the 'right' peer didn't even realise they'd made, or one the 'wrong' peer just doesn't 'get' - in which case we're back to reasoning being some orphic process that is partly subconscious and so can't be demonstrated to have been used.

    Perhaps one peer has been mistakenly allowed into the set of suitably qualified persons and so doesn't understand the reasoning given - but that means our methods of selecting the qualified set are flawed, yet we need to trust that very method in order to render the judgements of the qualified set more likely to be right.

    So we don't seem to be able to explain the problem of persistent disagreement among epistemic peers by differences in reasoning without defining 'reasoning' in such a way as it loses the very properties that make it such an attractive explanation in the first place.
  • Anti-Vaxxers, Creationists, 9/11 Truthers, Climate Deniers, Flat-Earthers


    Indeed. The key variable being how IBM-like any theory is. IBM were the 'right' choice from the first moment they produced their first reliable, capable machine. But they weren't the most popular choice (even among experts) at that moment. Expert opinion took a while to catch up with the way reality actually was. At first, most experts would eschew the newcomer in favour of the tried and tested old-timer (probably triplicate accounting pads in this case). They'd have been wrong.

    New theories take time to become established, so in the meantime proportion of expert opinion will be a poor predictor of a theory's success. All (sufficiently detailed) theories about how to handle COVID are new theories, they have to be since the detailed circumstances are unprecedented. So proportion of expert opinion is a poor predictor of any theory's success.

    Also, I didn't spot this last time, but worth commenting on...

    instead of what experts actually do, learn from each other's mistakesSrap Tasmaner

    I think this is a common misconception which is causing a lot of confusion between long-standing expert opinion and expert opinion on contemporary issues. We don't learn from each other's mistakes - or at least, if we do, it takes ages. It's simply not a significant factor in any contemporary issue. The data are too thin on the ground and underdetermine the theories by even more than usual. The overwhelming majority of theories are perfectly well supported by the data. No one's necessarily made a mistake and no one's necessarily missed anything. It's simply that the data set is too small or too low quality to determine between competing theories. So experts fall back on which theories they prefer, find more intuitively compelling, find less risky to throw their weight behind... etc.
  • Anti-Vaxxers, Creationists, 9/11 Truthers, Climate Deniers, Flat-Earthers
    but how exactly is "how hard the flaw is to spot" defined? As I understand it, you want this to be the independent variable; it is not defined as the percentage of experts within a population that miss it. But that's pretty weird because, on the one hand, "spotting" is a concept that implies the gaze of an expert, and, on the other hand, the percentage of the expert population that misses it tracks exactly how hard it is to spot. They're equivalent, aren't they?Srap Tasmaner

    One can be measured by the other as a proxy, but a flaw's difficulty to spot exists independently of the existence of experts spotting it, so it's still an independent variable.

    I'm having trouble thinking of any conceivable use for it. If we actually did stuff this way (instead of what experts actually do, learn from each other's mistakes) then we would collect data that would help us estimate x. We would not leave ourselves in the position of having absolutely no idea what its value might be.Srap Tasmaner

    Exactly. Consider it a proof of principle if you like. The question is the positive predictive power of the variable {degree of agreement in a cohort}. I've engineered an example where the PPP is actually 0, just to show how it's done, but in reality we do know some things about a question's orthodoxy, so the PPP of the variable might well be above zero. The point is it's just never that good because of the uncertainty about how many experts we'd expect to miss the flaw.

    There are simply way better variables available, in terms of this PPP. The factors I listed in my first post in this, for example. To borrow a term from Taleb, one of the strongest variables in terms of PPP is the degree of 'skin in the game'. The engineer who gets sacked if the bridge fails will spot a flaw 99 of his less involved colleagues will miss. If someone is risking ridicule and ostracisation supporting a theory, they're far more likely to have checked it thoroughly than a hundred comfortable peers publishing what they already know will be well accepted and applauded.

    This is why I find this modern trend toward tribalism so repugnant. It exaggerates the degree to which publishing in line with current thinking is in one's best interests which makes it less likely that such papers are going to be thoroughly checked (there's no real 'skin in the game') and so 'current thinking' can drift unchecked. But that's another issue...

    The point here is simply that degree of agreement in a cohort is a low-value variable when it comes to the likelihood of being right, compared to other more powerful ones like skin in the game. It gets more powerful the greater the heritage of the theory (which is why it intuitively feels right - and why Xtrix can so easily play on these intuitions by using silly examples like flat earth and vaccination in general). Those are well-established theories, so the PPP of agreement in a cohort is quite high, though still not great. But with newer theories, the PPP of agreement within a cohort is terrible - worse still when the public debate is so toxic.
  • Anti-Vaxxers, Creationists, 9/11 Truthers, Climate Deniers, Flat-Earthers
    Except maybe add the word professional..."1% change of a professional spotting it". In theory, a professional should have a higher chance of spotting a flaw than a laymen, such that a laymen would have even less than a 1% chance of spotting the flaw.Yohan

    Thanks, that's indeed what I meant, and a very necessary bit of clarity. I've edited accordingly.
  • Anti-Vaxxers, Creationists, 9/11 Truthers, Climate Deniers, Flat-Earthers
    Let me try another example that might make this clearer (for anyone who might be reading along).

    Imagine there's a bridge across a river with a single fatal flaw. The flaw is so hard to spot that there's only a 1% chance of an expert spotting it. We ask 100 engineering experts to check the bridge. What would be the expected proportion of safe to unsafe assessments? 99:1. We just specified that the problem is so hard to spot that there's only a 1% chance it will be spotted so we'd expect only 1 in every 100 engineers to spot it - 99% of experts would be wrong.

    Now imagine the flaw is so easy to spot that only 1 in every 100 engineers would miss it. Now we'd expect a 1:99 ratio of safe to unsafe assessments - 99% of experts would be right.

    So the variable that matters is how hard the flaw is to spot, not how many experts spot it.

    Since that's an unknown variable, there's a 50% chance we're in the first scenario and a 50% chance we're in the second. So the ratio of experts judging safe to unsafe is irrelevant; it just cancels out.
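    The arithmetic of the two bridge scenarios above can be sketched in a few lines (a minimal illustration only - the 1%/99% detection rates and the 100-expert panel are the figures from the example, not real data):

```python
def expected_assessments(p_spot, n_experts=100):
    """Expected (safe, unsafe) verdict counts when each expert
    independently has probability p_spot of spotting the flaw."""
    unsafe = round(p_spot * n_experts)  # experts expected to spot the flaw
    safe = n_experts - unsafe           # experts expected to miss it
    return safe, unsafe

# Scenario 1: the flaw is so hard that each expert has only a 1% chance
# of spotting it -> we expect 99 "safe" verdicts to 1 "unsafe".
print(expected_assessments(0.01))  # (99, 1) - 99% of experts are wrong

# Scenario 2: the flaw is so easy that each expert has only a 1% chance
# of missing it -> we expect 1 "safe" verdict to 99 "unsafe".
print(expected_assessments(0.99))  # (1, 99) - 99% of experts are right
```

    The same headcount of experts is consistent with both scenarios, which is the point: the verdict ratio tracks how hard the flaw is to spot, not whether the bridge is safe.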
  • Anti-Vaxxers, Creationists, 9/11 Truthers, Climate Deniers, Flat-Earthers
    The question is whether the unlikely turns out to be right as often as you predicted it would, neither more nor less.Srap Tasmaner

    Well, it does, quite literally, 100% of the time - that's just the nature of scientific progress (by which I mean the mechanism, not each and every unlikely idea, of course). Every single thing we now consider to be 'right' started out as the maverick theory of some lone scientist. The question is not whether it happens but how far along the curve we are - how far we have progressed, on any given question, away from lone mavericks and toward everyone thinking the same 'truth' (the truth which 20 years ago was abject nonsense).

    The key thing for the probabilities argument is that the numbers are still exhaustive. If there are 20 scientists in the world, then every single one will take some position on every issue. So for a newly emerging theory (say, germ theory in the late 19th century) we'd expect 19 to think it nonsense and 1 to think it right. By the early 20th century we'd expect 19 to think it right and 1 to think it nonsense. Whether the majority are actually right is entirely a function (here) of where one is in the progress of a theory*

    *All of this assumes other factors are equal.

    With our current issue (a novel vaccine, using newish technology to fight a never-before-seen virus on an unprecedented scale), it's very hard to believe we're so far along the curve that what the majority believe has any bearing at all on the matter. Without data on how far along the normal maverick->commonplace curve we are, we've absolutely no way of judging the relationship between majority belief and 'rightness'.
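    The exhaustive-numbers point can be made concrete with the 20-scientist germ-theory example above (the headcounts are the hypothetical ones from the example, not historical data):

```python
def majority_verdict(n_supporters, n_scientists=20):
    """What the majority of an exhaustive cohort says about a theory
    at a given point on the maverick -> commonplace curve. Every
    scientist takes one side or the other."""
    return "right" if n_supporters > n_scientists / 2 else "nonsense"

# Late 19th century: 1 supporter, 19 sceptics.
print(majority_verdict(1))   # "nonsense" - yet germ theory was right
# Early 20th century: 19 supporters, 1 sceptic.
print(majority_verdict(19))  # "right" - same theory, same truth value
```

    The theory's truth never changes between the two calls; only its position on the curve does, which is why majority opinion alone is such a weak signal for a new theory.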

    All of this is, of course, common knowledge. Which is why, prior to this social-media-induced mess, it was perfectly normal to accept a person's position as being reasonable on the grounds of it having... well, reasons... good grounds... justification... adequate support... the rather old-fashioned Mertonian principle we used to believe in. This notion that one must hold to whatever the consensus currently thinks is entirely modern, and quite repugnant.
  • Anti-Vaxxers, Creationists, 9/11 Truthers, Climate Deniers, Flat-Earthers


    You don't really need to go this far, your first approach was fine.

    Probabilities are a function of variables. The variables which make an expert more likely to be right are things like: diligence, lack of conflict of interest, peer review, a good understanding of statistics and experiment design, a secure position (like tenure) not reliant on frequent publication, a willingness to be wrong, a wide field of funding sources, pre-print publication, a good network of peripheral experts, independent employment, etc.

    There simply isn't a mechanism whereby the agreement of a majority of one's peers could affect the likelihood of a theory being right. Most theories which differ do so because of underdetermination of theory by data, not because of some error - but even if error were the cause, despite @Xtrix's bizarre notion, we don't all check each other's work.
  • Coronavirus


    Since you're all making exactly the same argument, I'll reply to you all here and save time.

    1. People take risks with their lives all the time for all sorts of trivial reasons (hence my list of preferences, and my example of skydiving). My risk of dying from Covid even if unvaccinated is extremely small (1 in several thousand); there's no dispute about this - experts all agree here. As such it is completely unremarkable, on a personal level, that I might choose to remain unvaccinated and take that risk for entirely trivial reasons (preferring not to take prophylactic medicine and preferring not to support the pharmaceutical industry are just two examples). I don't need to justify those preferences any more than a skydiver needs to justify his enjoyment of free-fall. To argue against this position you'd need to show either:
    a) the risk of me dying from Covid is not as small as all the experts say it is, or
    b) people do not normally take such small risks for trivial preferences.
    Otherwise my taking this small risk for my own trivial preferences is perfectly unremarkable.

    2. People also take small risks with other people's health for their own trivial preferences, or to ensure long-term goals which may be non-trivial, such as political preferences. That the evidence for reduction in transmission is thin, and that there are serious problems with vaccine distribution, are not fringe ideas; they are positions taken by institutions like the WHO and the JCVI. That my risk of infecting another, despite taking the advised non-pharmaceutical measures, is small is again not even in dispute; it is standard opinion among experts. My taking this small risk out of a personal preference for a longer-term societal goal is, again, unremarkable and within the range of normal behaviour.

    3. People knowingly acting in a way that puts their health services under strain is a problem, but one for which lack of vaccination among the otherwise healthy is dwarfed by other lifestyle choices. As with the other issues above, it is normal only for a person to limit their imposition to below an acceptable threshold; it is not normal to require a person to limit their imposition until it cannot be limited any further. It is often repeated that vaccines reduce my risk of getting ill, but this alone is an insufficient argument. Many actions reduce the risk of harm to others and of needing hospital treatment. We are not normally required to continue taking these actions until the risk has been reduced to zero, only until it has been reduced to below an acceptable threshold. Again, the evidence that my chances (even unvaccinated) of needing a hospital bed, or of infecting another person (with proper hygiene precautions), are very small is not even in question; it is the consensus among experts.

    Your arguments all suffer from a common theme of error. You assume a single purpose (minimise chances of getting covid) and so any course of action which has a lower probability of achieving that end is considered irrational. But that is simply not how decision-making works. We have multiple goals, only one of which is not getting covid (only one of which is even staying alive). It's completely normal to take a higher risk option in one of our goals in order to reduce risk in another (and no, I'm not talking about 'risk of death', that would just be one of our goals - risk here refers to 'risk of failure').

    It's normal, rational behaviour to balance the risks from a range of strategies toward one goal with the risks from a range of strategies toward another. It's perfectly normal (and indeed healthy) for a society to have within it a range of people whose risk-balancing strategies are different, because this hedges overall risk better than a single risk-balancing strategy would. Societies, like people, have multiple goals, and will balance the risks by adopting a slightly more risky strategy toward one goal in order to reduce the risk to another. Again, the fact that a range of risk-balancing strategies is better than a single one is not even in question; it's the standard opinion of risk management experts. Public policy is, however, required to be simple and decisive. The existence of a public policy in favour of one risk-balancing strategy is not indicative of a consensus that it is either the only, or even the best, risk-balancing strategy; it is reflective of the fact that public policy is blunt and has to be interpretable by the lowest common denominator.
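    The hedging point can be illustrated with a toy calculation (the two strategies, the two states of the world, and all the loss numbers are invented purely for illustration):

```python
def worst_case_loss(share_a, loss_a_by_state, loss_b_by_state):
    """Worst loss across possible states of the world for a population
    with share_a of people on strategy A and the rest on strategy B."""
    losses = [share_a * la + (1 - share_a) * lb
              for la, lb in zip(loss_a_by_state, loss_b_by_state)]
    return max(losses)

# Hypothetical setup: state 1 hurts strategy A badly, state 2 hurts B badly.
loss_a = [1.0, 0.0]
loss_b = [0.0, 1.0]

# Everyone on strategy A: one state wipes out the whole population's goal.
print(worst_case_loss(1.0, loss_a, loss_b))  # 1.0
# Population split 50/50: the worst state is only half as bad.
print(worst_case_loss(0.5, loss_a, loss_b))  # 0.5
```

    The mixed population never does as badly in any single state as a uniform population does in its worst state, which is the sense in which a range of risk-balancing strategies hedges better than one.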

    Since I've said all this before and it's just circled back to the same misconceptions (plus another dose of the usual jeering from the crowd and aspersions on my character), I'm going to stop there.