One way I've thought of it is: out of all professionals in a field, the majority will most likely be of fairly average general competence compared to their peers, while there will be at least two small minorities, the far-below-average and the far-above-average professionals. So when any professional comes to a conclusion different from the majority's, there is roughly a 50% chance that the person will be in the far-below or the far-above average group. — Yohan
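Yohan's 50% figure can be checked with a quick Bayes calculation. The numbers below are hypothetical, not from the thread: they assume symmetric tails of 10% each and assume dissent rates per group. Even granting symmetric tails, once average professionals dissent at any nonzero rate, the chance that a given dissenter is far above average falls below 50%.

```python
# Hypothetical population shares: far-below, average, far-above (assumed values)
p_below, p_avg, p_above = 0.1, 0.8, 0.1
# Hypothetical dissent rates per group (assumed: tails dissent more often)
d_below, d_avg, d_above = 0.5, 0.05, 0.5

# Total probability that a randomly chosen professional dissents
p_dissent = p_below * d_below + p_avg * d_avg + p_above * d_above

# Bayes: P(far-above | dissent)
p_above_given_dissent = p_above * d_above / p_dissent
print(round(p_above_given_dissent, 3))  # prints 0.357
```

Under these assumed numbers a dissenter is far above average only about 36% of the time, because the large average group contributes dissenters too; the 50/50 split only holds if dissent comes exclusively from the two tails and the tails are equal in size.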
So the variable that matters is how hard the flaw is to spot, not how many experts spot it. — Isaac
But how exactly is "how hard the flaw is to spot" defined? As I understand it, you want this to be the independent variable; it is not defined as the percentage of experts within a population who miss it. But that's pretty weird because, on the one hand, "spotting" is a concept that implies the gaze of an expert, and, on the other hand, the percentage of the expert population that misses it tracks exactly how hard it is to spot. They're equivalent, aren't they? — Srap Tasmaner
I'm having trouble thinking of any conceivable use for it. If we actually did stuff this way (instead of what experts actually do, learn from each other's mistakes) then we would collect data that would help us estimate x. We would not leave ourselves in the position of having absolutely no idea what its value might be. — Srap Tasmaner
Medicine/science informs, ethics/morals decides, policies/politics implements. — jorndoe
The point here is simply that degree of agreement in a cohort is a low-value variable when it comes to likelihood of being right, compared to other more powerful ones like skin in the game. — Isaac
Just ignore the history of medicine — Ambrosia