Something like this coronavirus situation, despite the way my numerous detractors paint it, there's just no way of pinning down any truth of the matter. Most (sensible) theories can be supported by the range of facts available, so all discussion can show us (if we assume it's anything more than storytelling - of which I've yet to be fully convinced) is the manner in which people muster their particular facts to support their particular theory — Isaac
For simplicity, starting with a single issue: whether to get vaccinated.
I don't know if the following is any good at all -- it's all off-the-cuff -- and it's not perfectly obvious how it connects to our recent more abstract exchange, but it has the virtue of going directly at the main question...
What's interesting, and with any luck helpful, here is that this is not the typical case of ethical judgment. In our case, everyone forming such a judgment has faced the same choice themselves.
That means there are two obvious options, which may or may not be important:
1. Approve of making the same decision I did; disapprove otherwise.
2. Approve of following the same process I did; disapprove otherwise.
For people who want both 1 and 2, there's a potential quandary if someone uses the same process but with a differing result. Presumably that indicates they used differing inputs. They shouldn't do that, hence
3. Approve of using the same inputs I did; disapprove otherwise.
If I did the math right, 2 + 3 = 1: the same process applied to the same inputs yields the same decision -- unless the procedure in 2 is stochastic.
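For anyone who likes the bookkeeping spelled out, the three rules can be written as predicates. This is a toy sketch of my own devising -- the `weigh_odds` procedure and its risk cutoff are made up for illustration; nothing in the argument depends on them:

```python
# Rules 1-3 as predicates comparing my decision-making to someone else's.

def rule1(my_decision, their_decision):
    """Approve iff they made the same decision I did."""
    return my_decision == their_decision

def rule2(my_process, their_process):
    """Approve iff they followed the same process I did."""
    return my_process is their_process

def rule3(my_inputs, their_inputs):
    """Approve iff they used the same inputs I did."""
    return my_inputs == their_inputs

# A made-up deterministic "process": weigh the odds against a cutoff.
def weigh_odds(inputs):
    return "vaccinate" if inputs["risk"] > 0.1 else "decline"

my_inputs = {"risk": 0.3}
their_inputs = {"risk": 0.3}

# "2 + 3 = 1": if the process is deterministic, same process plus
# same inputs forces the same decision.
if rule2(weigh_odds, weigh_odds) and rule3(my_inputs, their_inputs):
    assert rule1(weigh_odds(my_inputs), weigh_odds(their_inputs))
```

If `weigh_odds` flipped a coin somewhere, the final assertion could fail -- which is exactly the stochastic caveat.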
This might not seem like much of a basis for an ethical judgment, but if you presume everyone facing this choice does so with the intention of behaving ethically, of judging their own decision to be an ethical one, it's not all that crazy.
Can we make the just-like-me approach fail? Is that even possible, if I've set up my criteria this way?
I'm going to cheat now, because it looks to me like the weak spot is 3 (which in turn will tend to weaken 1). This is the weak spot because "inputs" looks way too big: that's not just what you read in the news, or what you read in scientific journals, if you do that sort of thing, or what you may have experienced either personally or professionally; it's also you, your personal health and your circumstances. If you're allergic to something in the vaccine, you can't take it, even though I can, and there's no way I can ignore that and be ethical.
So how do we account for that sort of difference with a rule as simplistic as the rules above? Remember, we're only doing this -- only making these ridiculous rules -- because in this case everyone judging another's decision has had to make exactly such a decision themselves, and that's not the usual case. We're not crafting the General Rules of Ethical Behavior; we're letting people leverage the work they already put in making their own decision to reduce the burden of judging others. Because we can.
So far as I know, I am not allergic to anything in the vaccine. Does someone who is have to make the same choice I did about whether to get vaccinated? That looks like a definite "no" to me. They had no choice. What does that mean for our rules? Have we succeeded in forcing failure? Does rule 3, being overbroad, fail, and with it many instances of 1? (Some people might just plump for 1 straight-up, and they're fine.)
I don't think so. I think you get to keep just-like-me and simply exclude the allergic. They didn't face the choice I did, made no decision like or unlike mine, and I judge them not.
How far can we go with this faced-the-same-choice-I-did business? Do we expect the circle to shrink and shrink and shrink until it's only me that faced the same choice I did? I don't see why. But I admit it is now unclear whether the hard part -- which we have made shockingly easy for ourselves so far -- is reaching an ethical judgment, or deciding who is subject to our judgment.
For a concrete example, suppose I am obese and have diabetes. I am at risk of getting seriously ill and needing hospitalization if I get infected; for simplicity, let's say I consider it an ethical duty to minimize the risk of serious illness** so I get the vaccine. Now let's suppose someone else, call him "Isaac", has neither of the risk factors I do and is generally in very good health; Isaac chose not to get vaccinated. Do I count Isaac as facing the same choice I did? He had to decide whether or not to get vaccinated; he may have exactly the same goals I do of not getting seriously ill and needing to be hospitalized; he may have weighed the odds just as I did using the same cutoff for acceptable risk I did (this would be a rule 2 sort of thing) -- but wait a minute! What odds was he weighing? Were they the same ones I was weighing?
You get your choice here. I'm inclined to say yes, because it captures the point that we get whole columns of odds from our local public health officer, broken down by risk factor, maybe age, and so on. I kinda want those to count as one thing because they have one source and we acquire them as one thing. More tellingly, the odds are not exactly a fact about you; that certain odds apply to you, and certain odds don't, is a fact about you.
Which brings me right to the next bit: Isaac weighed the same odds I did; he selected from those columns of odds the ones that apply to him, just as I did; but the particular odds he selected were different because he's different. There is an exact point where -- even though he followed the same process I did with the same external inputs -- because the process involves direct reference to the decision maker, he diverged!
What do I do about the Isaac case? Remember, I don't really want to say that he failed to use the same inputs as I did (that I used me, and he used Isaac) and so is subject to my judgment but fails rule 3: he read exactly the same odds sheet I did, and I want to call that responsible and ethical. But when he did, and checked for his risk factors, he found different odds applied to him.
That's a problem because I approve of Isaac's inputs, and I approve of his process, so I should approve of his choice; but his choice was different from mine, so how can I approve? The whole point of 2 + 3 = 1 is that it's how I judge my own decision to have been ethical. If I have to let Isaac slide, I have to give up something: either I have no basis for concluding that my own decision was good (before, it was good because I did 2 and 3 right), or I just give up all the rules past 1 and disapprove of Isaac.
I can plump for same-decision-as-me, but suppose I really like the 2 + 3 = 1 approach; can I rule that Isaac, because his odds were different, did not face the same choice I did and is not subject to my judgment, just as if he were allergic to the vaccine? I think that's a cop-out. You save the model from failure only by pushing the failing case outside the domain of application.
Besides, maybe we don't want to give up judging people with different odds; maybe Isaac is going through the same thought process we are and wants to be able to tell people with multiple risk factors, people like me, that the right decision for them is to get vaccinated.
Where we stand: we have forced the complete version of just-like-me, with all 3 rules, to fail. I have to approve of Isaac's decision because of rules 2 and 3 -- he did the same thing I did; but I have to disapprove of Isaac's decision because of rule 1 -- he didn't do the same thing I did.
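The divergence can be made painfully literal. In the toy sketch below (again my own construction, with a made-up odds sheet and a made-up cutoff), the shared process takes the decision maker as an argument, so the same process fed the same external input still yields different decisions:

```python
# Hypothetical odds of serious illness, broken down by risk category,
# standing in for the columns from the public health officer.
ODDS_SHEET = {
    "high_risk": 0.30,  # obese, diabetic -- me, in the example
    "healthy": 0.01,    # no risk factors -- Isaac
}
CUTOFF = 0.05  # shared threshold for acceptable risk (rule 2 territory)

def shared_process(odds_sheet, person):
    # Same procedure, same external input (the whole odds sheet) --
    # but the procedure reads off the row that applies to *this* person.
    return "vaccinate" if odds_sheet[person] > CUTOFF else "decline"

me = shared_process(ODDS_SHEET, "high_risk")   # -> "vaccinate"
isaac = shared_process(ODDS_SHEET, "healthy")  # -> "decline"

# Rules 2 and 3 approve (same process, same odds sheet),
# yet rule 1 fails: the decisions differ.
assert me != isaac
```

The self-reference is all in that `person` argument: strip it out and 2 + 3 = 1 holds; leave it in and the identity breaks exactly where the essay says it does.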
Our options:
- add more rules
- give up everything past rule 1 -- looks bad, that's like just defining your decisions to be ethical
- give up rule 1 but keep 2 and 3 -- appealing because I still count as ethical, but a little weird that my actual decision drops out -- wasn't the whole point to judge the decision itself, mine and Isaac's?
- give up just-like-me altogether -- too much work, or at least too soon
** You could read this as simple rationalist egoism, but there are alternatives: maybe I consider life a gift deserving respect and conservation, and that includes my own; or maybe I feel I have a duty to those who need or care about me; or maybe I'm concerned about being a burden on the healthcare system. Positing this as my goal keeps things simple and lets us treat the moral and rational approaches the same.