Taking a risk implies one lacks the wisdom and/or power to produce the intended effect and must rely on luck. It cannot be a moral act, thus there's no point in talking about justification. — Tzeentch
I'm asking if it's immoral to take the higher-risk option. You answered that it is not moral. That doesn't answer the question, as it could still be neutral.
Perhaps certain certainties are possible, but definitely not to the extent that we can divine the future life of a person. — Tzeentch
That is contradictory for our purposes. If you claim that we can sometimes be certain our actions will lead to their intended effect, then we need to be able to divine the future life of the person we're acting upon. If we cannot do that, this reduces to:
One can conclude that certainty is impossible, and thus moral acts are impossible, — Tzeentch
The certainty you require for moral action is precisely the certainty needed to divine the future life of a person.
As I said, criteria 3 is a confirmation of criteria 2. If criteria 2 cannot be met, then criteria 3 (ergo the result) is irrelevant. — Tzeentch
It is very relevant. If I lack the wisdom to do something and attempt it anyway, that's not moral. However, if it doesn't result in a negative consequence, it's not immoral either, leaving us at neutral. Again, there is a world of difference between neutral and immoral acts.
That one has no idea of the consequences of their actions, I suppose. — Tzeentch
That's the problem with your system. Since one has no idea of the consequences of their actions, any action is as justified as another when the only criterion for judging immorality is consequence.
I'm arguing inaction isn't wrong, and pointing out the inconsistencies that arise when one tries to argue it is wrong. — Tzeentch
I know, and I'm saying these "inconsistencies" are just as present in your system of consequentialism. To be moral, one needs to not do immoral things. In your system, what is "immoral" (as opposed to not moral, which is determined by intention) is determined only by consequences. Thus, any time you act with good intent, you would be required to keep track of all the consequences of your actions. Do you do so? Do you have some flowchart keeping track of all the consequences of every action you've ever taken? No. You don't spend all your energy tracking the morality of every act you take.
Thus, for the same reason, even if inaction is wrong, that doesn't mean I have to spend all of my energy tracking the morality of every time I choose not to act.
one is unavoidably in inaction towards many perceived problems at any given time — Tzeentch
False. At the moment, I don't perceive any problem I could help with that I'm not helping with. If there were such a problem, say a beggar approached me while I had a million dollars to spare, it would be wrong not to help them.
Besides, I could very easily argue that spending every ounce of energy tracking whether there is a problem I could help with but am not helping with doesn't help anyone, and so the best strategy is to just check every once in a while, as most people do.
If you suspect that the act of buying candy is actively causing people's deaths, it would certainly be a good idea to stop doing it.
In this instance you are already hinting towards the fact that your buying of the candy is not causing people's deaths, just like not pressing the button to save Sarah does not cause her death — Tzeentch
I'm very interested in knowing why I am causing people's deaths in the first example but am not causing them in the second. What is your definition of "cause"? In both cases, mind you, I'm not the one doing the damage; it's the murderer or the kidnapper, respectively, who is responsible, isn't it? So why am I causing deaths in one case but not in the other?
Not if the intent was to murder, obviously. Then the act is wrong from the outset. We have already been over this. — Tzeentch
Right, but the intent could always be benevolent. The murderer could bet on the 0.001% chance that the victim is actually suicidal and wants to be killed. You can't say the act is wrong until after it is done, and inevitably the 99.999% outcome is what happens. THEN it becomes wrong.
Let's say there is an extremely lucky serial killer. The killer always has the benevolent intent of helping out suicidal people, or sending as many people to heaven as possible. The killer picks targets randomly, but by some statistical miracle they all turn out to have been suicidal and wanting to die. Assume the killer wants to live morally. Should the killer continue to pick randomly?
That depends, if one wishes to live morally (or avoid immoral behavior) one should probably ensure one isn't enabling serial killers, should they not? And if they cannot guarantee one's behavior isn't enabling serial killers, then maybe one should cease that behavior. — Tzeentch
Can you guarantee that your waking up in the morning isn't enabling serial killers? Maybe someone has broken into your house with the intent to kill you but is hesitating. If you startle them by waking up, they will kill you and start their serial-killer career. If you don't, they'll come to their senses and become an upright member of society.
See the problem?
Assuming one wants to live morally, it's either:
1- One is obligated to pick the option least likely to harm. Meaning (by your system) that one must always pick inaction and must never pick action. But you already disagreed with this in the original Jeff and Sarah example (where Jeff doesn't rebel against pinching), where you argued that pinching Jeff is not wrong.
2- One is not obligated to pick the option least likely to harm. Meaning a benevolent serial killer who wants to live morally is justified in killing randomly. Despite the fact that the act he commits has only a 0.001% chance of being moral, he is not obligated to pick the 99.999% alternative, and so is justified in picking the very unlikely act. Even after the 99.999% alternative happens, he's still not obligated to change his behavior, as again, even if he recognizes the very low chance of success, he's not obligated to pick the less risky alternative.
Because it refers to something one isn't doing? — Tzeentch
Let's say there is an alternate world history in which "sserping" was defined first, and "pressing" was defined as "not sserping". Does sserping now become an action?
Pressing refers to not sserping. So are pressing and sserping both inactions?
No, she cannot. One cannot detect the non-existence of something — Tzeentch
Let's say I'm pressing a button. What's the "something" whose existence is detected?
I'm trying to understand what the "something" is that is missing and thus undetectable in sserping, but present and detectable in pressing. Or, simply, the difference between action and inaction.
I intend to help another person, but instead I end up killing them.
A just intention, but a harmful outcome. Clearly this act cannot be considered moral. — Tzeentch
Disagreed. That's precisely the point of disagreement. But right now I'm far more interested in the internal workings of your ethical system than in highlighting its differences from mine.
I intend to kill another person, but instead I end up helping them.
An unjust intention, but a helpful outcome. Clearly this act cannot be considered moral either. — Tzeentch
Agreed. Because the intent was to do an act that has a very low chance of helping.
Both intention and outcome have to be regarded to determine the morality of an act. — Tzeentch
You do understand this isn't the majority view or anything, right? There is a whole separate branch of ethics called deontology that doesn't take consequences into account at all. The idea that both matter is far from a settled conclusion. And I don't intend to debate it right now; I'm interested in your consequentialism specifically.