ToothyMaw
I came up with something new, similar to two-level utilitarianism. Axioms are extracted from human nature and then used, via reasoning, to develop a spectrum of behaviors that people can judge to be good or bad for anyone in specific situations. The people vote on which behaviors in which situations are good or bad, the majority rules, and rules are thereby formed: rules that satisfy the majority of people's preferences with regard to how they want to act, and how they want other humans to act, morally, as evaluated at a certain point in time. The corresponding definition of morality for this system is: “a set of rules of the form ‘x is right to us’ or ‘y is wrong to us’ that can be used to judge concrete behavior; that represent what behavior, as theorized via reasoning from axioms extracted from human nature, the majority of humans considers good or bad for any person under specific circumstances; and from which any more specific rules are deduced when needed, rules that also reflect the preferences of the majority of humanity with respect to how they want to act, and want others to act, morally, as evaluated at a certain point in time and re-evaluated at interval z.”
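
To make the rule-formation step concrete, here is a minimal sketch of the majority vote over (behavior, situation) candidates. This is my own illustration under stated assumptions: the post specifies no data structures, so the Candidate type, the ballot format, the form_rules function, and the tie-handling choice are all hypothetical.

```python
from collections import Counter
from typing import Dict, List, Tuple

# A candidate pairs a behavior with a situation; candidates are assumed
# to have been generated beforehand by reasoning from the axioms.
Candidate = Tuple[str, str]  # (behavior, situation)

def form_rules(ballots: List[Dict[Candidate, bool]]) -> Dict[Candidate, str]:
    """Turn voters' judgments into rules by simple majority.

    Each ballot maps a candidate to True ("good") or False ("bad").
    A majority of True yields a rule "x is right to us"; a majority of
    False yields "y is wrong to us". A tie yields no rule, since neither
    judgment commands a majority (an arbitrary choice on my part).
    """
    tallies: Dict[Candidate, Counter] = {}
    for ballot in ballots:
        for candidate, judgment in ballot.items():
            tallies.setdefault(candidate, Counter())[judgment] += 1

    rules: Dict[Candidate, str] = {}
    for (behavior, situation), tally in tallies.items():
        if tally[True] > tally[False]:
            rules[(behavior, situation)] = f"{behavior} is right to us when {situation}"
        elif tally[False] > tally[True]:
            rules[(behavior, situation)] = f"{behavior} is wrong to us when {situation}"
    return rules

# Hypothetical usage: three voters judge one candidate.
ballots = [
    {("breaking a promise", "doing so avoids minor inconvenience"): False},
    {("breaking a promise", "doing so avoids minor inconvenience"): False},
    {("breaking a promise", "doing so avoids minor inconvenience"): True},
]
print(form_rules(ballots))
# The candidate maps to "breaking a promise is wrong to us when ...";
# the resulting rule set is then frozen until re-evaluation at interval z.
```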

One issue I had to overcome is that this moral theory is not grounded in reason alone. However, because the vote is taken over behaviors that are themselves generated by reasoning from axioms derived from human nature, the rules remain coherent with respect to the spectrum of human behaviors. Human nature supplies the assumptions needed for the reasoning and rule-finding process, and this holds regardless of how the people vote.

This theory assumes an essentialist view of human nature: there must be characteristics that define humanity, and these characteristics must be discoverable.

Additionally, this theory avoids some of the pitfalls of act and rule utilitarianism. For one, there are no difficult calculations to be made on the spot unless an unprecedented situation arises, in which case one can synthesize the existing rules to reach a conclusion about how to act (one way of reading this is sketched below); that could be difficult, but it is plausible. The theory also avoids collapsing into act utilitarianism despite its rule-utilitarian bent: because the rules encapsulate the preferences of the majority of humans at only an instant in time, the rules stay static even as humanity's opinions change. Thus, even if there is a discrepancy between what humanity currently wants and what the rules say, one should still abide by the moral rules determined by the voting process. Furthermore, the end is that the rules are followed, not that preference fulfillment is maximized for each person in each individual circumstance. Even if following a rule would lead to less satisfaction of preferences, the rule should be followed.
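
The post does not say how one would "synthesize the existing rules", so the sketch below shows just one hypothetical reading: score each existing rule's candidate for similarity to the unprecedented case and take the majority verdict among the k closest matches. The word-overlap similarity, the parameter k, and the bias toward "wrong" on ties are all assumptions of mine, not part of the theory.

```python
from typing import Dict, Tuple

# Existing rules, keyed as in the earlier sketch; verdicts are stored as
# booleans here for simplicity (True = "right to us", False = "wrong to us").
Rules = Dict[Tuple[str, str], bool]

def word_overlap(a: str, b: str) -> int:
    """Crude similarity: number of shared words (an assumed stand-in
    for whatever real judgment the synthesis would involve)."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def synthesize(rules: Rules, behavior: str, situation: str, k: int = 3) -> bool:
    """Judge an unprecedented case from the k most similar existing rules."""
    query = behavior + " " + situation
    nearest = sorted(
        rules.items(),
        key=lambda item: word_overlap(" ".join(item[0]), query),
        reverse=True,
    )[:k]
    votes = sum(1 if verdict else -1 for _, verdict in nearest)
    return votes > 0  # ties default to "wrong", an arbitrary choice

# Hypothetical usage: judge a novel case against three existing rules.
existing: Rules = {
    ("breaking a promise", "doing so avoids minor inconvenience"): False,
    ("breaking a promise", "doing so saves a life"): True,
    ("lying", "doing so avoids minor inconvenience"): False,
}
print(synthesize(existing, "breaking a promise", "doing so avoids embarrassment"))
```

Any real synthesis would presumably be done by a person reasoning from the rules rather than by string matching; the point is only that the existing rule set, not a fresh preference calculation, supplies the inputs.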

This theory also reconciles subjective ethics with non-arbitrariness: while people's preferences are subjective, the rules, once voted upon, are correct for anyone regardless of one's inclinations, motivations, or purpose. Anyone in those specific circumstances is obligated to follow the rule.

One place this theory fails is self-determination: one has laws foisted upon oneself that one opposes. But it seems to me one just has to deal with that; laws are necessary. Furthermore, if the theory is based upon consent, I don't see how anyone could complain.

Thanks for reading, and please don't hold back with the criticism.