I acknowledge the objectivity there, but I don't think that it's necessarily right to call that "immoral". If I am one of those people, and I inadvertently act contrary to my aim of kicking the puppy, then I'm just being unreasonable. But if I have a principle which says that that behaviour is immoral, then sure, it would be immoral accordingly, but only relative to my principle, and only relative to my thoughts and feelings about its application. — S
From my meta-ethical position, morality exists only to serve existing human values, which is why, when a given act is detrimental to the relevant values in question, it makes some sense to refer to it as "immoral". A more technical way of putting it would be that some actions are more moral than others (or more immoral than others) because they serve or damage existing moral values to lesser or greater degrees. If an action leads to worse outcomes than abstaining from that action, it's not hard to conceive of it as a morally inferior action. However, I think this is largely a semantic difference rather than a meaningful meta-ethical one.
It wouldn't apply universally, even if I thought and felt that it should. If other people reject that principle, because they think and feel differently, then I can't demonstrate that they're objectively wrong, since our thoughts and feelings are inherently subjective, and there's no warrant for a transcendent standard to override one of us.
You can get some objective truth in moral subjectivism. That I have never denied. It is objectively true that I feel that kicking puppies is wrong, for example. But the moral subjectivist would be like, so what? — S
You're right, but once we have agreed on a basic moral framework (i.e., that morality is meant to be a cooperative strategy which serves our moral values), there's still quite a bit of room left for strong moral suasion; the subjectivity/relativity of our moral values is only as harmful to moral practice as the range and variability between them. Keeping in mind that morality is a strategy in service to human moral values, the moral agreements, acts, or principles which most effectively serve the values that are most common (and most highly placed in our various value-hierarchies) are statistically more useful as moral heuristics, and objectively more useful in specific situations where the relevant values are in fact shared. Where our primary moral values do in fact differ (but don't compete), we're left with a similar task: finding moral strategies which accommodate a diversity of human values more effectively.
Where we have mutually exclusive primary moral values (e.g., puppy kicking vs. no puppy kicking), the best we can do is challenge and attempt to influence each other's values. It might seem like a crapshoot, but since most people do share higher-order values (e.g., the desire to go on living), it is often possible to manipulate (with reason) lower-order values by appealing to higher-order ones. In reality, I think, our value-hierarchies are rapidly fluctuating and poorly considered, making them ripe targets for persuasion and elucidation, be it rational or manipulative.
Playing that game of moral suasion is sometimes an exercise in objective truth (e.g., should I vaccinate my child?), but it is very often an exercise in objective inductive reasoning: How do we know our moral values are internally consistent? How do we know our moral conduct comports with our desired moral outcomes? How do we negotiate an environment filled with agents with sometimes disparate and competing values (i.e., what is the extent of the mutually beneficial cooperative strategies that we can undertake)? If we tried to answer the question "what should we do?" scientifically (given starting values as brute facts), these are the broad questions we would seek to answer.
Ultimately, if a conflict of moral values cannot be negotiated with reason, then appeal to emotion. If it cannot be negotiated with emotion, then the remaining options seem to be forfeit, compromise, stalemate, or attack. Yes, people do sometimes go down fighting for their moral values, but in how many of those cases did emotion, or the absence of reason, play the major role? Values disparity might be a problem for the universality of our answers to specific moral situations, but it is not a significant problem for the practical utility of moral systems themselves, given how infrequently sound moral reasoning from well-ordered values actually necessitates violent conflict, or even runs into mutually exclusive values.
Are you basically just saying what @Banno said, namely that despite differences in meta-ethics, normative ethics matters? — S
I'm defining what normative ethics is from my meta-ethical standpoint. I'm also rebuking the "it's all just preference" line. In truth, our preferences are mostly aligned, and the majority of moral dilemmas we face pertain to figuring out how to maximize, or committing to maximizing, our nearly universally shared values in the first place. Our best moral theories are merely inductive and approximate models (of ideal strategies), but so are our best scientific theories (inductive and approximate models of observable phenomena). It might seem trivial to you to show persuasively that normative ethics matters (and that it requires objective reasoning), but in the midst of strong relativism bordering on nihilism, I don't think it's that trivial (not pointing fingers). One of the major sentiments that fuels moral absolutism is the knee-jerk fear people encounter when they consider that right and wrong might be in some way conditional, relative, or subjective (and therefore truthless/meaningless).
And then you go on to make some normative points, like that, the way you judge it, we shouldn't be greedy, and we should be considerate of others. — S
Any specific normative content I put forward was really only meant as a demonstration of reasoning objectively from starting values to moral conduct.
Maybe, like Banno, you judge that morality should be about everyone, about how "one" or "we all" should behave, and not particular, like how I should behave. Why should I care in this context, whether I agree or disagree? That does not seem to have any relevance, meta-ethically. It seems beside the point. — S
I took him to mean "we" as in "we the interested parties" (as opposed to everyone who ever lived). Meta-ethically, morality isn't just about what's best for the individual; it's about what's best for the individual in an environment filled with other individuals. Without at some point, in some way, considering the "we", the game of morality cannot begin; otherwise it's just competition.