So we observe a few serial killers working together to mass murder people. "Ah, look at that morality in action!" we would say as scientists. But as philosophers we would take a step back and say, "Huh, cooperation as morality in this situation doesn't make sense." — Philosophim
Living is what life does. Living is not an obligation of life because life has no moral obligation to live regardless of needs and preferences. I get to the conclusion of obligation by the fact that the processes to create life exist in the first place at all. The opposite of life and existence is death and nothingness. Life doesn't have to happen. But the mere fact it does leads me to believe that to proactively force the opposite is a violation. — Kaplan
"Objective knowledge" cannot be interpreted as a (physical) object whose attributes are thereby equally applicable to all co-existent minds in impartial manners. Hence, I so far can only interpret it as "impartial knowledge" regarding our shared intuition of about the good. — javra
In sum, it so far seems to me that science and philosophy can happily, satisfactorily, converge on the issue of morality only if both agree on what "good" (regardless of the language in which it is expressed) can and does signify, and what it applies to in all its conceivable instantiations. — javra
Science of morality investigators seek answers to questions about what ‘is’, such as “Why do cultural moral norms and our moral sense exist?” and “How can answers to this question help us achieve our goals?” — Mark S
As I've repeated already, I believe there is no reciprocity implied by the Golden Rule, and I think that this represents a gross misinterpretation on your part. — Metaphysician Undercover
Being “friendly” to people we have just met is a marker strategy for being a good cooperator.
— Mark S
That's a very unreliable principle. If I meet someone on the street who is unusually friendly toward me, I am very wary that the person is trying to take advantage of me in some way or another, because that is how the con works. — Metaphysician Undercover
I explained already why the Golden Rule is very clearly not a cooperation strategy. Cooperation requires a common end. The Golden Rule as commonly stated has no implications of any end. You simply misinterpret it to claim that it states that one should treat others in a particular way, with the end, or goal of getting treated that way back. And I already explained why that particular goal, which is inserted by you in your interpretation, is clearly not a part of the Golden Rule. — Metaphysician Undercover
If morality is about what goals “we imperatively (prescriptively) ought to do” (e.g. when there is a conflict between individual and collective goals), and morality cannot tell us “what our goals somehow ought to be”, then there is no science of morality.
If your assumptions leave moral goals to be set and chosen by individuals and not by scientific principles, in what sense are we not ending up in a form of moral relativism? — neomac
“Being nice to each other” is cooperation.
— Mark S
It seems we have two very distinct ideas of what constitutes cooperation. I know of no definition of cooperation other than to work together. And so it follows that people can be friendly toward each other without necessarily cooperating. — Metaphysician Undercover
I will argue the contrary, that fairness and equality moral norms are norms for solving cooperation problems.
— Mark S
You don't seem to be grasping the incompatibility between "cooperation" and "competition". We cooperate, help each other, as the means to an end. So cooperation requires an agreeable end, such that people will work together to achieve that goal. Without the agreeable goal, people can be nice to each other, and behave respectfully, but this cannot be called "cooperation", because they are simply being respectful of each other without cooperating (working together). On the other hand, competition between you and me means that we are both striving for the same goal, but the goal can only be achieved by one of us, exclusively. This rules out the possibility of cooperation. — Metaphysician Undercover
Following the Golden Rule, you would treat others fairly because you would like to be treated fairly.
— Mark S
I think you are misrepresenting the golden rule here. When it says "as you would have them do to you", this is spoken as an example of how you should treat others. In no way does the golden rule imply that you expect an equal, or fair, return on the goodness which you give. This is the meaning of Christian/Platonic love: to do good without the expectation of reciprocation. Therefore it is a significant misunderstanding to represent the golden rule as a principle of equality in this way, as if one only ought to do good in expectation of reciprocation. — Metaphysician Undercover
If the moral will of human beings, to do good, is dependent on having others do good, then everyone would be looking for bad behaviour from someone else, as an excuse to do something bad, and all of humanity would slip into evil at a very rapid rate, as one bad deed would incite many more. — Metaphysician Undercover
Descriptively moral behaviors are parts of cooperation strategies.
Universally moral behaviors are parts of cooperation strategies that do not exploit others.
— Mark S
To me the meaning of "descriptively" must be contrasted with "prescriptively" (or "normatively"), not with "universally". If you use "universality" as a condition for identifying rational moral norms then you are no longer descriptive but prescriptive. Alternatively, you can use "universality" to refer to cross-cultural descriptive moral norms and NOT to a condition of rationality. Conflating these two usages would be fallacious. — neomac
"Fairness" based rules for competition are derived from equality norms, rather than cooperation norms. And equality norms are fundamentally different from cooperation norms because there is no requirement for the intent to cooperate for there to be a desire for equality. That is to say, that when people compete, and there are rules established to ensure fairness of competition, that is the only required end, fair competition. And fairness is based in equality. — Metaphysician Undercover
How do you define "fair" in a competitive sport? Is it a matter of following the rules? How do we know if the rules are "fair"? Consider Mark S 's thread on the science of morality. There, morality is defined by cooperation. But competition is directly opposed to cooperation. So we have a big problem right off the bat. Competitive sport is fundamentally immoral according to the science of morality. How do you propose that we can make "fairness" a principle in any competitive sport, which by its very nature is immoral. — Metaphysician Undercover
That said, I'd think something like "evolved-in automated biasing of the neocortex by the limbic system" might be along the right lines, though it's fairly unwieldy. — wonderer1
Morality as Cooperation Strategies explains fast moral thinking, not slow moral thinking.
— Mark S
So you posit ad hoc distinctions in order to circumvent criticisms of your hypothesis. Until now your theory has been about the whole of morality, but of a sudden it is restricted to gut reactions rather than considered decisions...
...
What I have maintained is the obvious point, that anthropological descriptions, in themselves, do not tell us what we ought to do. — Banno
Oh yeah, I’m a super-duper moral relativist. Which doesn’t mean I don’t believe that there is some sort of progress in moral behavior. What it means is that I don’t think that moral progress should be thought of in terms of the yardstick of conformity to any universal norms, whether religious, social or biological in origin. “Women must be submissive to men” and “Homosexuality is evil” are immoral to the same extent as Newtonian physics, Lamarckian biology or Skinnerian psychology are considered inadequate explanations of the empirical phenomena they attempt to organize, in comparison with more recent theories. — Joshs
The problem with using "strategy" in this context is that it suggests that moralistic fast thinking on the part of humans is part of someone's conscious plan, when it is actually a result of unthinking evolutionary processes. — wonderer1
You don’t see the link between your wrapping this narrative in the cloak of science and religious norms of conduct?
Failing to understand why people’s attempts to get along with others fall short of your standards can lead you in one of two directions. You can either experiment with your construction of the puzzling and seemingly ‘immoral’ behavior of a group or individual until you come up with a more effective way to understand why it represented the best moral thinking for them at the time, or you can blame them for your inability to make sense of their actions, slap a label of immorality on it and try to knock some sense into them. — Joshs
I guess I have a problem with your use of "strategy".
Whose strategy is it? — wonderer1
Inasmuch as evolution might be said to have a 'purpose', that purpose is to produce individuals with a high probability of success in passing on their genes. When evolution is occurring in a species which gets a lot of benefit from social cooperation, we can expect evolutionary changes that take advantage of that environmental niche of living as a member of a social species. However, it isn't realistic to think cooperation is the 'purpose' of that evolution. A relatively high level of cooperation is just a side effect of evolution in such an environmental niche. — wonderer1
Morality as Cooperation Strategies explains fast moral thinking, not slow moral thinking.
— Mark S
I think your sense of what counts as an explanation of what is a bit unrealistic. I think the adaptiveness of fast moral thinking (considered within an evolutionary framework) is more accurate as an explanation for human moral thinking. — wonderer1
Claiming science is, therefore, useless would be silly.
— Mark S
Of course, I've done no such thing. What I have done is simply point to the is/ought distinction, and warned against taking what humans have done as evidence for what they ought do. — Banno
Pushing the large man off the bridge will reduce trust between people (if you stand next to someone they may kill you)
— Mark S
Or will it increase trust, in that those who comment on the event after the fact will see pushing the large man off the bridge as showing that you can be relied on to make difficult decisions, and as an exemplar of how one ought act?
Perhaps things are not so clear as you suppose.
Foot's Trolley problem was conceived as a way of showing some of the limitations of consequentialism. The trolley was to be contrasted with the case of killing a healthy person in order to harvest their organs to save five terminally ill patients. Same consequence, differing intuitions. (I see Rogue is aware of this).
Cooperation seems of little use here, in line with ↪RogueAI's strategy of asking for explicit and practical examples of the use of a cooperation approach, in order to test its utility. — Banno
Does this lead us into a space where there is nothing intrinsically good or bad and almost anything might be allowable under the right circumstances? — Tom Storm
Do you think this is a controversial statement? I see where you are coming from but many people who do not share your values could find this problematic. — Tom Storm
I've found @Banno helpful on many subjects. He certainly reminds me that philosophy is not easy and to be wary of easy answers. He alerted me to virtue ethics when I first arrived here. Philosophy seems to be about continually refining the questions we are asking, which may matter as much as, if not more than, the putative answers. — Tom Storm
I wrote a paper on that once, many years ago, although the case I was looking at was Trolley Car vs abducting a person to harvest their organs and save five people. I think in the trolley car cases, we see that as a rare one-off, so we sacrifice the one, but in the other trolley-car-like cases where we get our hands dirty (pushing a person, abducting a person), we can see how society could head down a scary path where it starts to actively look for ways to kill people for "the greater good". — RogueAI
There is no reason to expect them to answer all moral questions that we can think of.
— Mark S
It's going to have to say something about Trolley Car. — RogueAI
The most significant moral issues concern exploitation, theft, violence, rape and murder, and those things are almost universally condemned. Other issues such as age of sexual consent, acceptance of homosexuality and so on seem to get worked out sensibly in the absence of dogmatic religious interference.
The question then devolves to "ought we want to live happy lives?", and that question just seems silly since happiness is universally preferred over unhappiness. — Janus
Would travelling back in time (assume it's possible) to kill baby Hitler be the moral thing to do? What about using data the Nazis collected experimenting on people? What about diverting a runaway trolley car full of children by pushing one child in front of it? What about aborting a baby one minute before birth for non-health reasons? — RogueAI
Does that satisfy you, or does it seem to you that it is just repackaging traditional moralism in new garb, as if there is such a thing as “universal morality”, or that claiming that evolution wires us to be cooperative doesn’t just push back the question posed by social norms into the lap of biology?
For one thing, it passes the buck on the question of why we desire to cooperate with each other. It’s because “Evolution told us to”. — Joshs
Final question, and forgive me if this seems obtuse - how do you discern between good and bad cooperation? — Tom Storm
For the non-philosopher, what do you recommend as a reasonable foundation for morality? — Tom Storm
Not a question that can have a back-of-an-envelope answer. — Banno
↪Banno ↪Mark S I'm confused by this discussion. And Mark, I can't seem to understand what you are arguing for - which may be my fault.
Mark does your approach tell us what we ought to do by identifying universal moral behaviors?
What are universal moral behaviors - are they the same as oughts?
What I have said is that:
• Descriptively moral behaviors are parts of cooperation strategies
• Universally moral behaviors are parts of cooperation strategies that do not exploit others.
— Mark S
These sentences confuse me - admittedly I am not a philosopher.
What does 'are parts of cooperation strategies' mean? Which parts? What constitutes the rest of these parts?
Is a universally moral behavior an ought?
What qualifies as a cooperation strategy?
So sure, cooperation, games theory, and anthropology might well be a useful part of a moral perspective; but they are not the whole.
— Banno
Ah, so your account, Mark S, does not tell us what we should do?
...and yet "...science can provide useful instrumental (conditional) oughts for achieving shared goals"? Despite nine threads on the same topic, perhaps your account is not as clearly expressed as you think? — Banno
So sure, cooperation, games theory, and anthropology might well be a useful part of a moral perspective; but they are not the whole. — Banno
Here are the two problems with the view espoused by Mark S.
1. Regardless of how sophisticated it might be, no description of what we do can imply what we should do. — Banno
2. That an act is cooperative is not sufficient to ensure that it is moral. Folk can cooperate to act immorally. — Banno
↪Mark S So, oddly, you are now saying that it is not the case that we ought cooperate?
I'm not too keen on the term, but that looks rather motte-and-bailey. Somehow this tells us
about right and wrong
— Mark S
without telling us what to do? You commence your argument in the bailey of right and wrong, but when challenged retreat to the motte of supposed "objective science". — Banno
Apparently Hilary Putnam also makes this ‘error’. Putnam makes the argument that if the basis of our valuative, ethical judgements is an evolutionary adaptation shared by other animals, then it is as though we are computers programmed by a fool (selection pressure) operating subject to the constraints imposed by a moron (nature). — Joshs
what you call the "bottom-up" is an example of the naturalistic fallacy in which it is presumed that what we ought do is just what we have previously done. — Banno