In another possible world people play Tetris all day. They are otherwise physically and psychologically healthy people, but they make the decision to play Tetris, in a room by themselves, for 10 hours per day. Now, this decision doesn't seem to harm their mind or body, nor the minds or bodies of anyone else; however, making the decision to play Tetris all day doesn't seem like the sort of decision we would normally categorize as "moral" either. But by the lights of your own theory, we would have to do that. How would you account for that? — Wolfman
1. A technical impossibility: human affairs are not predictable. You cannot objectively predict effects from causes as in physics. If that were the case, it would be awful. Imagine such predictable tools in Hitler's hands. Human slavery would be warranted.
2. There is no logical contradiction in preferring the destruction of the whole world to my having a toothache. That is to say, you cannot deduce "ought" from "is". Unless, that is, you scientifically establish that the lesser good of the greater number is preferable to the greater good of the smaller number. And with what yardstick do you measure the greater or lesser good? The utilitarians have been trying to solve this question for centuries, without success so far.
That is why I am afraid that in ethics we will only ever find approximate answers that will convince more-or-less good people. — David Mo
My answer is: you don't; you calculate the probability of a good choice/act, and that calculation is what is morally good — Christoffer
Let me get this straight. The method that one uses to arrive at a moral decision is what morality is about, and not the moral decision itself, for reasons I can only guess have to do with the lack of a good moral theory.
Wow! That's news to me, although such a point of view resembles virtue ethics a lot - Aristotle, if virtue ethics is his handiwork, seems to have claimed that the highest good lies in being rational - the method, rationality, is more important than what is achieved through it. That said, if one is rational, a consequence of that would be making the right decision, whether moral or otherwise, no? Unless of course morality has nothing to do with rationality, which would cast doubt on your claims. How would you make the case that rationality can be applied to morality? Is being moral rational? I believe the idea of the selfish gene, which subsumes, quite literally, everything about us, points in a different direction. — TheMadFool
The foundation for the method isn't about virtues, but about how we define harm and well-being. Virtue is more about characteristics, and it's a very loose, slippery form of ethics that I'm not so sure works very well. — Christoffer
How does this theory escape some of the traditional criticisms leveled at utilitarianism? Imagine a world where 95% of the population believes slavery is a good thing. By enslaving the 5% minority they are able to develop their civilization to new heights and usher in a period of prosperity that has lasted for several hundred years. — Wolfman
How do they come to the conclusion that enslaving the 5% follows the framework of avoiding harm to humanity? It harms 5% of humanity, and it might cause further harm through the consequences of slavery, in the form of civil wars in later years. The justification for slavery falls flat under the proposed method of thinking. — Christoffer
So how do you define well-being, Christoffer? And how does well-being compute, if at all, into your idea? TMF is quite right to point out some similarities between what you are proposing and virtue ethics, but I'm trying to see you flesh out your position more and take it to its logical conclusion. — Wolfman
I think your defense is one step removed from where it needs to take place. It doesn't matter how their way of life came to be. The point is that it's already happening, and it's working for them now. On what grounds do you tell them to stop? — Wolfman
I tell them to stop since they are harming 5% of humanity and do not have a rational argument built upon the foundation of not harming 5% of humanity. If they don't agree to that point, they aren't morally good; I am. — Christoffer
But they do have a rational argument. Their society is experiencing a boom in industry and commerce, health and life expectancy, more aggregate happiness, and so on and so forth. How is that not a rational argument? — Wolfman
Let's charitably grant that their original decision to adopt slavery was suboptimal from a mathematical standpoint. Maybe there was only a 40% chance of success and a 60% chance of failure. But they went on with adopting slavery anyway. Against the odds, slavery turned out to work great for them. So while you might say their original plan was suboptimal, nothing in your theory says their decision to continue their way of life is immoral/suboptimal, because it has been found to work for them, and it has withstood the test of time for the last several hundred years. — Wolfman
Here it seems your theory cannot address such a notion because it is entirely explicated from the perspective one takes prior to making a moral decision, and cannot make sense of the intuitively repugnant consequences that follow as a result of following through on decisions that turned out to have good odds after all. — Wolfman
By the lights of your own theory, nothing says slavery in this case is immoral. — Wolfman
I see your point about virtue ethics, but that has more to do with replicating those who have virtue in order to be good, not with using a method of thinking and reasoning in order to be good. The foundation for the method isn't about virtues, but about how we define harm and well-being. Virtue is more about characteristics, and it's a very loose, slippery form of ethics that I'm not so sure works very well.
And using the rational method of thinking still needs to be combined with the foundation of well-being and harm; otherwise, you could rationally argue for very immoral things. It needs to have a framework to be a method of good morals.
My argument focuses on this specific form of thinking as a defined morally good way to live. Not by replicating vague virtues or just being rational without a framework around it. — Christoffer
As far as I can tell, and I'm sure you know this like the back of your hand or inside and out (take your pick), every extant moral theory is flawed in some way or other, making each one hopelessly inadequate as a fully dependable compass when navigating the moral landscape. Given that our moral compass is defective, what course of action do you recommend? Not every moral problem we face can be solved by the simple application of a moral rule, for there is no moral theory that covers all moral problems. Given this predicament, it isn't complete nonsense to suggest that when faced with moral problems we should do what a rational man would do, and this is virtue ethics. I think Aristotle had his suspicions about moral theories - none seem to work perfectly. — TheMadFool
Interesting topic. I notice the issue of practicality has been raised. I wonder how people feel about a hypothetical global human survey which somehow qualifies what the majority of humans take to be moral? Could this data form a legitimate basis for our opinions? If so, this would be a scientific basis. — Zophie
Because when framing the argument through my moral theory, they are acting immorally and have ignored other ways to prosper that don't require 5% of humanity to be slaves. They have not respected the foundation of well-being and harm, and they haven't done any unbiased rational thinking to arrive at their conclusion; I listed a few such objections in the earlier post. — Christoffer
Of course. Science is about discovering what is the case, not what should be the case. Obviously it's not perfect. But it's at least empirical. — Zophie
To my mind law is about as certain as ethics can get, so maybe you'll accept a legal parallel in the notion of common law, where standards are slightly more malleable and descriptive in pursuit of what I'll tentatively call "the least unusual and most popular".* — Zophie
The hypothetical scientific survey I proposed, which gives everyone on the planet some input, would follow a similar intention in order to establish a normative notion of universal morality, or as I would prefer to call it, kindness. Science doesn't do prescriptive knowledge; that's chiefly the job of philosophy. — Zophie
As far as I can tell you are just arbitrarily saying we can't hurt the minority, but this conclusion doesn't follow from any of the principles you supplied in the OP. Additionally, you sprinkle the term "well-being" into your responses in some vague fashion, as if that will solve anything. You make no mention at all of well-being in your OP, by the way, and that was supposedly where you were defining your moral terms :roll: You have a ways to go before your half-baked theory makes any sense. — Wolfman
Yeah, norms change which is why surveys may be repeated. Ideally maybe like once a year. Until then I'm afraid the concept of "morality" is likely to remain an intransitive, incommensurable spectre. — Zophie
These are the basics I try to build past, because moral theories focus on how to act more than on how to figure out how to act. I try to take a step back in the process, proposing that morality comes a step before what moral theories usually aim at.
Take the trolley problem, for example: in terms of utilitarianism you need to pull the lever. The theory demands this action. The method I propose does not say what action to take; it's a method for finding out the action. Using the method to find out which action to take is the morally good part. If you choose something based on this method, you have already acted in a morally good way before pulling or not pulling the lever. It doesn't mean the action is 50/50 good or bad; it means you used a method to calculate the probability of a good choice to the best of your ability, and that way of thinking/reasoning is what is morally good. And the method can't be corrupted to your gain or will either; it respects you and the group (humanity), so you can't abuse it, as in the example above with slavery for the greater good.
So what I mean is that, since we can accept that all moral theories have flaws, that moral landscapes shift through time, and that it's impossible to objectively give people answers on what actions to take in moral dilemmas, we can only propose a method to be used dynamically for each moral dilemma. If the method always leads to a probability of good choices, it is a moral obligation to use such a method in order to act in a morally good way. — Christoffer
That we cannot find out which moral acts are good or bad. — Christoffer
I would argue that it's a form of priority. — Christoffer
We can only propose a method to be used dynamically for each moral dilemma. If the method always leads to a probability of good choices, it is a moral obligation to use such a method in order to act in a morally good way — Christoffer
So much for the definition. In practically any culture you will be considered a good person if you behave according to these conditions. — David Mo
You're partly right. If moral rules involve two different interests, mine and others', a major problem is proportion. — David Mo
I do not believe that there is a scientific yardstick for these uncertainties. Rational debate on them is advisable; scientific solutions are not possible. If you have this yardstick, I would like to know it. It would alleviate many of my daily concerns. — David Mo
And we have not yet entered into a particularly vexing case: what do we do with the cynic who refuses to follow any moral standards? Phew. — David Mo
I agree. There's no magic moral solution. What is moral are the conditions that make a moral choice possible. What are those conditions? — David Mo
Not sure that you fully understand what I argue for in this thread. — Christoffer
I haven't proposed any moral standards. I've proposed a theory for a moral method of calculating moral acts. — Christoffer
I usually present ideas of my own. — David Mo
I'll add something else about your "method" when I have time. — David Mo
Objection: You cannot qualify an act as moral if you do not have a concept of what is moral and what is not. You cannot put one moral act prior to others (the human community over personal interest, for example) if you do not have a criterion of priority or hierarchy of some over others in the form of a rule (choose x before y). You cannot claim to have an objective method for deciding which act is moral and which takes priority if you have not defined the objective validity of those criteria. And this implies a universal or a priori rule, as Kant would say. — David Mo
Why not start your own thread about morality? — Christoffer