because Chris doesn't even understand half of the criticisms being put in front of him. His responses clearly demonstrate that. — Wolfman
My objections, therefore, still stand. — David Mo
For the above to make sense, you will have to specify the concept of well-being, the precise set of rules you are proposing, and why the well-being you have defined is the basis of morality. You will also have to explain how you evaluate the different concepts of welfare that people hold. Etc. — David Mo
As long as you don't do all this, your proposal remains undefined and doesn't seem to lead anywhere. If you try it you will find all the difficulties it entails. You will realize that these difficulties have already been dealt with in moral philosophy many times without finding a solution that satisfies everyone. — David Mo
Therefore, talking about things like "scientific" or "strictly" does not have much of a future in the field of ethics. With apologies to Sam Harris, Dawkins, de Waal and others who, like you, seem to be excited by this possibility. — David Mo
I've been leaning towards free will NOT existing, after some deep thought on the subject. And to define 'free will' quickly, I would say it is "the ability to have acted differently". I would argue that there are three things that determine your actions: beliefs, desires (or wants), and mood, none of which are your choice. Can you choose to believe in magical leprechauns? Can you choose to desire homosexuality over heterosexuality? Can you choose to be happy instead of sad? — chatterbears
You wrote this yourself in your opening remarks. It corresponds exactly to the objections I made to you. I think your attempts to avoid those objections have made your ideas more confused, rather than more precise. — David Mo
-To know which acts are better than others you need to know what makes a good act and what makes a bad act. In other words, what you mean by "good" in a moral sense. — David Mo
-You have not given a single observable and measurable characteristic that allows you to decide that an act is good. — David Mo
-If you want to evaluate which acts are better than others in a scientific way — David Mo
I don't look at this as a matter of trust. I do business with a lot of different people, many of whom I don't particularly trust; the question of whether I trust them or not just doesn't come up in my mind. The situation is more like one of need. I need the service they offer, so I do business with them without thinking about whether or not I ought to trust them. You, and unenlightened, might argue that the fact that I choose to do business with them implies that I trust them. I don't think that way, and I know that I do business with a few whom I particularly don't trust. I just need to be more wary of these people. — Metaphysician Undercover
I think you have hit upon the stumbling block for many here. This is the naivety of trust, that it does not occur to one to do otherwise. The veteran of Afghanistan who has a panic attack whenever he sees a curtain twitch has lost his trust in the benignity of strangers. To those of us who have not experienced the constant danger of snipers, it seems a bit mad - we call it PTSD. Why would you think a moving curtain is dangerous? — unenlightened
So I would look at the Google issue more as a question of need. If they offer a service which is needed, then we use it, whether or not we trust them. But doing business with someone whom you do not particularly trust means that you need to be wary. We could assume that, just like anyone else we do business with, the company would want to give us honest service to maintain its reputation, but such assumptions are what leave us vulnerable. — Metaphysician Undercover
1. Subject of the thread you proposed: whether a scientific method ("scientific mind") can objectively establish which acts are morally better ("priority"). — David Mo
2. To know which acts are better than others you need to know what makes a good act and what makes a bad act. — David Mo
3. If that method you propose is scientific and objective, it will be based on a set of observable and quantifiable "good" properties. — David Mo
A typical case in moral philosophy is the conflict between the lesser good for the greater number and the greater good for the smaller number. — David Mo
Google is just a search engine that provides links to trustworthy or untrustworthy information. The question is not so much whether you should trust Google, but whether you should trust the sites that Google provides as a result of your search. Do you trust your own site-searching skills, and use of keywords, to find the right information you are looking for? — Harry Hindu
Yes, just like the milk seller depends on trust. Government, business, everyone in a society depends on trust for every interaction. And if we do not trust Google, do we trust the independent body supervising them?
I propose that the sickness of the age is that blows to trust have proliferated and they are indeed hard to recover from. But we cannot function without trust, and we cannot function without a search engine. I don't think there is another answer. Trust comes from honour, and so without honour we die. Thus the unreality of morality is seen to be somewhat exaggerated. — unenlightened
This is not really true. A company may work hard to gain the trust of customers, but once they receive it they have the customers by the balls. And since the company's priority is always its financial well-being there is no good reason why the company would not abuse that trust. — Metaphysician Undercover
It's important not to become naive and comfortable in their care; always question them, always question everyone. By constantly challenging and reviewing them we test their handling of our trust, and they will do anything to keep that trust. Mishandling trust is such a bad business strategy that the risk of it gives us enough confidence for the life we live. But always question them; otherwise they will find loopholes. — Christoffer
I usually present ideas of my own. — David Mo
I'll add something else about your "method" when I have time. — David Mo
Objection: You cannot qualify an act as moral if you do not have a concept of what is moral and what is not. You cannot give one moral act priority over others (the human community over personal interest, for example) if you do not have a criterion of priority or hierarchy in the form of a rule (choose x before y). You cannot claim to have an objective method for deciding which act is moral and which takes priority if you have not defined the objective validity of those criteria. And this implies a universal or a priori rule, as Kant would say. — David Mo
But do you trust Google? — unenlightened
If you think so, then what is your answer? Not having democratic elections, or what? — ssu
So much for the definition. In practically any culture you will be considered a good person if you behave according to these conditions. — David Mo
You're partly right. If moral rules imply two different interests, mine and others', a major problem is proportion. — David Mo
I do not believe that there is a scientific yardstick for these uncertainties. Rational debate on them is advisable; scientific solutions are not possible. If you have this yardstick, I would like to know it. It would alleviate many of my daily concerns. — David Mo
And we have not yet entered into a particularly vexing case: what do we do with the cynic who refuses to follow any moral standards? Phew. — David Mo
I agree. There's no magic moral solution. What is moral are the conditions that make a moral choice possible. What are those conditions? — David Mo
Yeah, norms change which is why surveys may be repeated. Ideally maybe like once a year. Until then I'm afraid the concept of "morality" is likely to remain an intransitive, incommensurable spectre. — Zophie
As far as I can tell you are just arbitrarily saying we can't hurt the minority, but this conclusion doesn't follow from any of the principles you supplied in the OP. Additionally, you sprinkle the term "well-being" into your responses in some vague fashion, as if that will solve anything. You make no mention at all of well-being in your OP, by the way, and that was supposedly where you were defining your moral terms :roll: You have a ways to go before your half-baked theory makes any sense. — Wolfman
Of course. Science is about discovering what is the case, not what should be the case. Obviously it's not perfect. But it's at least empirical. — Zophie
To my mind law is about as certain as ethics can get, so maybe you'll accept a legal parallel in the notion of common law, where standards are slightly more malleable and descriptive in pursuit of what I'll tentatively call "the least unusual and most popular".* — Zophie
The hypothetical scientific survey I proposed, which gives everyone on the planet some input, would follow a similar intention in order to establish a normative notion of universal morality, or as I would prefer to call it, kindness. Science doesn't do prescriptive knowledge; that's chiefly the job of philosophy. — Zophie
Interesting topic. I notice the issue of practicality has been raised. I wonder how people feel about a hypothetical global human survey which somehow qualifies what the majority of humans take to be moral? Could this data form a legitimate basis for our opinions? If so, this would be a scientific basis. — Zophie
How long will people believe that utterly stupid line? Trump hasn't shaken up the system. Not a bit. On the contrary, corruption flourishes extremely well under an inept and defunct administration. All he has been able to do is that tax cut for the rich. — ssu
As far as I can tell, and I'm sure you know this like the back of your hand or inside and out (take your pick), every extant moral theory is flawed in some way or other, making them hopelessly inadequate as a fully dependable compass when navigating the moral landscape. Given that our moral compass is defective, what course of action do you recommend? Not every moral problem we face can be solved by the simple application of a moral rule, for there is no moral theory that covers all moral problems. Given this predicament, it isn't complete nonsense to suggest that, when faced with moral problems, we should do what a rational man would do, and this is virtue ethics. I think Aristotle had his suspicions about moral theories - none seem to work perfectly. — TheMadFool
But they do have a rational argument. Their society is experiencing a boom in industry and commerce, health and life expectancy, more aggregate happiness, and so on and so forth. How is that not a rational argument? — Wolfman
Let's charitably grant that their original decision to adopt slavery was initially suboptimal from a mathematical standpoint. Maybe there was only a 40% chance of success and 60% chance of failure. But they went on with adopting slavery anyway. Against the odds, slavery turned out to work great for them. So while you might say their original plan was suboptimal, nothing in your theory says their decision to continue their way of life is immoral/suboptimal, because it has been found to work for them, and it has withstood the test of time for the last several hundred years. — Wolfman
Here it seems your theory cannot address such a notion, because it is entirely explicated from the perspective one takes prior to making a moral decision, and cannot make sense of the intuitively repugnant consequences that follow from following through on decisions that turned out to have good odds after all. — Wolfman
By the lights of your own theory, nothing says slavery in this case is immoral. — Wolfman
I think your defense is one step removed from where it needs to take place. It doesn't matter how their way of life came to be. The point is that it's already happening, and it's working for them now. On what grounds do you tell them to stop? — Wolfman
So how do you define well-being, Christoffer? And how does well-being compute, if at all, into your idea? TMF is quite right to point out some similarities between what you are proposing and virtue ethics, but I'm trying to see you flesh out your position more and take it to its logical conclusion. — Wolfman
How does this theory escape some of the traditional criticisms leveled at utilitarianism? Imagine a world where 95% of the population believes slavery is a good thing. By enslaving the 5% minority they are able to develop their civilization to new heights and usher in a period of prosperity that has lasted for the last several hundred years. — Wolfman
Let me get this straight. The method that one uses to arrive at a moral decision is what morality is about, and not the moral decision itself, for reasons I can only guess have to do with the lack of a good moral theory.
Wow! That's news to me, although such a point of view resembles virtue ethics a lot - Aristotle, if virtue ethics is his handiwork, seems to have claimed that the highest good lies in being rational - the method, rationality, is more important than what is achieved through it. That said, if one is rational, a consequence of that would be making the right decision, whether moral or otherwise, no? Unless of course morality has nothing to do with rationality, which would cast doubt on your claims. How would you make the case that rationality can be applied to morality? Is being moral rational? I believe the idea of the selfish gene, which subsumes, quite literally, everything about us, points in a different direction. — TheMadFool
1. A technical impossibility: human affairs are not predictable. You cannot objectively predict effects from causes as in physics. If that were the case it would be awful. Imagine such predictive tools in Hitler's hands. Human slavery would be guaranteed.
2. There is no logical contradiction in preferring the destruction of the whole world to my having a toothache. That is to say, you cannot deduce "ought" from "is". Unless, that is, you scientifically establish that the lesser good of the greater number is preferable to the greater good of the smaller number. And with what yardstick do you measure the greater or lesser good? The utilitarians have been trying to solve this question for centuries, without success so far.
That is why I am afraid that in ethics we will always find approximate answers that convince good people to a greater or lesser extent. — David Mo
In another possible world people play Tetris all day. They are otherwise physically and psychologically healthy people, but they make the decision to play Tetris, in a room by themselves, for 10 hours per day. Now, this decision doesn't seem to harm their mind or body, nor the minds or bodies of anyone else; however, making the decision to play Tetris all day doesn't seem like the sort of decision we would normally categorize as "moral" either. But by the lights of your own theory, we would have to do that. How would you account for that? — Wolfman
↪Christoffer The scientific method consists of the following:
1. collecting unbiased data
2. analyzing the data objectively to look for patterns
3. formulating a hypothesis to explain observed patterns
How exactly do these 3 steps relate to ethics?
What would qualify as unbiased data in ethics? Knowing how people will think/act given a set of ethical situations.
What is meant by objective analysis of data and what constitutes a pattern in the ethical domain? Being logical should make us objective enough. Patterns will most likely appear in the form of tendencies in people's thoughts/actions - certain thoughts/actions will be preferred over others. What if there are no discernible patterns in the data?
What does it mean to formulate a hypothesis that explains observed patterns? The patterns we see in the ethical behavior of people may point to which, if any, moral theory people subscribe to - are people in general consequentialists? Do they adhere to deontology? Both? Neither? Virtue ethicists? All?
Suppose we discover people are generally consequentialists; can the scientific method prove that consequentialism is the correct moral theory? The bottom line is that the scientific method applied to moral theory only explains people's behavior - are they consequentialists? Do they practice deontological ethics? And so forth.
In light of this knowledge (moral behavioral patterns) we may be able to come up with an explanation of why people prefer or don't prefer certain moral theories, but the explanation needn't reveal to us which moral theory is the correct one; for instance, people could be consequentialists in general because it's more convenient, or because they were indoctrinated by society or religion to be so, and not necessarily because consequentialism is the one true moral theory.
All in all, the scientific method, as it really is, is of little help in proving which moral theory is correct: the scientific method applied to morality may not lead to moral discoveries from which infallible moral laws can be extracted for practical use. Ergo, the one who applies the scientific method to morality is no better at making moral decisions than one who is scientifically illiterate.
That said, I can understand why you think this way. Science is the poster boy of rationality and we're so mesmerized by the dazzling achievements it has made that we overlook the difference between science and rationality. In my humble opinion, science is just a subset of rationality and while we must be rational about everything, we needn't be scientific about everything. In my opinion then, what you really should be saying is that being rational increases the chances of making good decisions, including moral ones and not that being scientific does so. — TheMadFool
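To make the descriptive point above concrete, here is a minimal, hypothetical sketch (in Python) of the pattern-finding step as TheMadFool describes it applied to ethics: it only tallies which moral framework a set of respondents lean toward. The response data and category labels are invented for illustration, and nothing in the output says which theory is correct; it merely reports a behavioral tendency.

```python
from collections import Counter

# Hypothetical, invented survey data: each entry is one respondent's dominant
# style of moral reasoning, as coded by some assumed prior classification step.
responses = [
    "consequentialist", "deontologist", "consequentialist",
    "virtue_ethicist", "consequentialist", "deontologist",
]

def dominant_pattern(data):
    """Return the most common category and its share of all responses."""
    counts = Counter(data)
    category, count = counts.most_common(1)[0]
    return category, count / len(data)

category, share = dominant_pattern(responses)
# A descriptive fact about preferences, not a prescriptive conclusion.
print(f"Most common pattern: {category} ({share:.0%} of respondents)")
```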
In a sense I wasn't questioning whether they are morally good, but whether they have all the necessary kinds of skills and knowledge needed to make decisions. — Coben
My concern here is that the scientific mind tends to ignore things that are hard to track and measure. For example, let's take a societal issue like drug testing in the workplace. Now a scientist can readily deal with the potential negative issue of false positives. This is fairly easy to measure. But the very hard-to-track effects of giving employers the right to demand urine from their employees, or teachers and administrators the right to demand it from students, may also be very significant over the long term, in subtle but important ways, and these are often, in my experience, ignored by the scientific mind. And I am thinking of that type of mind in general, not just scientists, including non-scientists I encounter in forums like this. A lot of the less easily measured effects tend to be minimized or ignored.
A full range mind uses a number of heuristics, epistemologies and methods. Often scientific minds tend to not notice how they also use intuition for example. But it is true they do try to dampen this set of skills. And this means that they go against the development of the most advanced minds in nature, human minds, which have developed, in part because we are social mammals, to use a diverse set of heuristics and approaches. In my experience the scientific minds tend to dismiss a lot of things that are nevertheless very important and have trouble recognizing their own paradigmatic biases.
This of course is extremely hard to prove. But it is what I meant.
A scientific mind, a good one, is good at science. Deciding how people should interact, say, or how countries should be run, or how children should be raised requires, to my mind, at the very least, skills that are not related to performing empirical research, designing test protocols, isolating factors, coming up with promising lines of research and being extremely well organized when you want to be. Those are great qualities, but I think good morals or patterns of relations need a bunch of other skills, ones that the scientist's set of skills can even dampen. Though of course science can contribute a lot to generating knowledge for all minds to weigh when deciding. And above I did describe the scientific mind as if it were working as a scientist. But that is what a scientific mind is aimed at, even if it is working elsewhere, since that is what a scientific mind is meant to be good at. — Coben
And a handsome work it is, too! But I wonder: many of the legs holding up your argument are either themselves unsupported claims or categorical in tone when it seems they ought to be conditional. In terms of your conclusions it may not matter much. The question that resounds within, however, is of how much relative value a "scientific mind" is with respect to the enterprise of moral thinking. It's either of no part, some part, or the whole enchilada. If it's not the whole thing, then what are the other parts? — tim wood
I am not sure why you're equating benefit and value in P1. Both "beneficial" and "valuable" are value judgements, and there doesn't seem to be any obvious reason to use one term or the other.
Furthermore, what you mean by "humanity" remains vague. Is humanity the same as "all current humans"? When you write "valuable to humans" do you mean all humans or just some?
In P2, it's questionable to define a benefit as the mere absence of harm, but it's not a logic problem. What is a logic problem is that P1 talks about benefits to humanity, and P2 about benefits to a single human. That gap is never bridged. It shows in your conclusion, which just makes one broad sweep across humans and humanity.
P3 is of course extremely controversial, since it presupposes a specific subset of utilitarianism. That significantly limits the appeal of your argument. — Echarmion
Again, I am confused by your usage of valuable and beneficial here. Since P1 already talks about what's valuable, it doesn't combine with P2, which defines value in terms of benefit. So the second half of P2 is redundant. — Echarmion
I have to nitpick here: the scientific method works entirely based on evidence within human perception. It doesn't tell us anything about what's outside of it. The objects science deals with are the objects of perception. What the scientific method does is eliminate individual bias, which I assume is what you meant. — Echarmion
That's not a syllogism. Your conclusion is simply restating P2, so you can omit this entire segment in favor of just defining the term "scientific mind". — Echarmion
While I understand what you want to say here, the premises just don't fit together well. For example, P1 is talking only about what is less valuable and has a high probability of no benefit. It's all negative. Yet the conclusion talks about what has a high probability of a benefit, i.e. it talks about a positive. And P4 really doesn't add anything that isn't already stated by P3. — Echarmion
Your conclusion is that knowing the facts is important to making moral judgements. That is certainly true. Unfortunately, it doesn't help much to know this if you are faced with a given moral choice.
What you perhaps want to argue is that it's a moral duty to evaluate the facts as well as possible. But that argument would have to look much different. — Echarmion
Ok, so does that individual good translate to the group? I would argue that it doesn't, that the group consideration is different, since now you also have to weigh the cost to the group, which you never have to do with the individual consideration. That's why we have laws against vigilantism: because people can lie about their moral reasons or moral diligence in concluding that killing the murderer is correct. Hopefully the possibilities are fairly obvious.
So that would be an example of what's good for the individual not being good for the group.
I think that this part of your argument is foundational, and it will all fall apart unless you can alter the premiss to exclude exceptions to the rule like we did above. — DingoJones
Are you agreeing that under a certain set of circumstances, after all due consideration of all options (there is a scenario where police are not the best option, for example), etc., it's good (avoiding mind/body harm) to go kill this guy? — DingoJones
I would need to see evidence that people with scientific minds are as empathetic as other people, have emotional intelligence, and have good introspective skills, so that they know what biases they have when dealing with the complicated issues raised around human beings, where testing is often either unethical or impossible to perform. And I am skeptical that scientific minds are as good, in general, as other people when it comes to these things. I mean, jeez, look at psychiatry and pharma in relation to 'mental illness': that's driven by people with scientific minds, it is philosophically weak, and when it is criticized these very minds seem not to understand how skewed the research is by the money behind it, the PR in favor of it, selective publishing and even direct fraud. Scientific minds seem to me as gullible as any other minds, but often on the colder side as well. — Coben
It's always a temptation when presenting a theory to jump around between all the explanations and arguments and supporting arguments and premisses, because you are uniquely familiar with them. I'm not, though, so one thing at a time. — DingoJones
So that seems like it qualifies as good in your view, since the individual mind/body harm is at stake. Is that right? — DingoJones
Have you read “The Moral Landscape” by Sam Harris? — DingoJones
Ok, so your central claim seems to be that what is good for the individual is what's good for the group, as long as the good is defined as not doing harm to the body/mind. Is that correct? — DingoJones
Well, I'm not sure how that would change the fact that there are exceptions to your claim that haven't been accounted for. What exactly do you mean by objectively valuable? — DingoJones
What about if there are two harms, smoking and stress? The smoking relieves the stress but harms the body; but so would the stress. In that case, the smoking is harmful to the body but it's also beneficial to the human. — DingoJones
On a macro scale, what about decisions that benefit more people than they harm? Wouldn't any kind of utilitarian calculation be an exception to your rule? — DingoJones