Belief
p1 Choices made from unsupported belief have a high probability of chaotic consequences.
p2 Supported belief with evidence has a high probability of arriving at calculated consequences.
p3 Chaotic consequences are always less valuable to humanity than those able to be calculated.
Conclusion: Unsupported belief is always less valuable to humanity than supported belief. — Christoffer
Morality based on value
p1 What is valuable to humans is that which is beneficial to humanity.
p2 What is beneficial to a human is that which is of no harm to mind and body.
p3 Good moral choices are those that do not harm the mind and body of self and/or others.
Conclusion: Good moral choices are those considered valuable to humans because they are beneficial to humans and humanity. — Christoffer
Your conclusion should be that unsupported belief has a high probability of being less valuable to humanity (where chaotic consequences are bad for humanity). The “always” doesn't follow from the rest of your equation.
Also, you can have calculated consequences which are bad for humanity, so P3 doesn't follow either. — DingoJones
P1 is not true at all. Many large groups of humans value things that are not beneficial to all humanity. It's arguable humanity as a whole doesn't value what is beneficial to humanity as a whole, so I would say you need more support for p1. — DingoJones
P2 seems weak as well, as it's quite a stretch to claim everything that does no harm to mind and body is beneficial to humanity. Don't you think there are some things which do no body/mind harm but do not necessarily benefit mankind? Or vice versa... the sun harms your body but is beneficial to humanity. — DingoJones
What if I change it to "objectively valuable"? It seems that, within the context of what is objectively valuable for one person, the benefit for the many includes that one person. So for a value to be objective, it needs to be of benefit for the whole? Or am I attacking this premise from the wrong direction? — Christoffer
What things are beneficial to humanity and humans that do harm to the body or mind? The sun only does damage when you are exposed to it too much, which means overexposure to the sun is not beneficial to humans and humanity while normal exposure to the sun is.
So what is beneficial is valuable, as too much exposure to the sun is neither beneficial nor valuable. The premise also specifically points to one human, so not humanity as a whole, but it could be applied to humanity with expansion. But it's hard to see anything beneficial to a human that is at the same time harming the body and/or mind. Even euthanasia can't be harming the mind or body if the purpose is to relieve the body or mind from suffering. — Christoffer
Well, I'm not sure how that would change the fact that there are exceptions to your claim that haven't been accounted for. How exactly do you mean objectively valuable? — DingoJones
What about if there are two harms, smoking and stress? The smoking relieves the stress but harms the body; then again, so would the stress. In that case, the smoking is harmful to the body, but it's also beneficial to the human. — DingoJones
On a macro scale, what about decisions that benefit more people than they harm? Wouldn't any kind of utilitarian calculation be an exception to your rule? — DingoJones
Ok, so your central claim seems to be that what is good for the individual is what's good for the group, as long as the good is defined as not doing harm to the body/mind. Is that correct? — DingoJones
Have you read “The Moral Landscape” by Sam Harris? — DingoJones
It's always a temptation when presenting a theory to jump around between all the explanations and arguments and supporting arguments and premises, because you are uniquely familiar with them. I'm not, though, so one thing at a time. — DingoJones
So that seems like it qualifies as good in your view, since the individual mind/body harm is at stake. Is that right? — DingoJones
I would need to see evidence that people with scientific minds are as empathetic as other people, have emotional intelligence, and have good introspective skills so they know what biases they have when dealing with the complicated issues raised around human beings, where testing is often either unethical or impossible to perform. And I am skeptical that scientific minds are as good, in general, as other people when it comes to these things. I mean, jeez, look at psychiatry and pharma related to 'mental illness': that's driven by people with scientific minds, it is philosophically weak, and when criticized these very minds seem not to understand how skewed the research is by the money behind it, the PR in favor of it, selective publishing and even direct fraud. Scientific minds seem to me as gullible as any other minds, but further, often on the colder side. — Coben
Are you agreeing that under a certain set of circumstances, after all due consideration of all options (there is a scenario where police are not the best option, for example), etc., it's good (avoiding mind/body harm) to go kill this guy? — DingoJones
If inductive thinking about the situation leads to the conclusion that the best option is to kill the killer, and the killer doesn't have any justification for that killing other than malice or a mental illness that is impossible to change, then yes, it is justified, since you are defending lives from a morally bad choice another person is making. — Christoffer
Ok, so does that individual good translate to the group? I would argue that it doesn't, that the group consideration is different, since now you also have to weigh the cost to the group, which you never have to do with the individual consideration. That's why we have laws against vigilantism: because people can lie about their moral reasons or moral diligence in concluding that killing the murderer is correct. Hopefully the possibilities are fairly obvious.
So that would be an example of what's good for the individual not being good for the group.
I think that this part of your argument is foundational, and it will all fall apart unless you can alter the premise to exclude exceptions to the rule like we did above. — DingoJones
Morality based on value
p1 What is valuable to humans is that which is beneficial to humanity.
p2 What is beneficial to a human is that which is of no harm to mind and body.
p3 Good moral choices are those that do not harm the mind and body of self and/or others.
Conclusion: Good moral choices are those considered valuable to humans because they are beneficial to humans and humanity. — Christoffer
Combining belief and morality
p1 Unsupported belief is always less valuable to humans and humanity than supported belief.
p2 Good moral choices are those considered valuable to humans because they are beneficial to humans and humanity.
Conclusion: Moral choices out of unsupported beliefs are less valuable and have a high probability of no benefit for humans and humanity. — Christoffer
The scientific method for calculating support of belief
p1 The scientific method (verification, falsification, replication, predictability) is always the best path to objective truths and evidence that are outside of human perception. — Christoffer
A scientific mindset
p1 A person who, day to day, lives and makes choices out of ideas and hypotheses without testing and questioning them is not using a scientific method for their day-to-day choices.
p2 A person who, day to day, lives and makes choices by testing and questioning their ideas and hypotheses is using the scientific method for their day-to-day choices.
Conclusion: A person using the scientific method in day-to-day thinking is a person living by a scientific mindset, i.e. a scientific mind. — Christoffer
A scientific mind as a source for moral choice
p1 Moral choices out of unsupported beliefs are less valuable and have a high probability of no benefit for humans and humanity.
p3 When a belief has been put through the scientific method and survived as truth outside of human perception, it is a human belief that is supported by evidence.
p4 A person using the scientific method in day to day thinking is a person living by a scientific mindset, i.e a scientific mind.
Final conclusion: A person living by a scientific mind has a higher probability of making good moral choices that benefit humans and humanity. — Christoffer
This argument is a work in progress and is changing as objections are raised. — Christoffer
In a sense I wasn't questioning whether they are morally good, but if they have all the necessary kinds of skills and knowledge needed to make decisions. — Coben
I see your point and I agree that there are problems with viewing scientists as morally good, but that's not really the direction I'm coming from. It's not that science is morally good, it's that the method of research used in science can create a foundation of thinking in moral questions. Meaning that using the methods of verification, falsifiability, replication and predictability in order to calculate the most probable good choice in a moral question respects an epistemic responsibility in any given situation. — Christoffer
My concern here is that the scientific mind tends to ignore things that are hard to track and measure. For example, let's take a societal issue like drug testing in the workplace. Now a scientist can readily deal with the potential negative issue of false positives. This is fairly easy to measure. But the very hard to track effects of giving employers the right to demand urine from their employees, or teachers/administrators the right to demand that from students, may be very significant over the long term and in subtle but important ways, and are often, in my experience, ignored by the scientific mind. And I am thinking of that type of mind in general, not just scientists, including non-scientists I encounter in forums like this. A lot of less easy to measure effects, for example, tend to be minimized or ignored. — Coben
It does not simplify complicated issues and does not make a situation easy to calculate, but the method creates a morally good framework to act within rather than adhering to moral absolutes or utilitarian number calculations. So a scientific mind is not a scientist, but a person who uses the scientific method to gain knowledge of a situation before making a moral choice. It's a mindset, a method of thinking, borrowed from the scientific method used by scientists. — Christoffer
p2 What is beneficial to a human is that which is of no harm to mind and body.
p3 Good moral choices are those that do not harm the mind and body of self and/or others. — Christoffer
I am not sure why you're equating benefit and value in P1. Both "beneficial" and "valuable" are value judgements, and there doesn't seem to be any obvious reason to use one term or the other.
Furthermore, what you mean by "humanity" remains vague. Is humanity the same as "all current humans"? When you write "valuable to humans" do you mean all humans or just some?
In P2, it's questionable to define a benefit as the mere absence of harm, but it's not a logic problem. What is a logic problem is that P1 talks about benefits to humanity, and p2 about benefits to a single human. That gap is never bridged. It shows in your conclusion, which just makes one broad sweep across humans and humanity.
P3 is of course extremely controversial, since it presupposes a specific subset of utilitarianism. That significantly limits the appeal of your argument. — Echarmion
Again, I am confused by your usage of valuable and beneficial here. Since P1 already talks about what's valuable, it doesn't combine with P2, which defines value in terms of benefit. So the second half of P2 is redundant. — Echarmion
I have to nitpick here: the scientific method works entirely based on evidence within human perception. It doesn't tell us anything about what's outside of it. The objects science deals with are the objects of perception. What the scientific method does is eliminate individual bias, which I assume is what you meant. — Echarmion
That's not a syllogism. Your conclusion is simply restating P2, so you can omit this entire segment in favor of just defining the term "scientific mind". — Echarmion
While I understand what you want to say here, the premises just don't fit together well. For example, P1 is talking only about what is less valuable and has a high probability of no benefit. It's all negative. Yet the conclusion talks about what has a high probability of a benefit, i.e. it talks about a positive. And p4 really doesn't add anything that isn't already stated by p3. — Echarmion
Your conclusion is that knowing the facts is important to making moral judgements. That is certainly true. Unfortunately, it doesn't help much to know this if you are faced with a given moral choice.
What you perhaps want to argue is that it's a moral duty to evaluate the facts as well as possible. But that argument would have to look much different. — Echarmion
And a handsome work it is, too! But I wonder: many of the legs holding up your argument are either themselves unsupported claims or categorical in tone when it seems they ought to be conditional. In terms of your conclusions it may not matter much. The question that resounds within, however, is of how much relative value a "scientific mind" is with respect to the enterprise of moral thinking. It's either of no part, some part, or the whole enchilada. If it's not the whole thing, then what are the other parts? — tim wood
In a sense I wasn't questioning whether they are morally good, but if they have all the necessary kinds of skills and knowledge needed to make decisions. — Coben
My concern here is that the scientific mind tends to ignore things that are hard to track and measure. For example, let's take a societal issue like drug testing in the workplace. Now a scientist can readily deal with the potential negative issue of false positives. This is fairly easy to measure. But the very hard to track effects of giving employers the right to demand urine from their employees, or teachers/administrators the right to demand that from students, may be very significant over the long term and in subtle but important ways, and are often, in my experience, ignored by the scientific mind. And I am thinking of that type of mind in general, not just scientists, including non-scientists I encounter in forums like this. A lot of less easy to measure effects, for example, tend to be minimized or ignored.
A full-range mind uses a number of heuristics, epistemologies and methods. Often scientific minds tend not to notice how they also use intuition, for example. But it is true they do try to dampen this set of skills. And this means that they go against the development of the most advanced minds in nature, human minds, which have developed, in part because we are social mammals, to use a diverse set of heuristics and approaches. In my experience the scientific minds tend to dismiss a lot of things that are nevertheless very important and have trouble recognizing their own paradigmatic biases.
This of course is extremely hard to prove. But it is what I meant.
A scientific mind, a good one, is good at science. Deciding how people should interact, say, or how countries should be run, or how children should be raised requires, to me, at the very least, also skills that are not related to performing empirical research, designing test protocols, isolating factors, coming up with promising lines of research, and being extremely well organized when you want to be. Those are great qualities, but I think good morals or patterns of relations need a bunch of other skills, ones that the scientist's set of skills can even dampen. Though of course science can contribute a lot to generating knowledge for all minds to weigh when deciding. And above I did describe the scientific mind as if it was working as a scientist. But that's what a scientific mind is aimed at even if it is working elsewhere, since that is what a scientific mind is meant to be good at. — Coben
↪Christoffer The scientific method consists of the following:
1. collecting unbiased data
2. analyzing the data objectively to look for patterns
3. formulating a hypothesis to explain observed patterns
How exactly do these 3 steps relate to ethics?
What would qualify as unbiased data in ethics? Knowing how people will think/act given a set of ethical situations.
What is meant by objective analysis of data and what constitutes a pattern in the ethical domain? Being logical should make us objective enough. Patterns will most likely appear in the form of tendencies in people's thoughts/actions - certain thoughts/actions will be preferred over others. What if there are no discernible patterns in the data?
What does it mean to formulate a hypothesis that explains observed patterns? The patterns we see in the ethical behavior of people may point to which, if any, moral theory people subscribe to - are people in general consequentialists? Do they adhere to deontology? Both? Neither? Virtue ethicists? All?
Suppose we discover people are generally consequentialists; can the scientific method prove that consequentialism is the correct moral theory? The bottom line is that the scientific method applied to moral theory only explains people's behavior - are they consequentialists? do they practice deontological ethics? and so forth.
In light of this knowledge (moral behavioral patterns) we may be able to come up with an explanation of why people prefer or don't prefer certain moral theories, but the explanation needn't reveal to us which moral theory is the correct one; for instance, people could be consequentialists in general because it's more convenient, or because they were indoctrinated by society or religion to be thus, and not necessarily because consequentialism is the one and true moral theory.
All in all, the scientific method, what it really is, is of little help in proving which moral theory is correct: the scientific method applied to morality may not lead to moral discoveries from which infallible moral laws can be extracted for practical use. Ergo, the one who applies the scientific method to morality is no better than one who's scientifically illiterate when it comes to making moral decisions.
That said, I can understand why you think this way. Science is the poster boy of rationality and we're so mesmerized by the dazzling achievements it has made that we overlook the difference between science and rationality. In my humble opinion, science is just a subset of rationality and while we must be rational about everything, we needn't be scientific about everything. In my opinion then, what you really should be saying is that being rational increases the chances of making good decisions, including moral ones and not that being scientific does so. — TheMadFool