• A scientific mind as a source for moral choices
    because Chris doesn't even understand half of the criticisms being put in front of him. His responses clearly demonstrate that.Wolfman

    Does my edit of the first post not show just how well I understand the criticism? Do you think philosophy is about accepting "defeat", about who's right and who's wrong, as if it were a contest? I can't begin to tell you how sloppy it is to write such a thing during a philosophical dialectic. Please refrain from such things. It's the equivalent of all the unknown philosophers over the course of history whose names go unnoticed because all they do is attack with closed minds.

    I have never claimed to have a bulletproof argument, and I stated as much in my opening post. I have always said it's a work in progress whose merits I wanted to test. I have taken all the criticism into account for the next revision and tried to explain the point of view I'm working from in order to get more discussion out of it; that's it.

    If you want to gloat at a work in progress, you're clearly misunderstanding the basics of philosophy.
  • A scientific mind as a source for moral choices
    My objections, therefore, still stand.David Mo

    Of course, I need to go back to the drawing board, as I have said numerous times. I have tried to explain the foundation I'm building the argument upon. But since you pick and choose, and even said to ignore other writings in this thread, I don't think you want to understand my point of view here; you want to enforce your own.

    For the above to make sense, you will have to specify the concept of well being, the precise set of rules that you are proposing, why the well being that you have defined is the basis of morality. You will have to explain how you evaluate different concepts of welfare that men have. Etc.David Mo

    Which I could try to do when updating the argument. It's also the weakest core element of the argument, and one that has already been addressed by others, as said.

    As long as you don't do all this, your proposal remains in the field of indefinition and doesn't seem to lead anywhere. If you try it you will find all the difficulties that it entails. You will realize that these difficulties have already been dealt with in moral philosophy many times without finding a solution that satisfies everyone.David Mo

    I know that there are plenty of sources that are like this argument and I'm drawing from many of them, trying to unify different ideas. But the problem I have with your objections is that you seem unable to look at an unfinished set of ideas and understand the concept behind them, which is what I have been trying to explain. This is why you confuse the scientific method with what I am actually saying. You cannot move past the idea that my concept is about using science to determine morality; that is not what I'm saying. But you keep at it. If you can't understand the basics of the concept I have been trying to explain, then you absolutely will find things vague.

    So, when I say "borrowing the four cornerstones of falsifiability, verification, replication and predictability from the scientific method to apply to the method of thought in order to come to rational conclusions about a situation", does that sound like "using the entire scientific method to research moral choices"? Forget about the now invalid argument in the OP, read this again and tell me what you think I'm talking about here specifically.

    Therefore, talking about things like "scientific" or "strictly" does not have much future in the field of ethics. With apologies from Sam Harris, Dawkins, de Waal and others like you who seem to be excited by this possibility.David Mo

    Sam Harris ignores previous philosophy, thinks we can define everything by (in his case) sloppy neurological research, has nothing to support his claims, and has segments devoted to blasting religion simply because he... dislikes it. I understand that Sam Harris's argument and mine seem alike, but they're not. If you misunderstand how I include "science" in my argument I can see how you draw that conclusion, but you are mistaken again.
  • Does free will exist?
    I've been leaning towards free will NOT existing, after some deep thought on the subject. And to define 'free will' quickly, I would say it is "the ability to have acted differently". I would argue that there are 3 things that enforce your actions. Beliefs, Desires (or wants), Mood. None of which are your choice. Can you choose to believe in magical leprechauns? Can you choose to desire homosexuality over heterosexuality? Can you choose to be happy instead of sad?chatterbears

    We are formed by nature and nurture; they are two sides of the same coin and they define how we act. But free will needs to be defined first. If you are talking about free will versus determinism then no, we don't have any free will. There are those arguing that quantum randomness is a part of neurological activity and that randomness can therefore be part of how we choose something, but even setting aside the lack of evidence supporting it, randomness won't give you free will anyway. You are a product of deterministic pathways and you can't change that.

    However, in terms of practical philosophy, the nature of the universe is separate from how we define acts of freedom as human beings. Even though we live in an illusion of free will, it doesn't mean you are doomed to fate. That's a universal law we live within, but not something we perceive. The choices you make might be determined, but you act through your experience and knowledge in a way where choices feel free.

    One of the best cases for studying the consequences of free will as we live by it is criminal justice. By the very definition of determinism, criminals are the result of deterministic paths that led them there. If you put aside emotion when looking at the justice system, you realize that there are no criminals at all; "criminal" is a social construct for defining the outliers who suffer consequences from society, other people or mental illness. By determinism, they haven't chosen to be criminals, no one in their right mind would; they are forced or compelled by different factors.

    So through this lens, criminals should be treated as victims of determinism, and correcting those paths is the only way to get rid of criminality. Those who study justice and society tend to conclude the same thing: harsh punishment won't make a dent in fighting crime.

    But we still punish them, and many advocate for harsher punishments. This is our emotion acting towards them, not our intellect. And if we do, we are really acting as if free will existed. You cannot be a determinist without accepting this fact about justice; that would be cognitive dissonance.

    So how do we apply practical philosophy to this? How do we draw the line between determinism and practical ideas about free will? Because if we all just say we are the result of determinism, we could argue against any change. A criminal would just say that he's a product of determinism and doesn't have free will. But in order to rehabilitate the criminal to a place where he can function and be part of society, we first need to cut off the deterministic sources of crime, but also enforce the illusion of free will onto him in order for him to choose a new path.

    Therefore, we apply free will as a practical concept towards people, in order to open up change within them. It's an illusion, but it's a practical illusion that lets society work. The philosophical challenge, however, is where to draw that practical line. Most people draw that line out of emotion, without any rational thought put into it: the path of harsher punishment. But the empathic, empirical path is to study the determinism of every situation and draw the line where it is rational to do so. In terms of justice, most people are unable to draw the line correctly.
  • A scientific mind as a source for moral choices
    You wrote this yourself in your opening remarks. It corresponds exactly to the objections I made to you. I think your attempts to avoid those objections have made your ideas more confused, rather than more precise.David Mo

    But the argument in the OP is not valid anymore, which has been stated numerous times. So I urge you to first read what has been written throughout the thread, since you are ignoring that I am trying to expand on the issues in order to present a new version later. If you only return to the argument in the OP and ignore what I write now, I understand that it becomes confusing.


    -To know which acts are better than others you need to know what makes a good act and what makes a bad act. In other words, what you mean by "good" in a moral sense.David Mo

    Not if moral acts in themselves aren't good or bad. We can establish a foundation around well-being and harm that you then use when addressing all data points surrounding a certain choice. If you exhaust and maximize the data to the best of your ability, question your own biases while doing so, and strictly follow a ruleset built on the well-being/harm foundation, you are acting morally good through the process of thought itself. The act and its consequence have nothing to do with this; the consequence can be bad and the act itself can be bad, but the argument I am describing proposes that the morally good or bad lies in the act of calculating, not in the act that is calculated from it. The act of actively making the effort of epistemic responsibility is what is morally good, not the consequences of the calculated act or the calculated act itself.

    -You have not given a single observable and measurable characteristic that allows you to decide that an act is good.David Mo

    Because you are still talking about the act. The method I talk about here has nothing to do with good or bad acts; it has to do with calculating the act. Ignore the argument in the first post, it is outdated.

    -If you want to evaluate which acts are better than others in a scientific wayDavid Mo

    Still not what this is about.
  • Trust
    I don't look at this as a matter of trust. I do business with a lot of different people, many of whom I don't particularly trust, the question of whether I trust them or not just doesn't come up in my mind. The situation is more like one of need. I need the service they offer, so I do business with them without thinking about whether or not I ought to trust them. You, and unenlightened, might argue that the fact I do choose to do business with them implies that I trust them. I don't think that way, and I know that I do business with a few whom I particularly don't trust. I just need to be more wary of these people.Metaphysician Undercover

    I think you have hit upon the stumbling block for many here. This is the naivety of trust, that it does not occur to one to do otherwise. The veteran of Afghanistan who has a panic attack whenever he sees a curtain twitch has lost his trust in the benignity of strangers. To those of us who have not experienced the constant danger of snipers, it seems a bit mad - we call it PTSD. Why would you think a moving curtain is dangerous?unenlightened

    I think it's a problem of how the word trust is interpreted, then. We use trust when we mean need or dependence. Look at money, which is a social construct built around trust, need, and the necessity of keeping the cogs of society turning. As we talk about trust we will bend the word and its definition into many different interpretations. But they are indeed different versions of the same concept, and the concept is the core we need to discuss.

    So I would look at the Google issue more as a question of need. If they offer a service which is needed, then we use it, whether or not we trust them. But doing business with someone whom you do not particularly trust means that you need to be wary. We could assume, that just like doing business with anyone else, the company would want to give us honest service to maintain a reputation, but such assumptions are what leave us vulnerable.Metaphysician Undercover

    Exactly, but I think that's the thing here: trust is need, is necessity, is a contract. It's a contract that works until society sees it not working. How many companies have died because of misconduct? It happens, and the fear within companies of doing things that would destroy them is indeed a reality, just as people's fear of companies committing misconduct is a reality. This is why we have ethics boards, laws and regulations: to keep everyone in fear of doing things wrong.

    It's also a question of morality. We have laws that force us not to kill each other, but people can also already have morals that prevent them from killing, as a basic result of empathy. As long as the company isn't corrupted by its own complexity, it will have some form of morality through the people working there. And of course, that morality fails, just as people fail and commit crimes. But as a general rule, we have trust not in each other, but in the morality of others, which guides us even where we don't have laws.

    So can we trust Google? I don't think so. Can we trust them to do their best to act morally towards their customers? Yes. If they don't, they will one day fall as a company, as long as society remains free and laws and morality can review them. Being morally bad towards their customers is not good for business, so either they don't do it or they hide it. But hiding such actions is a very risky venture, possibly lethal for a company. All it takes is one person with empathy to speak out against the company, and the misconduct is stamped out, or the entire company with it.

    So, as you say, we can only assume them to be good, just as we can only assume others around us to be good. But outside moral theory, most people have empathy, which guides many moral choices, and people make up the companies we do business with. Google is a massive company, so there can be misconduct in some areas while others are perfectly fine; the key here is that we know we are vulnerable. As long as we do, we question.

    To question a service we use is a kind of agreement in the contract of trust. It's the "I can trust you with this, right?", interchangeable with "if I can't trust you with this, I will take you down". This kind of agreement is a foundation of the trust we give and have; fear and trust are two sides of the same coin. If we are to trust someone, we agree upon the fear of breaking that trust. The trust comes out of an agreement about that fear.
  • A scientific mind as a source for moral choices
    1. Subject of the thread you proposed: if a scientific method ("scientific mind") can objectively establish which acts are morally better ("priority").David Mo

    You start out directly wrong here by saying I'm looking for a scientific method to objectively establish morally better acts. I'm not; I propose borrowing four cornerstones of the scientific method into a mindset for calculating the most probably good moral act, based on a foundation of well-being and harm. "Objective" and "probable" aren't the same thing, and that is an important factor for this theory and fundamental to its core.

    2. To know which acts are better than others you need to know what makes a good act and what makes a bad act.David Mo

    No, not for this, because it doesn't put a value on the act; it puts a value on the method used to calculate the act. This is what I mean by taking a step back from other moral theories that try to define the acts themselves. It's a Kantian duty-type theory, where the duty is the method applied to find a morally good act, not the act or consequence itself. This theory is a moral anti-realist theory that doesn't focus on either consequence or act, but on how to form a probability around good and bad options before an act.

    3. If that method you propose is scientific and objective, it will be based on a set of observable and quantifiable "good" properties.David Mo

    The method is not. The "scientific mind" is not the scientific method; it's an idea about a mindset, a virtue of a person in others' words, holding not a thought or absolute idea but a way of thinking that draws on the four cornerstones of the scientific method. It was never meant as a way to calculate objective moral truths.

    If I say this theory is moral anti-realism but still incorporates the scientific method as an idea, you must take a moment to think about how I actually use the concept within the argument. Because right now you are making an assumption about what my argument is really about and then counter-arguing from that point of view, which means you have misunderstood my argument before you counter it.

    This can be because of many of the problems that others have pointed out, and because my argument is in fact flawed in its inductive reasoning in the OP; that's why I suggest reading through all the posts to see the ongoing discussion around the factors that are problematic about the original induction. I'm open to counter-arguments as long as they focus on the details of my argument, not a faulty interpretation of it.

    A typical case in moral philosophy is the combination of the lesser good for the greater number and the greater good for the lesser number.David Mo

    That has to do with consequences; my argument is more focused on deontology. How someone calculates the probability of the act has to do with consequences, but what I am saying is that the only moral action we can take is that calculation. The calculation itself is based on a set of rules, a foundation that guides it, just as the laws of physics guide new hypotheses about physics. I talk about the duty of epistemic responsibility in calculating an act, not about the consequences or the act itself. I propose that the duty to calculate is the only moral thing we can do; consequences and acts themselves are impossible to evaluate within moral theory.
  • Trust


    So what level of trust is enough for a functioning society? Do you trust scientists? Do you trust hospitals? Do you trust your mechanic not to tamper with the brakes? The building blocks of trust are many more than "if there's a chance of abuse, there will be abuse". That's a Murphy's-law type of reasoning that isn't very nuanced. It is true that abuse happens, so how do we minimize it? We can't get rid of the risk of abuse without losing freedom, so we can only minimize it: repercussions for companies conducting such abuse, risk of closure, legal action and so on, alongside the risk of the business losing the trust of its customers, which is a major part of keeping a business running. Risking that trust is not a good business strategy, and doing so requires extreme measures that could be even riskier.

    So what level of trust can you work with? And if you can't give trust in any direction, how would you solve that?

    Google is just a search engine that provides links to trustworthy, or untrustworthy information. It's not so much should you trust Google, but should you trust the sites that Google provides as a result of your search? Do you trust your own site-searching skills, and use of keywords, to find the right information you are looking for?Harry Hindu

    Exactly, and it's in their interest to look trustworthy. They gain nothing from falsely marking other websites as trustworthy or not; quite the opposite: if there were a clear marking system for trustworthy sites in the results, people would want to use Google more in order to be certain about their web searches.
  • Trust
    Yes, just like the milk seller depends on trust. Government, business, everyone in a society depends on trust for every interaction. And if we do not trust google, do we trust the independent body supervising them?

    I propose that the sickness of the age is that blows to trust have proliferated and they are indeed hard to recover from. But we cannot function without trust, and we cannot function without a search engine. I don't think there is another answer. Trust comes from honour, and so without honour we die. Thus the unreality of morality is seen to be somewhat exaggerated.
    unenlightened

    Yes, this is the fundamental problem of the post-truth era and it's a tricky one. I think that trust comes from repetition. Repetition of competence, repetition of providing evidence and facts.

    If a political leader provides facts and evidence, acts upon educated ideas and so on, they will, after repetition of such conduct, be treated as a trustworthy political leader.

    An independent body supervising this marking system would have to be founded by trustworthy people sitting on its committee: experts in their fields who have earned trust through repetition in their work. Then the independent body itself needs to repeat until it is labeled trustworthy.

    The marking system itself is based on repetition: repeated acts of a trustworthy nature keep the marking on a website, while misconduct marks it as untrustworthy. It might even need regulations and laws around it, so that Google does not control it but is perhaps required to carry it, and abusing it would be considered a crime against information.

    The big question is, in a time when no one can be trusted, can we create a system that can guide people to trustworthy sources of information? If we can have systems of review within science in order to exclude pseudoscience, why not a system for marking information, so that people know where to find evidence and facts and where to be careful? I think it's possible, and I think the alternative is worse chaos.
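
    To make the repetition idea concrete, here is a rough sketch in Python of how such a marking could be earned and lost. The names, weights and thresholds are placeholders of my own, not a concrete proposal; the point is only that trust accumulates slowly through repeated good conduct and is lost far faster through misconduct.

    from dataclasses import dataclass

    @dataclass
    class Source:
        name: str
        score: float = 0.0  # reputation accumulated through repetition

    def record_act(source: Source, trustworthy: bool) -> None:
        """Update a source's reputation after each reviewed act."""
        if trustworthy:
            source.score += 1.0   # trust builds slowly, act by act
        else:
            source.score -= 5.0   # misconduct costs far more than one good act earns

    def marking(source: Source) -> str:
        """Translate accumulated repetition into a public marking."""
        if source.score >= 10:
            return "trustworthy"
        if source.score <= -5:
            return "red mark: spreads disinformation"
        return "unmarked"

    # A site only earns the marking after sustained good conduct,
    # and loses it quickly after misconduct.
    site = Source("example-news.org")
    for _ in range(12):
        record_act(site, trustworthy=True)
    print(marking(site))          # -> trustworthy
    record_act(site, trustworthy=False)
    record_act(site, trustworthy=False)
    print(marking(site))          # -> unmarked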
  • Trust
    This is not really true. A company may work hard to gain the trust of customers, but once they receive it they have the customers by the balls. And since the company's priority is always its financial well-being there is no good reason why the company would not abuse that trust.Metaphysician Undercover

    Yes, agreed, that's why I said:

    It's important not to become naive and comfortable in their care; always question them, always question everyone. By constantly challenging and reviewing them we challenge their handling of our trust, and they will do anything to keep that trust. The risk of mishandling trust is such a bad business strategy that it gives us enough trust for the life we live. But always question them, otherwise they will find loopholes.Christoffer

    In essence, the larger the corporation, the heavier the fall. If financial well-being is their concern, a major blow to trust would be a major blow to financial well-being. The more a company relies on trust in its business, the worse the consequences of abusing that trust.

    That's why we always have to review these companies, and that's why things like whistleblowers, their protection, and transparency about company practices are so important.

    What I wrote about markings, though, has to do with standardized markings for websites that provide information. It would be in Google's interest to do this, since people want a trustworthy search engine. There isn't much to gain from abusing such a marking system in their searches, and they would be praised for battling the post-truth era's problems with information.
  • A scientific mind as a source for moral choices
    I usually present ideas of my own.David Mo
    I'll add something else about your "method" when I have time.David Mo

    Sure, but why are you using this thread for this? Why not start your own thread about morality? This discussion is about this method, so the time spent here should be about that, not different subjects.

    Objection: You cannot qualify an act as moral if you do not have a concept of what is moral and what is not. You cannot put a moral act prior others (the human community over the personal interest, for example) if you do not have a criterion of priority or hierarchy of some over others in the form of a rule (Choose x before y). You cannot claim to have an objective method for deciding which act is moral and which is a priority if you have not defined the objective validity of those criteria. And this implies a universal or a priori rule, as Kant would say.David Mo

    But you object by presenting a moral absolutist concept when I reject moral absolutism entirely.
    There are no morally good or bad acts that can be defined, only a method for finding the probability of the best morally good choice. To have a framework for that method, you need a foundation that guides the reasoning, and that foundation is well-being and harm. The reasoning built on top of that foundation then finds the parameters of well-being for any given situation, based on the current knowledge zeitgeist.

    This is why I urge you to read through this thread first. You seem to miss that I do not propose an objective way to act, but an objective way to calculate morality, and I propose that ethical philosophy will never find any solutions as long as it tries to create a framework of acts. It needs a framework of reasoning, and that is the only way to come close to an objectively moral way to live.
  • Trust
    As for the marking system, the system itself should be an independent standard. Google should implement it in search results, but the standardized system is not Google's. Review of how Google handles the system is therefore done by that independent committee.
  • Trust
    I agree about the concept of trust.

    But do you trust Google?unenlightened

    Not really, but I trust corporate image, and Google is actually in the business of trust. Their lifeblood is that we trust the safety Google provides and that the services provided are trustworthy. When reports came in about how Google handled paid search results, it was a major blow to their brand. The same goes for Facebook, which needs to keep the trust of its users.

    As long as the business requires the trust of its users, that is a level of trust the users themselves can rely on. I believe a Google-branded trust-marking system is possible, because Google wants to be the most trustworthy search engine. And if they started to mark pages as trustworthy because those pages paid Google for it, that would be a blow to their brand of trust that would be hard to recover from.

    We can trust the fear of losing trust. As long as there's a cold-war balance of trust between consumers and producers in a capitalist society, it will regulate itself. Customers want to trust a company and the company needs the customers' trust. Failure to comply results in failure of the business.

    So can we trust Google? No, but we can trust that they want to keep their business. It's important not to become naive and comfortable in their care; always question them, always question everyone. By constantly challenging and reviewing them we challenge their handling of our trust, and they will do anything to keep that trust. The risk of mishandling trust is such a bad business strategy that it gives us enough trust for the life we live. But always question them, otherwise they will find loopholes.
  • Donald Trump (All General Trump Conversations Here)
    If you think so, what is then your answer? Not having a democratic elections or what?ssu

    Not sure how you get that conclusion from what I wrote. But in terms of elections: first, a two-party election that forces Republicans to vote for a person like Trump enforces a mentality where they need to post-justify their choice and defend someone they clearly don't want as president. It creates a cognitive dissonance that pushes further chaos.

    Second, I have the idea that people should only be able to vote if they can answer basic questions about the politicians and parties involved in an election: a form, free to be filled out with any source of information, online, in libraries and so on, but which needs to be answered correctly in order to vote. This way, people who don't really care about their vote or about politics, those who just vote for bullshit reasons, would probably not feel it worth the energy to go through that process before voting, and it would concentrate votes among those with a basic understanding of the parties and people receiving them. A basic understanding is fundamental in a democracy that aims to lower the risk of demagogue politics.

    We also need a standardized marking system for online information: official, scientific and trustworthy media, trustworthy individuals, and red marks for those who actively spread disinformation/misinformation. Such markings could start off being handled by Google, since Google handles most of the searches in the world.

    Describing the details of a practical implementation of the above would take many pages, but this is a general outline of where I stand on how to improve democracy in the current information age and tackle the post-truth epidemic.
  • A scientific mind as a source for moral choices
    So much for the definition. In practically any culture you will be considered a good person if you behave according to these conditions.David Mo

    But we are discussing ethical philosophy. Just as a scientific theory doesn't have the same definition as "theory" in common speech, a moral act, or the definition of morally good acts, in ethical philosophy is not the same as how those terms are commonly used in language outside of the philosophical dialectic.

    You're partly right. If moral rules imply two different interests, mine and others', a major problem is proportion.David Mo

    You have taken that out of context. The priority I provided had to do with the order in which you think about harm and well-being within the method I proposed.

    I do not believe that there is a scientific yardstick for these uncertainties. Rational debate on them is advisable; scientific solutions are not possible. If you have this yardstick, I would like to know it. It would alleviate many of my daily concerns.David Mo

    Not sure that you fully understand what I argue for in this thread. I recommend that you read my posts to get more insight into the theory.

    And we have not yet entered into a particularly vexing case: what do we do with the cynic who refuses to follow any moral standards? Phew.David Mo

    I haven't proposed any moral standards. I've proposed a theory for a moral method of calculating moral acts. So if someone doesn't follow moral absolutes or standards, that's irrelevant to this theory, since I dismiss moral standards in favor of the method. There are no standards, only ways to figure out what is good from case to case. Someone who doesn't do this is epistemically irresponsible and immoral. What we do with them depends on what they are immoral about.

    I agree. There's no magic moral solution. What is moral are the conditions that make a moral choice possible. What are those conditions?David Mo

    Again, I recommend you read the comments and posts in this thread if you want to understand the theory of method I propose. If you mean that the conditions are the foundation on which the method is used, then I've listed them earlier.
  • A scientific mind as a source for moral choices
    Yeah, norms change which is why surveys may be repeated. Ideally maybe like once a year. Until then I'm afraid the concept of "morality" is likely to remain an intransitive, incommensurable spectre.Zophie

    I think it always will be, which is why the only thing we can decide on, in order to hack the Is-Ought problem (Hume), is to find the Is in common basic good and bad things, such as well-being and harm, and combine them with the Ought, in order to calculate the best possible choice or act in any given situation. Any other attempt at finding an objective morality, or trying to settle on what is moral, will fail. And I don't think a survey will hold up, even if we do it every year. It could lead to a shift in morality that could really be harmful to a lot of people just because civilization had "a bad day" the year of the survey.
  • A scientific mind as a source for moral choices
    As far as I can tell you are just arbitrarily saying we can't hurt the minority, but this conclusion doesn't follow from any of the principles you supplied in the OP. Additionally, you sprinkle the term "well-being" into your responses in some vague fashion, as if that will solve anything. You make no mention at all of well-being in your OP, by the way, and that was supposedly where you were defining your moral terms :roll: You have a ways to go before your half-baked theory makes any sense.Wolfman

    That's because the argument is a work in progress, as I state in the last sentence of the OP. The discussion following throughout is part of the process to make the argument clearer and better. I haven't updated the OP yet. I see the discussion as the philosophical dialectic to improve the argument, to review it and I'm thankful that you and the others do this.

    I have remarked earlier that there are flaws in the argument, based on how people have answered it, you included; especially that the part about value morality is not working at all.

    So you are right that the OP is flawed; that's why the discussion is important. I'm pretty convinced that the theory will hold up, but it's very vague and needs changes that make it crystal clear. The slavery argument didn't hold up against it, but I need to change the OP argument to show clearly why, if you get what I mean? :smile:
  • A scientific mind as a source for moral choices
    Of course. Science is about discovering what is the case, not what should be the case. Obviously it's not perfect. But it's at least empirical.Zophie

    Just to be clear, my idea of a scientific method or mindset is about borrowing the method into a framework of thought for any given moral dilemma. So my theory does not directly use the scientific method; it lets the scientific method guide the thinking.

    To my mind law is about as certain as ethics can get, so maybe you'll accept a legal parallel in the notion of common law, where standards are slightly more malleable and descriptive in pursuit of what I'll tentatively call "the least unusual and most popular".*Zophie

    Law can only follow moral philosophy. Law can never be ahead of philosophical definitions of morality. We base laws on the morality we have decided is correct. This is why laws are always changing, both through time and by consequences being analyzed by philosophy.

    The hypothetical scientific survey I proposed, which gives everyone on the planet some input, would follow a similar intention in order to establish a normative notion of universal morality, or as I would prefer to call it, kindness. Science doesn't do prescriptive knowledge; that's chiefly the job of philosophy.Zophie

    What someone ought to do as a good moral choice, based on a normative notion of universal morality, is only as good as the knowledge of the people creating that statistical norm. I would argue that a democracy voting in a president like Trump shows how poorly asking the people works for producing norms that are objectively good. And what about shifting tides in world views? It has been proposed that by 2035 the world will consist of 50% atheists. That means that if we do a survey right now, a lot of the moral norms will come from religious scripture, but in 2035 we will have much less of that. So we can't get norms while the norms are shifting; we can only get a method based on commonalities of what is good through time.
  • A scientific mind as a source for moral choices
    Interesting topic. I notice the issue of practicality has been raised. I wonder how people feel about a hypothetical global human survey which somehow qualifies what the majority of humans take to be moral? Could this data form a legitimate basis for our opinions? If so, this would be a scientific basis.Zophie

    Sure, but it would be a flawed foundation for morals, based on opinion and tradition instead of on universals. This is why a method with a foundation of doing no harm to humanity and the self works better at stopping people from reasoning immoral actions into moral ones.
  • Donald Trump (All General Trump Conversations Here)
    How long people will believe that utterly stupid line? Trump hasn't shaken up the system. Not a bit. On the contrary, corruption flourishes extremely well under an inept and defunct administration. All he has been able to do is that tax cut for the rich.ssu

    If people want to change the system they need someone who will actually change the system. Bernie Sanders's stance on social democracy would definitely have shaken up the system if applied, but the idea that Trump would shake up the system rests only on the notion that because Trump is so incompetent he would destroy the fundamentals of government, and then everyone would rebuild a new government on top of it. It's the Animal Farm idea of revolution, and it's a delusional strategy that has no grounds in reality.

    People who thought like this were only interested in the chaos, in "getting back at those politicians". I cannot see the reasoning behind it as anything other than idiocy and an uneducated emotional outburst looking for someone to blame for their own shitty lives. It's the same idea behind racism and blaming immigrants; how strange that so many of these racist haters of immigrants also voted for Trump. What a coincidence.
  • A scientific mind as a source for moral choices
    As far as I can tell, and I'm sure you know this like the back of your hand or inside and out (take your pick), every extant moral theory is flawed in some way or other making them hopelessly inadequate as a fully dependable compass when navigating the moral landscape. Given that our moral compass is defective, what course of action do you recommend? Each and every moral problem we face can't be solved by the simple application of a moral rule for there are no moral theories that covers all moral problems. Given this predicament, it isn't complete nonsense to suggest that when faced with moral problems we should do what a rational man would do and this is virtue ethics. I think Aristotle had his suspicions about moral theories - none seem to work perfectly.TheMadFool

    These are the basics I try to build past, because moral theories focus on how to act more than on how to figure out how to act. I try to take a step back in the process, proposing that morality comes a step before what moral theories usually aim at.

    Take the trolley problem, for example: in utilitarian terms you need to pull the lever; the theory demands this action. The method I propose does not say what action to take; it's a method for finding out the action. Using the method to find out which action to take is what is morally good. If you choose something based on this method, you have already acted morally before pulling or not pulling the lever. That doesn't mean the action is 50/50 good or bad; it means you used a method to calculate the probability of a good choice to the best of your ability, and that way of thinking and reasoning is what is morally good. And the method can't be corrupted to your own gain or will either; it respects both you and the group (humanity), so you can't abuse it, as in the example above with slavery for the greater good.

    So what I mean is this: we can accept that all moral theories have flaws, that moral landscapes shift through time, and that it's impossible to objectively give people answers on what actions to take in moral dilemmas. We can only propose a method to be applied dynamically to each moral dilemma. If the method always leads to a probability of good choices, it is a moral obligation to use such a method in order to act morally.
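
    To illustrate what I mean by the moral value sitting in the calculation rather than in the act, here is a rough sketch in Python. The checks and names are placeholders I made up for illustration, not the finished method; the point is only that the evaluation never looks at the chosen act or its outcome.

    from dataclasses import dataclass

    @dataclass
    class Deliberation:
        gathered_available_data: bool         # exhausted the data to the best of one's ability
        questioned_own_biases: bool           # actively tried to falsify one's own preference
        respected_harm_wellbeing_rules: bool  # stayed within the well-being/harm foundation
        chosen_act: str                       # e.g. "pull the lever" or "don't pull"

    def acted_morally(d: Deliberation) -> bool:
        """Judge the deliberation process, never the chosen act or its outcome."""
        return (d.gathered_available_data
                and d.questioned_own_biases
                and d.respected_harm_wellbeing_rules)

    # Two agents choose opposite acts in the trolley case; both count as having
    # acted morally because both went through the method.
    a = Deliberation(True, True, True, "pull the lever")
    b = Deliberation(True, True, True, "don't pull the lever")
    print(acted_morally(a), acted_morally(b))  # True True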
  • A scientific mind as a source for moral choices
    But they do have a rational argument. Their society is experiencing a boom in industry and commerce, health and life expectancy, more aggregate happiness, and so on and so forth. How is that not a rational argument?Wolfman

    Because when the argument is framed through my moral theory, they are acting immorally and have ignored other ways to prosper that don't require 5% of humanity to be slaves. They have not respected the foundation of well-being and harm, and they haven't done any unbiased rational thinking to arrive at their conclusion; I listed a few such objections in the earlier post.

    Let's charitably grant that their original decision to adopt slavery was initially suboptimal from a mathematical standpoint. Maybe there was only a 40% chance of success and 60% chance of failure. But they went on with adopting slavery anyway. Against the odds, slavery turned out to work great for them. So while you might say their original plan was suboptimal, nothing in your theory says their decision to continue their way of life is immoral/suboptimal, because it has been found to work for them, and it has withstood the test of time for the last several hundred years.Wolfman

    Your reasoning here requires that we divide humanity before making it logical, but you break the morality of it before trying to make it logical. Meaning: you can't break the foundation of the method, you cannot harm 5% of the group. You can only harm 5% of the group (humanity) if the other option is disproportionate suffering for all of humanity. And even by that reasoning, you need to rationally support why other methods aren't better. If humanity faced extinction, then arguing for the joint effort of everyone to fight it, with a few of the group suffering because of this, is not the same as enslaving people for it. But in your example, it's about capitalistic commercial prosperity, ignoring every other possible path to prosperity that includes the 5% among those getting the well-being out of it.

    There's no way to spin your argument without breaking the moral method I proposed; they will never be able to support slavery if they apply this moral method to it. If they don't use the method, they aren't acting morally, so their choice was immoral however we look at it.

    Here it seems your theory cannot address such a notion because it is entirely explicated from the perspective one takes prior to making a moral decision, and cannot make sense of the intuitively repugnant consequences that follow as a result following through on decisions turned out to have good odds after all.Wolfman

    You ignore the foundation of well-being and harm here. You cannot reason without it, and ignoring it to argue against the method is ignoring a huge part of the argument. You are essentially ignoring Asimov's laws of robotics in order to argue that a robot would kill a human, which makes the argument flawed because you ignore the specifics of the theory.

    By the lights of your own theory, nothing says slavery in this case is immoral.Wolfman

    Yes, it does: harm to the group (humanity). You cannot start reasoning about the probability of good from a position of harm to the group; you start with no harm to the group or the self and then reason towards a solution. How can you reason for slavery without breaking the foundation of the method?

    This method has the foundation and the rational reasoning as two parts that need to exist together: you cannot ignore the foundation and you cannot act without reasoning. Your conclusions about slavery don't survive either of them. Either you harm humanity (since the 5% are part of that group) or you fail to reasonably falsify your justification for slavery, since you ignore other possible ways to prosper.

    At the moment it seems you are grasping at straws, so I hope you look at the details of the method first. Right now you conclude that the method supports slavery by ignoring a big factor that is incompatible with such a conclusion.
  • A scientific mind as a source for moral choices
    I think your defense is one step removed from where it needs to take place. It doesn't matter how their way of life came to be. The point is that it's already happening, and it's working for them now. On what grounds do you tell them to stop?Wolfman

    I tell them to stop because they are harming 5% of humanity and do not have a rational argument built upon the foundation of not harming 5% of humanity. If they don't agree on that point, they aren't acting morally; I am.
  • A scientific mind as a source for moral choices
    So how do you define well-being, Christoffer? And how does well-being compute, if at all, into your idea? TMF is quite right to point out some similarities between what you are proposing and virtue ethics, but I'm trying to see you flesh out your position more and take it to its logical conclusion.Wolfman

    This is the tricky one, I agree. The foundation is the "Asimov's laws of robotics" of this argument, there to prevent the rational method from going bonkers. Virtue ethics is more about replicating the good characteristics of a person, while mine is more about method, so that's where I see the difference between them.

    At this stage of the work on this argument, I define well-being and harm through their basic definitions, applied to humans in a priority of three: humanity (the group), then the self, then the other. So you can act for your own gain as long as you don't act against the group of humanity, and you can act against another person as long as you don't attack that person as part of the group. If that person attacks you, he himself attacks the group (since you're part of it), and defending yourself is defending the group.
    All of this runs through harm and well-being of the mind and body, so you can't harm the body just to gain well-being of the mind and vice versa. You can harm the body if it's necessary for the well-being of the mind, meaning that if the harm to the mind is so severe that harming the body is necessary to reduce it, it is justified within harm/well-being.

    So:

    Reduce harm to the body/mind.
    Maximize well-being of the body/mind.
    Doing harm to either of them is justified in proportion to the harm to the other (so operating on a body to heal the mind is harm to the body in proportion to the harm the mind would suffer without it).

    Set within a priority of acting against/towards:
    Humanity (the group)
    The individual self
    The individual other (as long as this does not go against the first, humanity)
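
    As a rough sketch of how I picture that priority ordering working as a decision procedure, here it is in Python. The class, fields and checks are placeholders for illustration, not the finished framework; the point is only the order of evaluation: the group first, then proportionality for any harm to self or other.

    from dataclasses import dataclass

    @dataclass
    class Act:
        harms_humanity: bool   # harm to the group as a whole (any enslaved 5% are part of it)
        harms_self: bool
        harms_other: bool
        proportionate: bool    # harm justified only in proportion to the harm it avoids

    def permitted(act: Act) -> bool:
        """Apply the priority: humanity (group) first, then self, then other."""
        if act.harms_humanity:
            return False       # acting against the group is never permitted
        if (act.harms_self or act.harms_other) and not act.proportionate:
            return False       # any harm must be proportionate to the harm avoided
        return True

    # Self-defence: harming an attacker can be proportionate and does not target the group.
    print(permitted(Act(harms_humanity=False, harms_self=False,
                        harms_other=True, proportionate=True)))   # True

    # Enslaving 5% of humanity harms the group itself, so it fails at the first check.
    print(permitted(Act(harms_humanity=True, harms_self=False,
                        harms_other=True, proportionate=False)))  # False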
  • A scientific mind as a source for moral choices
    How does this theory escape some of the traditional criticisms leveled at utilitarianism. Imagine a world where 95% of the population believes slavery is a good thing. By enslaving the 5% minority they are able to develop their civilization to new heights and usher in a period of prosperity that has lasted for the last several hundred years.Wolfman

    How do they come to the conclusion that enslaving the 5% follows the framework of avoiding harm to humanity? It harms 5% of humanity, and it might cause further harm through the consequences of slavery, in the form of civil wars in later years. The justification for slavery falls flat when the proposed method of thinking is used.
    And even if it only harms the bodies of a few to give a utilitarian good to the others for many years to come, what about harm to the mind? Aren't we still dealing with harm to the mind through the culture that followed the slave culture in the US? A society can prosper on the bodies of the few, but it's questionable, or by the historical record even certain, that people living with the knowledge of historic slavery as the foundation of their society will not have maximized well-being. Choosing to enslave 5% is a choice to speed up the development of a capitalistic society. Wouldn't it be better to change the society's economic foundation, if the goal is to improve society fast and then prosper, minimizing the harm to the 5% and including them in the well-being of the entire group (humanity)?

    I just applied the method to a question on which utilitarianism would simply settle for a yes.

    The counter-argument you present here ignores the foundation of well-being and harm that the rational thinking is built upon. And using a scientific mindset is about exhausting all possible outcomes of a choice and excluding your biases, so it would be impossible to see the benefit of slavery over the benefit of other solutions.
  • A scientific mind as a source for moral choices
    Let me get this straight. The method that one uses to arrive at a moral decision is what morality is about and not the moral decision itself for reasons I can only guess as having to do with the lack of a good moral theory.

    Wow! That's news to me although such a point of view resembles virtue ethics a lot - Aristotle, if virtue ethics is his handiwork, seems to have claimed that the highest good lies in being rational - the method, rationality, is more important than the what is achieved through it. That said, if one is rational, a consequence of that would be making the right decision, whether moral or otherwise, no? Unless of course morality has nothing to do with rationality which would cast doubt on your claims. How would you make the case that rationality can be applied to morality? Is being moral rational? I believe the idea of the selfish gene, which subsumes, quite literally, everything about us, points in a different direction.
    TheMadFool

    I see your point about virtue ethics, but that has more to do with replicating those who have virtue in order to be good, not with using a method of thinking and reasoning in order to be good. The foundation for the method isn't about virtues, but about how we define harm and well-being. Virtue is more about characteristics, and it's a very loose, slippery form of ethics that I'm not so sure works very well.

    And using the rational method of thinking still needs to be combined with the foundation of well-being and harm, otherwise you could rationally argue for very immoral things. It needs a framework to be a method of good morals.

    My argument focuses on this specific form of thinking as a defined, morally good way to live, not on replicating vague virtues or just being rational without a framework around it.
  • Donald Trump (All General Trump Conversations Here)
    Here's a question about the disinfectant-gate.

    Even if he doesn't directly tell people to drink bleach, by some reports people have followed his reasoning and hurt themselves or died because of it. Would it legally be possible to charge Trump with manslaughter or constructive involuntary manslaughter or criminally negligent manslaughter?

    I find no one asking that question, as if it couldn't be applied to him. By all logical reasoning, shouldn't it be?
  • A scientific mind as a source for moral choices
    1. A technical impossibility: human affairs are not predictable. You cannot objectively predict effects from causes as in physics. If these were the case it would be awful. Imagine predictable tools in Hitler's hands. Human slavery would be warranted.
    2. There is not logical contradiction in preferring the falling of the whole world before I have a toothache. That is to say, you cannot deduce "to ought" from "to be". Unless you scientifically establish that the lowest good of the highest number is preferable to the highest good of the lowest number. And with what yardstick do you measure the greater or lesser good. But the utilitarians have been trying to solve this question for centuries, without success so far.

    That is why I am afraid that in ethics we will always find approximate answers that will convince more or less good people.
    David Mo

    This is the foundation of my thinking: that we cannot determine which moral acts are good or bad. So how can we be morally good? We can be so by having a mindset for tackling moral questions that maximizes the probability of making a good choice, based on definitions outside of ourselves and our biases. The act of calculating which choice to make is the morally good thing, not the act that results from it.

    Your second point is the tricky one. It has to do with the definitions that are used as a foundation for the scientific mindset to build upon. How do we define well-being and harm? I would argue that it's a form of priority: first humanity, then the self, then the other. Meaning, your own well-being cannot come before humanity, but it can come before another person. So you cannot put your own well-being above humanity, but you can put it above another person as long as that doesn't also set your well-being before humanity. That way, killing another person to save yourself can happen and be good, since it is also an act to preserve the well-being of the group (humanity), of which you are a part; but killing someone else for your own gain cannot be done, since you would be acting against the well-being of the group (humanity), of which the other is also a part.
    By defining when well-being can be overridden we get a clearer definition of well-being and harm, and we can use that definition as a foundation for how to test our hypothetical moral choice in any given situation.

    So it isn't about defining the morally good by the act and choice we make, but by how we figure out what choice and act to make. The act of figuring out, by a method of bypassing biases and understanding consequences through a set of definitions about harm and well-being, is what defines us as morally good or bad, not the act or choice we arrive at in the end.
  • A scientific mind as a source for moral choices
    In another possible world people play Tetris all day. They are otherwise physically and psychologically healthy people, but they make the decision to play Tetris, in a room by themselves, for 10 hours per day. Now, this decision doesn't seem to harm their mind or body, nor the minds or bodies of anyone else; however, making the decision to play Tetris all day doesn't seem like the sort of decision we would normally categorize as "moral" either. But by the lights of your own theory, we would have to do that. How would you account for that?Wolfman

    Are there any choices that aren't fundamentally moral choices? The decision to have milk in your coffee instead of drinking it black might not seem moral, but if you calculate the consequences of the choice, it has moral ramifications in many directions. What are the consequences of playing Tetris 10 hours per day? How does that affect them outside of those hours? How does it affect the rest of the world? And so on. It becomes a moral choice, since the only choice that isn't moral is a choice made outside of a human mind, but such a choice can't be made: we first need a human mind to make a choice, and by making a choice in a human mind it affects the self and others.

    I fail to see how any choice isn't a moral choice. It depends on the perspective and the scale at which you look at the consequences of a choice.
  • A scientific mind as a source for moral choices
    ↪Christoffer The scientific method consists of the following:

    1. collecting unbiased data
    2. analyzing the data objectively to look for patterns
    3. formulating a hypothesis to explain observed patterns

    How exactly do these 3 steps relate to ethics?

    What would qualify as unbiased data in ethics? Knowing how people will think/act given a set of ethical situations.

    What is meant by objective analysis of data and what constitutes a pattern in the ethical domain? Being logical should make us objective enough. Patterns will most likely appear in the form of tendencies in people's thoughts/actions - certain thoughts/actions will be preferred over others. What if there are no discernible patterns in the data?

    What does it mean to formulate a hypothesis that explains observed patterns? The patterns we see in the ethical behavior of people may point to which, if any, moral theory people subscribe to - are people in general consequentialists? Do they adhere to deontology? Both? Neither? Virtue ethicists? All?

    Suppose we discover people are generally consequentialists; can the scientific method prove that consequentialism is the correct moral theory? The bottomline is that the scientific method applied to moral theory only explains people's behavior - are they consequentialists? do they practice deontological ethics? and so forth.

    In light of this knowledge (moral behavioral patterns) we maybe able to come up with an explanation why people prefer and don't prefer certain moral theories but the explanation needn't reveal to us which moral theory is the correct one; for instance people could be consequentialists in general because it's more convenient or was indoctrinated by society or religion to be thus and not necessarily because consequentialism is the one and true moral theory.

    All in all, the scientific method, what it really is, is of little help in proving which moral theory is correct: the scientific method applied to morality may not lead to moral discoveries from which infallible moral laws can be extracted for practical use. Ergo, the one who employs the scientific method to morality is no better than one who's scientifically illiterate when it comes to making moral decisions.

    That said, I can understand why you think this way. Science is the poster boy of rationality and we're so mesmerized by the dazzling achievements it has made that we overlook the difference between science and rationality. In my humble opinion, science is just a subset of rationality and while we must be rational about everything, we needn't be scientific about everything. In my opinion then, what you really should be saying is that being rational increases the chances of making good decisions, including moral ones and not that being scientific does so.
    TheMadFool

    I'm not sure you have carefully read my reasoning in this thread; there are a lot of things mentioned in the responses to others that further explain my point. I argue that the scientific mind is about how a scientist tackles the scientific method, and that borrowing this mindset for how we tackle moral questions, on a foundation of definitions around well being and harm, creates an act of unbiased epistemic responsibility in thinking. This mindset is how we act morally good, not the act itself. To rationally reason past our own biases and arrive at the best moral choice is how we act with good morals; the act and its consequences cannot be considered objectively good or bad, only how we arrive at the choice we make.

    What you are describing is closer to using the scientific method to arrive at the best moral system, which isn't what my argument or conclusion is about.

    But your last sentence touches upon what I'm saying: that being rational, through using a method of thinking, is how we define good morals. What I define the method as is borrowing how scientists tackle their questions in order to tackle the moral questions you encounter. Being moral is to detach yourself, examine the choices through the scrutiny of unbiased reasoning and arrive at a choice. By doing so you act with good morals, however immoral the chosen act seems to be at a surface level.

    The fundamental question in ethics is: how do we choose/act good or bad? My answer is that you don't; you calculate the probability of a good choice/act, and that calculation is what is morally good, not the resulting act of the calculation itself, since trying to answer what objectively good morals are is impossible.
  • A scientific mind as a source for moral choices
    In a sense I wasn't questioning whether they are morally good, but if they have all the necessary kinds of skills and knowledge needed to make decisions.Coben

    No, they don't, but the method scientists use is focused on bypassing biases and perception to arrive at truths outside of the human mind. If such a process can be reframed as a mindset, a way to filter your choices so as to arrive at moral acts that exist outside of your biases, based on foundational definitions about harm and well being, then the method itself is what defines good morals, not the act.

    My concern here is that the scientific mind tends to ignore things that are hard to track and measure. For example, let's take a societal issue like drug testing in the workplace. A scientist can readily deal with the potential negative issue of false positives; this is fairly easy to measure. But the very hard to track effects of giving employers the right to demand urine from their employees, or teachers/administrators the right to demand that from students, may also be very significant over the long term and in subtle but important ways, and in my experience they are often ignored by the scientific mind. And I am thinking of that type of mind in general, not just scientists, including non-scientists I encounter in forums like this. A lot of less easy to measure effects, for example, tend to be minimized or ignored.

    A full range mind uses a number of heuristics, epistemologies and methods. Often scientific minds tend to not notice how they also use intuition for example. But it is true they do try to dampen this set of skills. And this means that they go against the development of the most advanced minds in nature, human minds, which have developed, in part because we are social mammals, to use a diverse set of heuristics and approaches. In my experience the scientific minds tend to dismiss a lot of things that are nevertheless very important and have trouble recognizing their own paradigmatic biases.

    This of course is extremely hard to prove. But it is what I meant.

    A scientific mind, a good one, is good at science. Deciding how people should interact, say, or how countries should be run, or how children should be raised requires, to me, at the very least also skills that are not related to performing empirical research, designing test protocols, isolating factors, coming up with promising lines of research and being extremely well organized when you want to be. Those are great qualities, but I think good morals or patterns of relations need a bunch of other skills, and ones that the scientist's set of skills can even dampen. Though of course science can contribute a lot to generating knowledge for all minds to weigh when deciding. And above I did describe the scientific mind as if it was working as a scientist. But that's what a scientific mind is aimed at even if it is working elsewhere since that is what a scientific mind is meant to be good at.
    Coben

    I think there's a fundamental misinterpretation of how I use the idea of a scientific mind as a method. Maybe it's closer to traditional rational inductive methods. The idea isn't to science the hell out of day to day moral choices, but to have the scientific method in mind as a guide for how to arrive at moral choices.

    Meaning, if I have a moral problem to solve, I need to factor in "data" and think about the moral problem with my biases in mind:

    1. Can I verify my hypothetical act?
    2. Does it hold up against falsification (is it the act that arrives at the best conclusion in terms of well being and reduction of harm)?
    3. Is my hypothetical choice open to replication (can it be universal in other situations, or is it only a gain for me)?
    4. Can I, through this thinking, predict the outcome (even past the obvious consequences)?

    As I filter my hypothetical act through these cornerstones of the scientific method, does it hold up as the best choice on the foundation of well being and minimized harm for me, another individual and humanity as a whole? The best choice does not mean the act is objectively good, but the process of arriving at that conclusion is objectively good, since it is the limit of how well humans can arrive at truths outside of their own minds. It's a way of thinking that maximizes our ability to rationally find an answer to a moral question, and by doing it we act with good morals regardless of the consequences, since we cannot factor in things that haven't happened yet, only what we know at the time of calculating the choice. To calculate the best moral choice is the good moral act, not the calculated act itself.
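
    As an aside, here is a minimal sketch in Python of the filter described above, written only to make the order of the checks concrete. Every name and the yes/no scoring are my own illustrative assumptions; nothing here claims that moral deliberation actually reduces to booleans.

        # A minimal sketch of the four-cornerstone filter described above.
        # The class, field names and boolean scoring are illustrative assumptions.
        from dataclasses import dataclass

        @dataclass
        class CandidateAct:
            description: str
            verified: bool                 # is the act backed by the facts I actually have?
            survives_falsification: bool   # does it still look best when I try to refute it?
            replicable: bool               # would it be acceptable for anyone in a similar case?
            outcome_predictable: bool      # can I foresee consequences past the obvious ones?

        def passes_filter(act: CandidateAct) -> bool:
            """True only if the hypothetical act clears all four cornerstones."""
            return all([
                act.verified,
                act.survives_falsification,
                act.replicable,
                act.outcome_predictable,
            ])

        # Example with a made-up choice: the unclear outcome signals that more
        # information should be gathered before acting.
        act = CandidateAct("tell the uncomfortable truth", True, True, True, False)
        print(passes_filter(act))  # False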
  • A scientific mind as a source for moral choices
    And a handsome work it is, too! But I wonder: many of the legs holding up your argument are either themselves unsupported claims or categorical in tone when it seems they ought to be conditional. In terms of your conclusions it may not matter much. The question that resounds within, however, is of how much relative value a "scientific mind" is with respect to the enterprise of moral thinking. It's either of no part, some part, or the whole enchilada. If it's not the whole thing, then what are the other parts?tim wood

    Thank you :) And yes, it's obvious when reading comments that there's more work to be done on this.

    As I see it, we can definitely find some truths about well being and harm for humans and humanity, but those truths still shift with the tides of new findings about human health. Still, the intention to draw upon the current knowledge of well being and harm is the foundation, and the method of thinking, the mindset built upon that foundation, is the morally good act. So what I'm proposing is that you can never create moral axioms or calculate moral absolutes, but the intention and method behind choosing morally can be what defines good morals. The method for finding out how to act is what morality is about, not the act itself. So finding a method that excludes biases is finding a system for good morals that objectively works for that purpose. It's my hypothesis that such a system can exist and should be the foundation for how we act morally.
  • A scientific mind as a source for moral choices
    I am not sure why you're equating benefit and value in P1. Both "beneficial" and "valuable" are value judgements, and there doesn't seem to be any obvious reason to use one term or the other.

    Furthermore, what you mean by "humanity" remains vague. Is humanity the same as "all current humans"? When you write "valuable to humans" do you mean all humans or just some?

    In P2, it's questionable to define a benefit as the mere absence of harm, but it's not a logic problem. What is a logic problem is that P1 talks about benefits to humanity, and p2 about benefits to a single human. That gap is never bridged. It shows in your conclusion, which just makes one broad sweep across humans and humanity.

    P3 is of course extremely controversial, since it presupposes a specific subset of utilitarianism. That significantly limits the appeal of your argument.
    Echarmion

    As per the discussion that followed with DingoJones, I'm aware of the problems in my premises for this part of the argument, so I'm reworking this (I should maybe mark it).

    Again, I am confused by your usage of valuable and beneficial here. Since P1 already talks about what's valuable, it doesn't combine with P2, which defines value in terms of benefit. So the second half of P2 is redundant.Echarmion

    Agreed. I think the main culprit is the argument about value-based morals, which would also change the argument that combines the two conclusions.

    I have to nitpick here: the scientific method works entirely based on evidence within human perception. It doesn't tell us anything about what's outside of it. The objects science deals with are the objects of perception. What the scientific method does is eliminate individual bias, which I assume is what you meant.Echarmion

    Scientific theories are still the best measure of truth that we have, and some of them bypass our perception through pure math. I guess if you apply Cartesian skepticism you could never know anything, but the theories we arrive at in science still have practical applications that further support their validity beyond mere perception.

    But, as you say, my point here is that the scientific method eliminates individual bias, and that this is important for epistemic responsibility.

    That's not a syllogism. Your conclusion is simply restating P2, so you can omit this entire segment in favor of just defining the term "scientific mind".Echarmion

    Makes sense. I might have been so focused on making the argument foolproof that it fooled itself :)

    While I understand what you want to say here, the premises just don't fit together well. For example P1 is talking only about what is less valuable and has a high probability of no benefit. It's all negative. Yet the conclusion talks about what has a high probability for a benefit, i.e. it talks about a positive. And p4 really doesn't add anything that isn't already stated by p3.Echarmion

    I think that the first premise needs rework since it's based on the flawed first part of the argument. So if I rework that I think it's gonna be more logical.

    Your conclusion is that knowing the facts is important to making moral judgement. That is certainly true. Unfortunately, it doesn't help much to know this if you are faced with a given moral choice.

    What you perhaps want to argue is that it's a moral duty to evaluate the facts as well as possible. But that argument would have to look much different.
    Echarmion

    I think you're right, and I might have bogged down the argument with parts that are unnecessary. The conclusion I'm arguing for is that using a scientific method of thinking about day-to-day moral choices is how you act morally good. Since morals shift and change, we either have to say that there are no morals, and that there's no point in discussing morals if there aren't any, or conclude that there are some basic things about which we think morally. If there are such things, what are they, and how do we act morally good according to them?

    So the argument needs to prove that we have an objective need for well being and an objective need to avoid harm to body and mind; then show how we can arrive at the most probable truth using methods found in the scientific method, and how this method can be applied to look for the best outcome in any given choice. The point is that borrowing the scientific method as a framework for how to think, and applying it to a set of basic objective human needs, is a moral strategy that we can consider morally good.

    We cannot define good morals by acts or consequences alone; only by maximizing our understanding of a complex moral issue and acting to the best of our ability using such a method can we maximize the probability of doing good, and therefore the act of doing this is what it means to act with good morals.
  • A scientific mind as a source for moral choices
    Ok, so does that individual good translate to the group? I would argue that it doesn't, that the group consideration is different since now you also have to weigh the cost to the group, which you never have to do with the individual consideration. That's why we have laws against vigilantism, because people can lie about their moral reasons or moral diligence in concluding that killing the murderer is correct. Hopefully the possibilities are fairly obvious.
    So that would be an example of what's good for the individual not being good for the group.
    I think that this part of your argument is foundational, and it will all fall apart unless you can alter the premiss to exclude exceptions to the rule like we did above.
    DingoJones

    Yes, but it is good for the group if I defend the group (more people, i.e. the family and possibly other families after mine). There are also proxy-choices: if, after killing this person, I try to help people with mental illness not to be treated wrongfully by society so that they don't end up like the killer, I act upon that killing to help others avoid ending up in a situation of being killed in self-defense. The choices accumulate, and if more and more people act through the method I propose, they all help each other.

    The act of killing is also, in a sense, forced vigilantism. The moral choice of killing the killer only occurs because the police and other systems fail to protect. Killing the killer is only a last resort when looking at the options in front of you. So lying about moral reasons is in itself morally bad, and vigilante actions fail when using the method I propose. It's not an act of vigilantism if it's the only option left to protect others from harm, but it is vigilantism if done without regard to other options and solutions.

    Doing something as a vigilante is to act against the possibility of other options. First, you can't be a vigilante if there isn't a choice to act outside of a legal system, and second, you can't be a vigilante if there isn't a system to begin with. To act as a vigilante is to actively act against a system that's supposed to protect you. If such a system fails or doesn't exist, then you are not a vigilante.

    Think about a real situation with the current legal system, police and everything else we have for justice. No court would charge you with murder or call you morally wrong if the police failed to act, there were no protective measures against a known killer whose whereabouts were known, and the threat of the killer acting on his threats was concrete, based on previous killings. Your action in that case is not the act of a vigilante, since there was no working system to balance your act against. If the system was there and worked, the police would have acted on it; and if the police were misled and you ended up in a situation of self-defense, then it would be self-defense, not vigilantism.
  • A scientific mind as a source for moral choices
    Are you agreeing that under a certain set of circumstances, after all due consideration of all options (there is a scenario where police are not the best option for example) etc, its good (avoiding mind/body harm) to go kill this guy?DingoJones

    If inductive reasoning about the situation leads to the conclusion that the best option is to kill the killer, and the killer has no justification for his killing other than malice or a mental illness that is impossible to change, then yes, it is justified, since you are defending lives from a morally bad choice another is making.

    But researching the situation also requires understanding the reasons the killer has to kill you and your family. What if I actually caused the death of the killer's family or of people around them? Then surely the killer has morality on his side in taking out that justice? Well, not really, since it's an act of retaliation, and such an act doesn't hold up in terms of harm: my family didn't do anything, I did. And there isn't really anything to say that my death, if I've done such a thing, is bad. But that requires insight into who I am today compared to who I was when I caused the killer's family to die. So it almost always becomes a bad moral choice when balancing the factors of retribution against someone. The killer also has the obligation to validate his actions through this method of thinking. Will I kill more families? Am I a changed man? Is the better act for him to demand that I help others, as justice for all the harm I caused?

    But if I didn't do anything, and this killer is on his way to kill my family out of malice or mental illness, and I have no help from others, so that the only way to defend other lives (my family and myself in this case) is to act, then there's no time to change the laws on mental health care etc. and the only option is to kill the killer. The morally good thing to do here is to kill, but also, perhaps, to push for better mental health care afterwards so that situations like these don't happen to other families, and to argue for better handling by the police and protectors who couldn't handle the situation I ended up in.

    So, even after the act of killing the killer, I could find further moral actions to be taken that are proxy-choices to the initial moral choice of killing him.
  • A scientific mind as a source for moral choices
    I would need to see evidence that people with scientific minds are as empathetic as other people, have emotional intelligence, have good introspective skills so they know what biases they have when dealing with the complicated issues, where testing is often either unethical or impossible to perform, that are raised around human beings. And I am skeptical that the scientific minds are as good, in general, as other people when it comes to these things. I mean, jeez, look at psychiatry and pharma related to 'mental illness', that's driven by people with scientific minds and it is philosophically weak and also when criticized these very minds seem not to understand how skewed the research is by the money behind it, the PR in favor of it, selective publishing and even direct fraud. Scientific minds seem to me as gullible as any other minds, but further often on the colder side.Coben

    I see your point and I agree that there are problems with viewing scientists as morally good, but that's not really the direction I'm coming from. It's not that science is morally good, it's that the method of research used in science can create a foundation for thinking about moral questions. Meaning that using the methods of verification, falsifiability, replication and predictability in order to calculate the most probable good choice in a moral question respects an epistemic responsibility in any given situation.

    It does not simplify complicated issues and does not make a situation easy to calculate, but the method creates a morally good framework to act within rather than adhering to moral absolutes or utilitarian number calculations. So a scientific mind is not a scientist, but a person who uses the scientific method to gain knowledge of a situation before making a moral choice. It's a mindset, a method of thinking, borrowed from the scientific method used by scientists.
  • A scientific mind as a source for moral choices
    It's always a temptation when presenting a theory to jump around between all the explanations and arguments and supporting arguments and premisses because you are uniquely familiar with them. I'm not though, so one thing at a time.DingoJones

    Agreed.

    So that seems like it qualifies as good in your view, since the individual mind/body harm is at stake. Is that right?DingoJones

    In a sense, yes, but the mind/body harm is only a springboard for how to tackle the situation. We can first assess that, because we have a murderer who always follows through on his threats, there is a real possible situation where harm to me and my family is at stake: the murderer is acting out of bad morals, and I can prevent this act by doing the same kind of harm. Now, if the choice were simply to be killed or to kill, we could easily conclude that killing the other to prevent a killing is the best moral choice. But reality is never that black and white, and what I propose is that we have an obligation to gather as much "data" as possible about the situation to know which choice is the moral one.

    1. Why does the killer want to kill me and the family?
    2. What is the timeframe for me to act upon knowledge of the threat?
    3. Are there any other preventive measures that can be taken instead of killing the killer?
    4. If other actions are taken to prevent the killer, will the killer always try again until succeeding?


    1. If the killer wants to kill me and my family because of something I have done to him, I must already have done something to justify this action from him, and if so, is his response proportional, reasonable? Is it out of rage or out of a sense of justice? Or has the killer chosen me and my family randomly and is acting out of mental health issues?

    2. Will this act of killing me and my family happen five minutes from now, a day from now or a week from now? If the answer is unclear, or if it's just a minute before the killer bursts into my home, then acting in deadly defense is justifiable since there's no time to find other options. If it's a week from now, I have an obligation to seek answers that can help me decide whether killing the killer is the moral thing to do or not; killing makes less sense in such a long timeframe.

    3. Can I call the police? Can I get protection? If I know the killer's whereabouts well enough that I could kill him, then I can also take other precautions and actions to stop him instead of killing him.

    4. If I have options to prevent the killer, but the killer will always return and try again, or do anything to succeed, then I have exhausted all the options and would need to permanently stop him.

    Taking all of these into consideration, you can find the best moral action to take. That could be to kill the killer, based on the parameters of the problem, but it could also be that the initial thought was to kill the killer while there were other options only visible when looking closer at the problem. And even then, if the timeframe was too short, it's not a morally bad thing to kill the killer when there wasn't any time to act on further research.

    The idea of the method is to always ask questions and research the moral choice to take and that the act of research is the morally good thing to do. The intention of figuring out the best outcome with respect to harm of body/mind of everyone involved, including the killer, is the morally good path to take.
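
    To make the procedure above a bit more concrete, here is a rough, purely illustrative Python sketch of how the questions could be worked through. The function name, the inputs and the time threshold are all assumptions of my own; the only point is that lethal defense sits at the end of the chain of options, not at the start.

        # Hypothetical sketch of the question-driven procedure above; names,
        # inputs and thresholds are illustrative assumptions only.

        def best_available_action(hours_until_attack: float,
                                  police_can_intervene: bool,
                                  protection_available: bool,
                                  killer_will_retry: bool) -> str:
            """Work through non-lethal options first; lethal defense is a last resort."""
            if hours_until_attack < 0.1:
                # No time to research other options: immediate defense of life.
                return "defend immediately"
            if police_can_intervene or protection_available:
                if not killer_will_retry:
                    return "use police / protection"
                # Preventive measures exist but will keep failing; options short of
                # permanently stopping the killer are exhausted.
                return "stop the killer permanently"
            return "stop the killer permanently"

        print(best_available_action(48.0, police_can_intervene=True,
                                    protection_available=True, killer_will_retry=False))
        # -> use police / protection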
  • A scientific mind as a source for moral choices
    Have you read “the Moral Landscape” by Sam Harris?DingoJones

    I have not, but I am familiar with his thinking. The problem is that he tries to expand the idea of the objective into areas that are questionable (he's also pushing the argument to favor his anti-Islam ideas, which seems to be his primary goal rather than creating a moral theory). And the heavy focus on neurological facts about well-being seems to demand that we fully understand the mind before his ideas can be applied, which we can't yet. So before we know everything about the mind, he can't really claim that science can conclude what is moral. That's why I'm trying to tackle this from another direction.

    I'm grounded more in the idea of epistemic responsibility as a foundation for morality: that choices should be made by the individual opening up their mind to a scientific method of questioning their choices before a choice is made, scrutinizing all options until the option and choice that makes the most sense in terms of well being can be decided.

    Take the trolley problem: five people against one. If you have 30 seconds to decide, you choose according to utilitarianism because that makes the most sense given the situation; even pushing the fat man does. But with more time you can ask: who is the one person weighed against the other five? Is that person of such importance to humanity that the better utilitarian choice is to let the five die so the one survives and goes on to do the deeds that make it so? It's a variable method of morality based on probability and on a very basic definition of well being.
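
    If it helps, the shift the extra time makes can be put into a toy calculation. This is only a sketch in Python with invented numbers and an invented weighting; it is not a claim about how the trolley case should actually be scored, only about how the answer can flip once you know more about the one person.

        # Toy version of the trolley comparison above; probabilities and the
        # weighting of the one person's importance are invented for illustration.

        def compare_options(p_one_is_critical: float) -> dict:
            """Compare pulling the lever (save five, lose one) with not pulling it,
            where the single person may matter greatly to others' well being."""
            # Assume, hypothetically, that a 'critical' person's survival preserves
            # ten further lives down the line.
            weight_of_one = 1 + 10 * p_one_is_critical
            return {
                "pull lever": 5.0,            # the five survive, the one is lost
                "don't pull": weight_of_one,  # the one survives, weighted by importance
            }

        print(compare_options(0.0))  # snap judgment: pulling the lever wins
        print(compare_options(0.8))  # with more information, the answer can flip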
  • A scientific mind as a source for moral choices
    Ok, so your central claim seems to be that what is good for the individual is what's good for the group as long as the good is defined as not doing harm to the body/mind. Is that correct?DingoJones

    That is the central claim of that part of the argument, yes: what is good for the individual will eventually also be good for the group. What is good for the machine is good for each of the cogs in that machine, and what is good for one cog will benefit the entirety of that machine. So a good moral choice is one that is good for both the individual and the group. The idea behind this part of the argument is that what is considered morally good can be calculated in a basic form in terms of harm/well being; the variation in how we know what is good is the real problem with morality, which is tackled in the other parts of the argument.
  • Joe Biden (+General Biden/Harris Administration)
    In a civilized society based on democracy, voting for stability and competence is a priority over partisan ideologies. Bad decisions made for ideological reasons, left or right, are still not worse than incompetence that creates nation-wide chaos.

    It doesn't matter what ideology or world-view someone has, no one benefits from Trump's incompetence and it doesn't matter who challenges him, as long as they are competent enough to keep stability. Under stability, we have the time and balance to question ideological ideas and debate specifics of politics, but the clusterfuck of incompetence negates that playing field.

    No one in their right mind, no one rational who is capable of deductive thinking would ever propose Trump to stay in power, or getting that power in the first place. His presidency was the result of a nihilistic narcissism, greed and boredom in the voters who voted for him. People who wanted to create chaos against others because of a jealousy towards the educated.

    The problem might be that there are only two choices in each election. When someone like Trump appears, the party that candidate belongs to needs to vote for that candidate in order to get power, even though the candidate is mentally incompetent to lead. So, on one hand you have the nihilistic people wanting to just create chaos, and on the other the Republicans, who have no choice but to vote Republican since they cannot choose a Democrat.

    It's fundamentally broken as a form of democracy, enforcing a demagogical result every time.

    It's interesting that any other occupation in the world requires education before it can be practiced, and sometimes, in the case of dangerous jobs, a specific license. But for the leadership of a nation, no such certification or license is needed, even though that occupation is one of the most dangerous we have. I would argue that the bar to which we hold politicians in leadership roles should be much higher. I would argue that while Plato was wrong in his philosopher-king argument, he was right in that leaders need to be philosophers.

    We need a philosopher-republic instead, in which the people who can be voted into power may only hold that position with the right competence in philosophy and leadership. That way, whoever is voted into power has a basic competence with which to maintain stability, and we would minimize the risk of chaos and incompetence in politics that is downright lethal to the people.
  • A scientific mind as a source for moral choices
    Well, I'm not sure how that would change that there are exceptions to your claim that haven't been accounted for. How exactly do you mean objectively valuable?DingoJones

    By objectively valuable I mean things that have to do not with preferences but with necessities: the value of things that reduce harm and suffering while increasing well being. A smoker values smoking, but it isn't objectively valuable to that person, since the smoking harms him. The objectively valuable thing is to stop smoking.

    p1 What is objectively valuable to humans is that which is beneficial to humanity.

    Things that do no harm and that increase well being are valuable in a way that is beneficial to humanity. For one is for all.

    Or maybe this premise needs to be phrased differently? Maybe the intention of the premise is weak due to its rhetoric? What might be a better premise expressing that what is good for one is good for all?
    Maybe I need to rephrase the entire argument for value-based morality?

    What about if there are two harms, smoking and stress? The smoking relieves the stress but harms the body; but so would the stress. In that case, the smoking is harmful to the body but it's also beneficial to the human.DingoJones

    In this example, I would argue that the long term is an important factor as well: the smoking relieves stress, but the harm isn't visible until later. If smoking directly caused instant harm, no one would use it to relieve stress; if someone got cancer after one cigarette taken to relieve the harm of stress, no one would smoke to relieve stress. Human ignorance is the only thing arguing that a cigarette is good for them. And what about yoga? Yoga has scientific support for relieving stress, so why choose a cigarette to battle stress when yoga has no side effects? By breaking down acts we can find, to the best of our ability and within the time available, what is most beneficial to humanity.

    From this, one can object that there are some things people feel are beneficial to them even if those things harm them. Meaning, a smoker just likes to suck smoke and would gladly trade a few years of their life to reap the benefit of that smoke rather than doing yoga. But that would not be beneficial to others: the people who need to deal with the consequences of this person's declining health or death, or the people affected by second-hand smoke.

    So they are related. In terms of one person, what is beneficial to them is not always something they agree with, but that still doesn't take away the fact that beneficial, in an objective sense, needs to be defined as not doing harm to body/mind. It's beneficial to be in good health, and doing something that has the consequence of putting you in bad health is not beneficial to you.

    p2 What is beneficial to a human is that which is of no harm to mind and body.

    The counter-argument has to prove that there are beneficial things that do harm to the mind and/or body. What things are good for us, short and long term, that are harmful to the mind and/or body?

    On a macro scale, what about decisions that benefit more people than they harm? Wouldn't any kind of utilitarian calculation be an exception to your rule?DingoJones

    This one is trickier, but I don't think it's really an exception. It can be argued as an extension of the argument, and the final conclusion I'm trying to build toward has to do with using a scientific mindset to calculate the best moral choice, where the intention to use the method leads to the most probable good moral choice. So in terms of utilitarianism, if you calculate case by case that killing one to save ten actually has merit, it is the good moral choice to make. The method is supposed to bypass both absolutism and utilitarianism, each being valid or invalid depending on the individual case. It's a form of epistemic responsibility: not being enslaved to broad moral concepts and/or teachings, but calculating each situation with a scientific-method mindset based on basic objective properties of benefit and harm. I guess it's a form of nonconsequentialism?

    It's more about the probability of good or bad than about the objectively good or bad: you calculate the probability of an outcome, choose for the probability of the most good, and in calculating and choosing that, you are acting with good morals.

    Maybe this moral theory needs another name. Something like Probabilitarianism (though that term belongs to another area of philosophy), or Moral Probabilitarianism?
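
    For what it's worth, here is a toy Python sketch of what that label might amount to computationally: choose the option with the highest probability-weighted well being minus harm. The option names and numbers are invented; the claim in the argument is about the act of calculating, not about these particular values.

        # Toy "probabilitarian" chooser: each option maps to possible outcomes as
        # (probability, well_being_gain, harm) triples; pick the highest expected value.
        from typing import Dict, List, Tuple

        Options = Dict[str, List[Tuple[float, float, float]]]

        def most_probable_good(options: Options) -> str:
            def expected_value(outcomes: List[Tuple[float, float, float]]) -> float:
                return sum(p * (gain - harm) for p, gain, harm in outcomes)
            return max(options, key=lambda name: expected_value(options[name]))

        example: Options = {
            "act A": [(0.7, 5.0, 1.0), (0.3, 0.0, 4.0)],   # big upside, real downside
            "act B": [(0.9, 2.5, 0.3), (0.1, 0.0, 1.0)],   # smaller, more reliable good
        }
        print(most_probable_good(example))  # act B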