It seems to me Pfhorrest is a meta-ethical relativist, as long as he thinks that everyone has, in fact, the same terminal goals such that rational argument can always in principle result in agreement. — bert1
But being open to seeing problems with the rules you live by and revising them as needed, as often and for as long as needed, is the exact opposite of following them blindly. — Pfhorrest
Honestly, I use meta-ethical relativism to say that moral positions don't have a truth value; they're not objectively true. — Judaka
I think that morality is a conflation of our biological proclivity for thinking in moral terms, the intellectual positions that we create, and the personal vs. social aspects of morality. Hence, people say "you need a basis for your intellectual position to be rational", but to me, morality is not based on rational thought. — Judaka
I don't believe a supercomputer A.I. could reach the moral positions that we do; I think it would really struggle to invent meaningful fundamental building blocks for morality, which for us just come from our biology. — Judaka
Morality is often just you being you, and the relativity of morality frames morality as being exactly that. You can be logical, but your base positions aren't logical; they're just you being you. Morality is not simply an intellectual position. My reasoning is based on feelings, which discount any possibility of objectivity, and my feelings aren't dependent on reasoning. — Judaka
Reasoning becomes a factor when we start to talk about the implications of my feelings. I may instinctively value loyalty, but we can create hypothetical scenarios which challenge how strong those feelings are. I may value loyalty, but we can create scenarios where my loyalty is causing me to make very bad decisions. That's the intellectual component of morality: interpretation, framing, decision-making and so on. I find all of this happens very organically regardless of your philosophical positions. Even for a normative relativist, I imagine it changes very little in how morality functions for that person. — Judaka
That much I understand. But, in the case where you are faced with a moral dilemma, don't you then run into a performative contradiction? In order to solve the dilemma, you employ reasoning, and that reasoning will, presumably, reject some answers. What is that rejection if not assigning a truth value? — Echarmion
But isn't it the case that, while you may intellectually realize that your basic moral assumptions, your moral axioms, are merely contingent, you are nevertheless employing them as objective norms when making your moral decisions? — Echarmion
I’m taking that to mean what I call “fideism”: holding some opinions to be beyond question. You’re taking it to mean what I call “liberalism”: tentatively holding opinions without first conclusively justifying them from the ground up. But the latter is fine; it’s no criticism of me to say I’m doing that, and I’m not criticizing anyone else for doing that. It’s only the former that’s a problem. — Pfhorrest
Because if reasons to question them come up, I will. Someone who does otherwise won't. That's the "blindly" part of "blindly follow": turning a blind eye towards reasons to think otherwise. — Pfhorrest
The rejected answers aren't being described as untrue; they're being judged in other ways. An emotional argument like "it is horrible to see someone suffering" for why you should not cause suffering may or may not be a logically correct argument; it is based on my assessment. — Judaka
Everything about my choice to call a thing moral or immoral is based on me: my feelings, my thoughts, my interpretations, my experiences. The conclusion is not a truth; the conclusion can be evaluated in any number of ways. Is it practical, pragmatic, fair? The options go on. For me, it is never about deciding what is or isn't true. — Judaka
As for A.I., I don't agree; intelligence doesn't require our perspective. I think it is precisely due to the lack of any other intelligent species that this is conceivable for people. It's much more complicated than being based on empathy; one of the biggest feelings morality is based on is fairness: even dogs are acutely aware of fairness, so it's not just an intellectual position. We are also a nonconfrontational species; people need to be trained to kill, not the other way around. All of these things play into how morality functions, and morality looks very different without them. An A.I. would not have these biases: it's not a social species that experiences jealousy, love, hate, or empathy, and it has no proclivity towards being nonconfrontational or seeing things as fair or unfair. — Judaka
As humans, we can go beyond mere instincts and intellectually debate morality, but that's superfluous to what morality is. Certainly, morality is not based on these intellectual debates or positions. I think people talk about morality as if they had come to all of their conclusions logically, but in fact, I think their conclusions would be very similar to what they would have been if they had barely thought about morality at all. One will be taught right from wrong in a similar way to lions and dogs.
Since morality isn't based on your intellectual positions, it doesn't really matter if your positions are even remotely coherent. You can justify that suffering is wrong because you had a dream about a turtle who told you so, and it doesn't matter: you'll be able to navigate when suffering is or isn't wrong as easily as anyone else. The complexity comes not from morality but from interpretation, characterisation, framing, knowledge, implications and so on. — Judaka
Doesn't the ability to evaluate anything in any way require assigning truth values? Even the question "do I feel that this solution is fair" requires there to be an answer that is either true or false. — Echarmion
How do you suppose an A.I. would gain consciousness without human input? — Echarmion
No, I just “have terminal goals” (i.e. take morality to be something*) that involves the suffering and enjoyment, pleasure and pain, of all people. — Pfhorrest
Whether or not other people actually care to try to realize that end is irrelevant to whether that end is right. Some people may not care about others’ suffering, for instance; that just means they’re morally wrong, that their choices will not factor in relevant details about what the best (i.e. morally correct) choice is.
*One’s terminal goals and what one takes morality to be are the same thing, just like the criteria by which we decide what to believe and what we take reality to be are the same thing. To have something as a goal, to intend that it be so, just is to think it is good, or moral, just like to believe something just is to think it is true, or real.
Do you think they have made a mistake such that they could be reasoned with? — bert1