If two people had differing subjective experiences of red, whether they liked it, whether they didn't, we wouldn't say that means that red itself has no objective basis. The confusion MoK has is he thinks that a debate over liking or not liking things means there's no underlying objective notion of morality that transcends simply like and dislike. — Philosophim
Perhaps I'm not the best one to take this up, given my anti-realist stance on color, but I don't think this is really doing a lot.
If two people experience the wavelength you're talking about as different things, then the 'object' is not redness, but a wavelength of light. It is wholly subjective, between those two, what 'redness' is (under some constraints, for sure). Maybe I'm not getting what you're saying here..
"In my experience," — Philosophim
That's always fair.
Interestingly, I've never seen anyone seriously put forward either argument you make. The main motivator for the claim seems to be more an atheistic type of thinking. A thought akin to 'No one has ever provided a reasonable account of an objective morality which isn't imposed from without, and so we are free to reject the claim that there is one'. Is that a bit better for you? I mean, it doesn't align with your experience, but take it just as a response to the egoic type of charge..
I truly have not found a good and unbiased rational argument which leads to morality only being a subjective outcome. — Philosophim
I think you are reversing the onus, then. The claim to objective morality must be proved, not the rejection of the claim, surely? Proving a negative (which this amounts to) can obviously be done, but in this case it would require exhausting every possibility within our Universe before reaching a conclusion... surely, that's the less rational requirement. I think your position is fine, no issue, but impugning others on the basis that you require proof of a negative doesn't seem all that... good?
reason to pursue objective morality — Philosophim
This, I can accept. There is always good reason to 'align' or 'unify'.
I have to say, your reasons don't appear to be reasons, but interpretations that would support an emotional attachment to objective morality.
;) ;)
First, as I mentioned earlier ignorance is not bliss. It is powerlessness. The handling of ignorance results in superstitions and emotional decisions. Anytime we can replace this with rational thought, we as a species gain power to understand ourselves, the world, and make smart decisions that help us navigate through it better. — Philosophim
What is the reason here? You'd have to already accept an objective morality for 'ignorance' to even come up here, right? So, I can't see how this supports the point - just the activity of 'sussing out' morality generally. Which I agree with, fully.
Only an objective morality can ensure that AI develops rightly and co-exists peacefully with the rest of life on Earth. — Philosophim
This paragraph sounds like pure fear to me, and not a rational argument in any sense of the word. It's a practical argument to avoid what you foresee as a negative consequence of a technology. And sure, for programming, an objective morality is best. This, however, smacks of exactly my issue: there is no rational basis for the claim from within. Here, we, the people, are imposing "a morality" on the AI which we want to constrain. We're playing God. We, the people, don't have this constraint... Unless that's what you want to posit? Not wild - just one I reject on lack-of-evidence grounds. I understand the concerns around AI - I grew up with T2 lol - but I don't think fearing a possible outcome of a technology has anything to do with the metaethics of our universe.
Are you able to outline a positive argument which would evidence an objective morality? I don't think you've done so. The three things I can see you've used in support here are:
- Patterns of behaviour (this one is unclear, as your first paragraph doesn't do what it says it will, so I refrain from commenting further);
- An assumption that objective morality exists and gives rise to ignorance (which you reject - fairly, on its face); and
- A fear of an unconstrained AI.
I can't see an answer to why you think there is an objective morality - only an answer to why you think it would be good to have one.