• Two concepts of 'Goodness'
    I'm very much in agreement with your distinction between methodological vs. evaluative disagreements.

    A classic disagreement over values is the egalitarian vs. libertarian one. If A values individual liberty more than material well-being and B has the opposite values, they don't share the same goals, and no amount of logical analysis can reconcile their aspirations. The only available tool is rhetoric, whereby each side tries to win over voters to their values rather than those of their opponent. (andrewk)

    I wonder how you understand and regard 'rhetoric' in this context. Clearly, at a certain point evaluative arguments cannot be advanced through means-ends reasoning (rationality), since it is the ends themselves that are in question. But I think there is perhaps deeper scope for at least quasi-rational discourse that reforms one's sympathies and evaluative judgements (in one direction or reciprocally). What I mean is that one party can awaken sensitivity to certain values in the other by getting them to entertain ideas, scenarios, the positions of others, and so on. This 'awakening' may involve either the 'creation' or the 'discovery' of values, though I tend to think the lines here are blurry (at what point do dispositions to feel or act a certain way become reliable enough, and manifest enough, to count as properly held values?). In this way, I think suitably engaged interlocutors can achieve a lot of convergence. I've always been on the optimistic/sentimental side when it comes to such disagreement.

    But this all leaves the main question I raised unaddressed: insofar as evaluative/ethical cognitive disagreement remains, does a second-order concept ('good' or similar) come into play, and what does it consist in?
  • Two concepts of 'Goodness'
    I can't tell what half of that is saying. Need an example. (zookeeper)

    Okay. Say I'm an anti-natalist environmentalist. I assert that you shouldn't have children and that it is good to remain childless. I have a vague (not explicitly articulated) theory about why this is so, including utilitarian intuitions and principles of moral equality among all animals. My first-order conception of what is good excludes having children.

    You assert that having children is good. You have a vague theory about why this is so, appealing to individual rights or a duty to populate the nation. Your first-order conception of what is good includes having children.

    When we enter into dialogue, if we are using 'good' in these contrasting ways, it seems to me that there can be no genuine disagreement. In using that word, we are referring to different things. Instead, I am suggesting that, whether we realize it or not, we are referring to something else, something at a 'higher level' of abstraction: something like 'whatever really is good (and, by the way, my conception is the right one)'*. That way, we are both talking about the same thing, and one of us will be right (that having children is good, or that it is not).

    * That suggests another, slightly different attempt to articulate the second-order concept of good: the moral opinion that we would converge on if we were fully rational and informed, or perhaps that we will converge on at the limit of inquiry (as Peirce might have said). But again, that might be too substantive. I could deny that such convergence is what constitutes goodness, and so we would again fail to connect.
  • Two concepts of 'Goodness'
    Terrapin Station, you're quite right that a non-cognitivist can easily explain how disagreement works.

    If the problem I outline for cognitivism is genuine, and primitivism isn't satisfactory for some reason, then you have a further argument here for non-cognitivism.
  • Ethical postulates are in essence synthetic a priori truths.
    "Therefore, not letting passions dictate all aspects of behavior, thought processes, and action will be beneficial in reducing unhappiness, and therefore producing happiness."

    The argument you sketch for this conclusion relies on synthetic a posteriori premises. Moreover, the conclusion itself lacks prescriptive force until you introduce a further premise such as 'pursue actions or strategies which maximise happiness', which prima facie isn't synthetic at all.