• Kenosha Kid
    3.2k
    The "killer blow" is that you have excluded 1% of the population from consideration for being "qualitatively different": "Psychopaths are not outliers, they are qualitatively different." This means that 100% of the population under consideration are capable of practically follow the rule, making the rule categorically objective and not statistical.Luke

    If I consider a population of 99 fully-functioning social humans and one psychopath, 99% of them are moral agents, not 100%. That is, if I, as a fully-functioning social human (says I!) were to attest a rule that one should behave reciprocally, knowing that 1% of the population cannot do this, I can only expect a maximum of 99% to follow that rule, not 100%. I think you've gone off on a mental tangent that might make sense to you, but has nothing to do with the OP.
  • Luke
    2.6k
    If I consider a population of 99 fully-functioning social humans and one psychopath, 99% of them are moral agents, not 100%. That is, if I, as a fully-functioning social human (says I!) were to attest a rule that one should behave reciprocally, knowing that 1% of the population cannot do this, I can only expect a maximum of 99% to follow that rule, not 100%.Kenosha Kid

    100% of moral agents should behave reciprocally. That's the point. That's why it's objective and not statistical.
  • Kenosha Kid
    3.2k
    100% of moral agents should behave reciprocally. That's the point.Luke

    No, it's beside the point. This argument of yours is like responding "You're wrong, all bananas are curved" to the statement "Not all fruit is curved".

    But, as the OP goes on to say, "100% of moral agents should behave reciprocally" is also wrong, on feasibility grounds.
  • Luke
    2.6k
    No, it's beside the point. This argument of yours is like responding "You're wrong, all bananas are curved" to the statement "Not all fruit is curved".Kenosha Kid

    Not at all. You said in the OP:

    Even the nearest to a fundamental rule -- do not be a hypocrite -- is not objective but statistical: there exist many for whom this is a practical impossibility because they lack empathy.Kenosha Kid

    The fundamental rule can apply only to moral agents. If psychopaths are excluded from being moral agents, then the fundamental rule cannot apply to them. Moral rules can only apply to moral agents. I don't see the relevance of statistics among moral agents.

    This argument of mine is like responding "You're wrong, this moral rule applies to all moral agents" to the statement "This moral rule doesn't apply to all agents (both moral and amoral)".
  • fdrake
    6.5k
    I don't usually write about morality, so my views on it aren't well travelled ground for me. I apologise for messiness.

    This might be a good time to ask, if I haven't already: what is the difference between "x is objectively true in context A", "y is objectively true in context B" and "the truth of x and y are relative: true/false in A, false/true in B", since clearly a relationship exists between x and A and between y and B? (x, y here may be inequalities.)Kenosha Kid

    I'm gonna put on my analytic philosophy hat.

    The difference is whether it's correct to say "X is better than Y" when in context C. Compare:

    "X is better than Y" is true in context C.
    To
    "X is better than Y is true in context C"

    The first thing takes statements, indexes them to a context, and evaluates them as true or false. Effectively asking whether "X is better than Y" evaluates to true in possible world C, where C is a possible context of evaluating a moral judgement. Corresponding to the question: "Does blah evaluate as true here?"

    The second thing takes statements indexed to context, with evaluations of true and false, and... simply embeds them in a metalanguage. Corresponding to the inverse question: "Is there a world in which this evaluates as (false/true)?".

    I think for you, in order for a moral claim to be objective, it has to be true in all contexts. If there is a context in which a moral claim is not true, it is not objective.

    That space of all possible contexts has to be generated in some manner; what constraints are placed upon imagining a possible context of moral evaluation of a statement? It's a very flexible notion. If the sense of possibility were logical possibility (what can be imagined without a contradiction), then it's clear that there are no objective moral truths in the above sense, since imagining a world where punching babies to death is morally obligatory entails no internal contradiction. That sense of logical possibility does not reflect how we reason morally, however, since when someone evaluates whether something is right or wrong, or whether they can improve on their conduct, they don't imagine an arbitrary possible world; they imagine a world sufficiently similar to this one. Sufficiently similar insofar as the world we are changing our conduct to shares a context of facts around the claim (it will concern the same actions and people, so the ontology in the possible world has to make sense) and it also shares sufficient similarity with the current context of moral evaluation.

    So it seems to me that, in order to describe our moral-evaluative conduct adequately, there needs to be a constraint placed upon the sense of possibility that connects contexts of evaluation and makes us revise our conduct - revision being a transition to a near possible world.

    That sense of nearness brings in ideas of connection of moral-evaluative conduct; it may be that some possible worlds (moral-evaluative conduct) are unreachable from our current one; whereas they are reachable under mere logical possibility. If some aspect of human being curtails our moral-evaluative contexts to ones sufficiently similar to our current ones, there will be the question of whether these aspects block transition to evaluative contexts in which an arbitrary moral judgement is false. Conversely, it may be that our moral contexts all evaluate some claim as true. I.e., sufficient similarity of moral evaluative contexts engenders the possibility that enough is shared to allow there to be some context-invariant moral truths. Effectively true by "fiat" of our human nature.
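
    Roughly, and only as a sketch (the notation below is shorthand I'm introducing here, nothing load-bearing): write $c_0$ for our actual context of evaluation, $R$ for the "sufficiently similar" relation between contexts, and $c \models \varphi$ for "the moral claim $\varphi$ evaluates as true at context $c$". Then the two senses of objectivity come apart as:

    $\mathrm{Objective}_{L}(\varphi) \iff c \models \varphi \text{ for every logically possible context } c$
    $\mathrm{Objective}_{R}(\varphi) \iff c \models \varphi \text{ for every context } c \text{ such that } c_0 \mathbin{R} c$

    The baby-punching world above defeats the first but not necessarily the second; context-invariant moral truths, in this sense, are claims that hold at every $R$-reachable context, a much smaller space than the logically possible ones.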

    Given that the generation of moral principles is constrained by aspects of human nature, I would not like to rule this out. If we have commonalities in the generation of moral principles, we should not forget those when connecting up contexts of evaluation of moral claims.

    I need to check I follow you correctly. The perceived additional context-dependence is that just because X > Y, it doesn't follow that Y is bad, right? Because obviously X > Y itself is not more contextualised than "Y is bad" or "X is good", just more forgiving of the less preferred element of that context. The extent to which this can be any more objective, even if forgiving, still raises the same question: if culture A prefers X to Y and culture B prefers Y to X, and both cultures are self-consistently social within themselves, who is to validate that X > Y?Kenosha Kid

    I agree that talking about "X>Y is true" is much the same as talking about "X is true" when abstracting away from how we actually reason morally; they're statements which may be evaluated as true or false depending upon the context in the same way. I think comparisons highlight an aspect of moral conduct that is not well captured by a sense of modality (connection of possible worlds) that mirrors logical possibility. Comparisons include our ability to revise our conduct; do this because it's better than that, don't do this wrong thing any more. The daily contexts of moral dilemma are, in my experience, much more similar to this than "What I did was right!" and "What I did was wrong!"; aligning growth of character and moral wisdom with re-evaluating what we believe is right and wrong.

    Growth paints a picture that aligns moral conduct not with the evaluation mechanism of moral claims over all possible evaluative contexts, but with how one transitions between them.

    So there are a few aspects to what I'm trying to say:

    (1) Contexts of evaluation for moral claims have to be connected in a manner that reflects how they are connected IRL, and logical possibility will not do.
    (2) Transitioning between contexts of evaluation is aligned with moral wisdom, and the conditions under which we transition are informed by the world's non-moral facts.
    (3) (From previous posts) changing your mind about what you should do is in part a modelling exercise - it requires you know the situation you're in and what its effective/salient vectors of change are.
    (4) (From previous posts) The modelling exercise component is consistent with cognitive mediation of sentiment in the production of evaluation. The causal sequence goes (affect+cognition)-> evaluation, rather than affect->cognition->evaluation.

    I have intuitions that the (affect+cognition) being treated as a unit places constraints upon the sense of connection of moral evaluative contexts; they have to be "sufficiently similar", in a similar manner to how people imagine a semantics for counterfactuals by imagining the "nearest possible world". We have to hold a background fixed in which we evaluate things, and most of that background is non-moral facts. The strict distinction between descriptive and normative is also quite undermined (replaced with a weighting) by undermining the distinction between cognition and affect; facts come with feelings and norms, norms come with feelings and facts and so on.
  • creativesoul
    11.9k


    Very nice.

    I appreciated that input immediately upon reading it. It felt right. It made sense. It did not pose any issues of incoherence and/or self-contradiction.
  • Kenosha Kid
    3.2k
    The fundamental rule can apply only to moral agents.Luke

    That's not making a different point. Me: "50% of the fruit is apples." You: "Actually, 100% of the apples are." ???
  • Kenosha Kid
    3.2k
    I apologise for messiness.fdrake

    Can't be any worse than mine. :)

    That sense of nearness brings in ideas of connection of moral-evaluative conduct; it may be that some possible worlds (moral-evaluative conduct) are unreachable from our current one; whereas they are reachable under mere logical possibility. If some aspect of human being curtails our moral-evaluative contexts to ones sufficiently similar to our current ones, there will be the question of whether these aspects block transition to evaluative contexts in which some privileged domain of moral statements are false.fdrake

    I'd turn this around and say: isn't it simpler to postulate individual morality from common natural history and more or less arbitrary social history than worry about why and whether there are objective values for contexts that are possible but never realised? Especially given that that natural and social history is already extremely contextualised, removing the need to postulate an effective infinity of contingency-chaining variants of the same moral questions. That's the headscratcher of moral objectivity for me.

    The daily contexts of moral dilemma are, in my experience, much more similar to this than "What I did was right!" and "What I did was wrong!"; aligning growth of character and moral wisdom with re-evaluating what we believe is right and wrong.fdrake

    Would you agree a) that this makes perfect sense if we evolved an amenability to be socialised by a single culture throughout our lifetimes, b) that this will likely have influenced the common view that moral questions, however contingent, have single-valued answers in terms of absolute magnitudes or metrics, and c) that this common view might not have been thoroughly questioned by traditional moral philosophers?

    The modelling exercise component is consistent with cognitive mediation of sentiment in the production of evaluation. The causal sequence goes (affect+cognition)-> evaluation, rather than affect->cognition->evaluation.fdrake

    I'd put it more like affect plus optional cognition -> evaluation. Which is more or less in keeping with how Kahneman describes S2: an optional process that has less input than it makes us believe, but, when it does useful things, is brilliant!

    An emotional stimulus resonant enough to change my position is also likely to be associated with that emotion thereafter. It's not a moral example, but I never particularly liked pigs. Then one night I had a very emotive dream about a pet pig. Now I love pigs! Point being, I never rationally concluded that pigs were great. I didn't "change my mind", except in a literal sense. I was conscious of all of the data, but reason didn't effect or affect the outcome. Most of my recent moral epiphanies seem very similar: a strong emotional reaction to some stimulus, with which similar stimuli later resonate, followed by post hoc rationalisation. But yes, sometimes you just gotta work it.

    The strict distinction between descriptive and normative is also quite undermined (replaced with a weighting) by undermining the distinction between cognition and affect; facts come with feelings and norms, norms come with feelings and facts and so on.fdrake

    I think this is a good way of putting it. The argument about how much is rational, how much associative, how much genetic is less important than accounting for it all.
  • Luke
    2.6k
    That's not making a different point. Me: "50% of the fruit is apples." You: "Actually, 100% of the apples are." ???Kenosha Kid

    Since the retraction of your OP statement that those with little to no empathy (again, what's the cutoff and how is it decided?) deserve their own moral frame of reference, I take issue with your remaining claim that the fundamental rule against hypocrisy applies only statistically. In response to my initial post on this matter, you described such people as morally equivalent to chairs, buckets of water, and letter sequences. Do you also include those objects in your statistics?

    Edit: If the rule against hypocrisy is statistical, then you need to explain what counts or doesn’t count as being a moral agent. Is it having a level of 0% empathy or more than 0%? How is that cutoff level decided and how is that empathy level measured? In short, if it’s statistical, then I think you need to better justify the exclusion of some agents from having to follow the rule.
  • fdrake
    6.5k
    I'd turn this around and say: isn't it simpler to postulate individual morality from common natural history and more or less arbitrary social history than worry about why and whether there are objective values for contexts that are possible but never realised? Especially given that that natural and social history is already extremely contextualised, removing the need to postulate an effective infinity of contingency-chaining variants of the same moral questions. That's the headscratcher of moral objectivity for me.Kenosha Kid

    I don't know; it might be both our confirmation biases talking. I look at the kind of evidence you posted in your OP (survivability strategies being selected for, the human body constraining the space of moral values we can have) and draw the opposite conclusion. The contingent facts of our nature constrain the generation of moral values. If we happen to share a social context, we will evaluate similarly, or at least negotiably, or be able to conflict over it. If that context is stable, informed by the needs and functions of human bodies relative to a shared social condition, we don't get to arbitrarily vary the context to produce a defeater. I think where you see arbitrariness, I see contingent and contextualised moral truths.

    I think if I grant the "varying the context" procedure you're doing, it all gets arbitrary. I just think that we can't vary the context arbitrarily here and now. With your abortion example, we both live in the same possible world ontologically and there is political conflict between anti-abortion and pro-choice. What I believe is inappropriate is treating each of these as separate contexts of moral evaluation since there is contact between them - political conflict and "conversions" one way or the other even.

    That comes down to how we're allowed to connect contexts of evaluation to each other as a network of possible worlds. Your procedure is close to logical possibility; I think mine is closer to causal contact, with the added condition that the moral furniture of the world comes along with its societal norms and non-moral facts (we can have disputes over food because we need food). Logical possibility lets you vary the content of the possible worlds way more than seems appropriate to me.

    That, plus the negotiation between cognition and affect and norm, places a constraint, I think, on the kinds of connectivity we can posit between these moral evaluative contexts. Negotiation? Social structure change? Law? These can be varied arbitrarily in your framework, but we happen to share them. Commensurability vs incommensurability of conceptual schemes may be a relevant contrast, if you're aware of the debate. I'm siding with commensurability (political conflict and conversions between moral evaluative systems); I think you have to side with incommensurability to furnish the individuation of moral evaluative contexts in the production of these defeaters.

    An emotional stimulus resonant enough to change my position is also likely to be associated with that emotion thereafter. It's not a moral example, but I never particularly liked pigs. Then one night I had a very emotive dream about a pet pig. Now I love pigs! Point being, I never rationally concluded that pigs were great. I didn't "change my mind", except in a literal sense. I was conscious of all of the data, but reason didn't effect or affect the outcome. Most of my recent moral epiphanies seem very similar: a strong emotional reaction to some stimulus, with which similar stimuli later resonate, followed by post hoc rationalisation. But yes, sometimes you just gotta work it.Kenosha Kid

    Eh, as much as emotion is disruptive of moral frameworks (e.g. genitals vs God, genitals always win), reason re-stitches them - propagating an insight is a cognitively involved process.
  • Deleteduserrc
    2.8k
    An emotional stimulus resonant enough to change my position is also likely to be associated with that emotion thereafter. It's not a moral example, but I never particularly liked pigs. Then one night I had a very emotive dream about a pet pig. Now I love pigs! Point being, I never rationally concluded that pigs were great. I didn't "change my mind", except in a literal sense. I was conscious of all of the data, but reason didn't effect or affect the outcome. Most of my recent moral epiphanies seem very similar: a strong emotional reaction to some stimulus, with which similar stimuli later resonate, followed by post hoc rationalisation. But yes, sometimes you just gotta work it.Kenosha Kid

    Empirical moral datum (a ‘report’):

    I've found that my moral epiphanies are both (1) very emotional (literally epiphanic, 'revealed') and (2) usually precipitated out of saw-toothed reason-traps, slowly accreted over time. A pattern that seems to repeat: my present way of living and thinking about how I'm living (my intellectual and practical moral habitus) tends progressively toward some sort of insuperable, double-bind block. There's no way to progress. Lassitude, despair --- & then suddenly some sort of shift. I find that though this shift is often instantaneous and seemingly discontinuous, on reflection it seems like a leap from an object level to a metalevel, where the object level had to run its course, wheeling itself into the paralyzing muck, in order to become receptive to the flash that (discontinuously establishing a greater continuity) connects it to a metalevel perspective (of course itself destined to become a future object-level double-bind).

    The epiphany has to come as epiphany, but it's sort of like a flash connecting two levels. It may be that at some absolute level the 'flash' exceeds any reason-sculpted context, but, as flash translates into new habitus, I find that I don't discard 'my dislike for pigs' in favor of 'a like for pigs' but rather incorporate each part. (Not quite true. In actuality, there's usually a foggy period in between, where I turn my arsenal of moral fury toward the last stage (often through projective online arguing). This seems to have some purgative function. After that stage completes itself, the fog lifts, and I can see the continuity more clearly... and then feel morally bad about having turned that arsenal on my past self.)

    Edit: the above is really just a gloss on what fdrake said about reason's 'restitching’

    Edit2: I also find there’s another layer, outside this cyclical progression, which seems like a continuous, waxing understanding of the affective/cognitive states that correspond to each stage. Like: a better ability to catch the wind by putting up the right sails, or, on the other hand, recognizing when to batten the hatches and ride out a storm. It’s neither object nor meta level; I don’t really know how to characterize this layer.
  • Kenosha Kid
    3.2k
    Eh, as much as emotion is disruptive of moral frameworks (e.g. genitals vs God, genitals always win), reason re-stitches them - propagating an insight is a cognitively involved process.fdrake

    Sorry, I went down a rabbit hole for a couple of months. I've produced three entire albums in the interim, with two more on the go. Insert caterpillar cliche here, heavily dosed with apologies for apparent rudeness.

    Edit: the above is really just a gloss on what fdrake said about reason's 'restitching’csalisbury

    My suspicion is that this is really just the post hoc rationalisation I referred to, phrased in a way that is extremely generous toward reason. I'm not saying reason is uninvolved, just that it is more often after the fact and its importance is wildly exaggerated. It is a story we tell to ourselves and then to others to derive a position we already hold for entirely separable reasons. I think this is a very human compulsion, entirely unavoidable in fact, but it's helpful to me to understand that, had my experiences pushed me toward a diametrically opposed moral position, I would likely rationalise that position with equal fervour. The reasoning, then, is far less important than the experience in terms of explanatory power.