function of social contexts comes along with a moral component - of commitments, responsibilities, duties, pledges, plans and attempts to change toward better functioning. — fdrake
Again, I don't see the determinism here that any kind of moral realism or universalism would require.
To me there are two things going on here. First, there's the question of what is/isn't morally good. For a large number of questions I think there's a right answer. It's a linguistic question, no different to asking "what is the correct way to use the term 'morally good'?". In the proper Wittgensteinian sense the answer is not clear cut, it's fuzzy at the edges, but this fuzziness can never be resolved. Likewise with social contexts. When the grocer delivers potatoes, you 'ought' to pay him because that's the meaning of the word 'ought'. It means 'that action which the social context places an imperative on you to do'. So if someone were to say "When the grocer delivers my potatoes I ought to punch him in the face" they'd be wrong. That's not what 'ought' means.
The rights and wrongs of question 1 can be resolved by studying language use. Similar to this is another type 1 question, about determining one's next actions: "what should I do next?". Here it's obviously not about the meaning of 'should', because language need not be involved. As you know, I advocate the active inference model of mental activity, so for me there are only inputs, predictions, and resolutions. The inputs here might be social information, or they might be internally generated. In each case the task is to model the cause of some collection of affective states: "why am I feeling this way?". The grocer delivers his potatoes and I feel an urge to pay him (or in some other way resolve this indebtedness). The best model for that is moral obligation: we pay him to test this prediction, he goes away smiling, all is well, we've resolved the uncertainty. Likewise with empathy, a desire to cooperate, etc.
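(If it helps, here's a toy sketch in Python of that predict-act-resolve loop. The candidate models, weights, and observations are entirely made up - it's just to make the shape of the loop concrete, not a claim about how active inference is actually implemented.)

```python
# Toy sketch of the inputs -> prediction -> resolution loop described above.
# All names and numbers are illustrative, not an implementation claim.

def best_model(affective_state):
    """Pick the candidate model that best explains a felt state."""
    # Hypothetical explanations for "why am I feeling this way?"
    candidates = {
        "moral obligation": 0.8,  # "I feel this urge because I owe him"
        "fear": 0.1,
        "generosity": 0.1,
    }
    return max(candidates, key=candidates.get)

def act_to_test(model):
    """Act in the way the chosen model predicts will resolve the state."""
    return "pay the grocer" if model == "moral obligation" else "do nothing"

def resolved(action, observation):
    """Uncertainty is resolved when the observation matches the prediction."""
    return action == "pay the grocer" and observation == "grocer smiles and leaves"

# The grocer delivers his potatoes; I feel an urge to settle the debt.
model = best_model("urge to pay")
action = act_to_test(model)
print(resolved(action, "grocer smiles and leaves"))  # True: prediction confirmed
```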
So with you and your partner, you have this conversation, and it results in a series of affective states in your physiology, one of which is this desire to act in accordance with the spirit of the agreement made. You model this, resolve it by acting in that spirit, and (hopefully) get the expected result.
But...
These are all type 1 questions - what to do, what can be said. The second type of question, which often gets conflated with the first, is this: why, when we ask, does everyone else come up with a similar/different answer in the same context? We can ask this of models about the physical world, morality, logic, aesthetics...
With models about the physical world, the best answer is 'there's an external reality'. That's why I think that dropping my keys will cause them to land on the floor, and so does everyone else, because we're all interacting with the same external world which has patterns and rules.
Asking this question of morality is where questions of moral realism come in. The how of making moral decisions is not via any meta-ethic. We can show this with fMRI scanning: we definitely do not need to consult areas of the brain responsible for things like meta-modelling in order to make moral-type decisions. There does seem to be some similarity in some moral decisions, but there's also a lot of dissimilarity, so there's an interesting question as to what causes this. My preferred answer is long and complicated, because I tend to think morality is a messy combination of numerous, often conflicting, models. The point is, though, that whatever model we come up with to explain the similarities/dissimilarities, it has no normative force, for exactly the reason you gave.
If we look in an entirely external realm to social contexts for a validation procedure for our moral conduct, we're no longer attending to the nature of moral conduct. — fdrake
The objectivism being discussed here is an attempt to take a model of why there are similarities and dissimilarities, and then treat that model as if it were the source of the moral imperatives we're investigating in our second-order question. It speculates that the similarities exist because there's an objective universal 'ought' among us, and that the dissimilarities are the result of inadequate thought given to accounting for other people's 'oughts'. It does this with absolutely no evidence whatsoever, but that's another matter. The important thing is that it then treats this model as if it were the source of the moral imperative it was originally collating: if the model predicts that your 'ought' is one of the dissimilarities, then your 'ought' is wrong. We know the dangers of treating outliers as errors just because they don't fit the model.
So basically, I agree with you completely that "looking to an entirely external realm from social conduct for a validation procedure for our actions actually does violence to the very intelligibility of moral conduct". Meta-ethical models cannot tell us what is right and what is wrong, nor even how to work that out, because meta-ethical models are outside of the social context within which morality makes sense. That's why I'm so opposed to them.
I think that underdetermination is radically anti-authoritarian, no? A social fact might engender that a person or institution acts in some way, but by itself it does not make that act satisfy any criteria other than those included within the behavioural commitments of the person or institution involved in the act. — fdrake
Exactly. To be anti-authoritarian it needs to remain underdetermined. The opposite of the 'we can work out what is morally right/wrong in every case' project. Moral 'oughts', as they actually exist in the wild, are complex, but they always take the form of parameters, never pointers. One cannot go on becoming 'less wrong' indefinitely. One eventually reaches a point where one is simply no longer wrong, and everything within that category is equally 'not wrong'.