Well, most moral arguments in philosophy use thought experiments to try to tease out our intuitions, which is to say, the feelings and thoughts we have about what's right or wrong in any given scenario.
When we do this, we find the usual bell-curve distribution - a good deal of agreement, with some disagreement at the fringes. But that's exactly what you'd expect if morality is largely driven in the first instance by our genes, which select first of all for kin altruism, more vaguely for a kind of racial feeling (genetic closeness/distance), and more vaguely still for general sociability and like-mindedness (elective affinity, friendship). But our genes also throw up slight variations in strategy there, and there's even more "play" at higher, more abstract levels.
Effectively, the situation with morality is that there are a billion and one possible
objective moralities. Take anything - a grain of sand, a turd, a person, a robot, the human race, tigers - and the world instantly crystallizes into
objectively good and bad possibilities from the point of view of the existence of that thing. So we as human beings, rational animals, have a certain point of view built in - we (on average, as a rule) act first of all from the point of view of reproductive fitness, the transmission of our genes, which we act to further willy-nilly, because human beings in the past who didn't have that
intuition didn't survive to reproduce. And then, built on top of that foundation, we also have certain intuitions at the more meta level of what's good for social groups (families, kin, clans) that have reproducing members, and so on. No two individuals will have
exactly the same set of moral intuitions in this way, but there's a good deal of overlap and agreement (if there weren't, the human world wouldn't work
at all, communities couldn't form, etc.). And then on top of that we have even more rarefied, abstract considerations about social structures (this is where the social constructionist aspect comes in - although our genetics form a tether, we do have
some freedom to try out possible social rules).
But separate from this issue is the fact that you can search the logical space of possible social rules for some "tree" of consistent morality that maps onto any one of those billion possible "what's good/bad for x" perspectives.
So effectively, what we are doing in society, and what we are doing in philosophy, is looking for a good, consistent set or structure of moral rules in possible-social-rule-space - one that maps onto the largest average consistent set of moral intuitions that most people have. And that logical tree we're looking for is an objective morality, built around a basket of closely related desiderata that ultimately fall out of the basic requirements for human existence - reproduction, co-operation, fulfillment, pleasure, etc. (Here we could also think of Maslow's hierarchy of needs, which is closely related to the way moral goals are built on top of each other, all ultimately on the foundation of genetic fitness and reproduction.) But that purely logical structure
also coincides with our inbuilt feelings and intuitions - it has to, otherwise we'll reject it and it will never "stick."
In this way, our intuitions
tend to fit together coherently (but not perfectly), and we're also constantly trying (especially with the advent of new technologies, which open up new possibilities for things that can be good or bad) to set up a kind of feedback from our logical explorations back to our intuitions, trying to make our moral systems ever clearer and more coherent - the process works in tandem, back and forth between intuitions/feelings and objective conditional moral logic (if ... then you should).
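To make that search-plus-feedback picture a bit more concrete, here is a toy sketch in Python - purely illustrative, with made-up scenarios, numbers, and names (SCENARIOS, fit, candidate_rule_sets are all inventions for the example, not anything from the argument itself). It scores candidate "rule sets" against a population's intuitions, picks the best-fitting one, and then nudges the intuitions slightly toward it on the next round, which is roughly the back-and-forth described above.

```python
import random
import statistics

# Toy "scenarios" and a small population of moral intuitions about them.
# An intuition maps each scenario to +1 (feels right) or -1 (feels wrong).
SCENARIOS = ["break a promise to help a stranger", "lie to protect kin",
             "punish a free-rider", "take more than your share"]

def random_intuition():
    return {s: random.choice([1, -1]) for s in SCENARIOS}

population = [random_intuition() for _ in range(50)]

# Candidate "rule sets": each is just a fixed verdict on every scenario,
# standing in for a point in possible-social-rule-space.
candidate_rule_sets = [
    {s: random.choice([1, -1]) for s in SCENARIOS} for _ in range(20)
]

def fit(rules, population):
    """Average agreement between a rule set and everyone's intuitions."""
    agreements = [
        statistics.mean(1 if rules[s] == person[s] else 0 for s in SCENARIOS)
        for person in population
    ]
    return statistics.mean(agreements)

for generation in range(5):
    # Search rule-space for the set that best matches the pooled intuitions...
    best = max(candidate_rule_sets, key=lambda r: fit(r, population))
    print(f"round {generation}: best fit = {fit(best, population):.2f}")

    # ...then feed back: each person occasionally revises one intuition
    # toward the winning rule set (the "back and forth" described above).
    for person in population:
        s = random.choice(SCENARIOS)
        if random.random() < 0.3:
            person[s] = best[s]
```

Run it and the fit score tends to creep upward round by round: the rule set and the intuitions converge on each other, which is the only point the toy model is meant to make.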