On utilitarianism Now, disregarding the above and assuming that utilitarianism is what philosophy ought to be, isn't the problem now to create a calculus that could determine the optimal utility for all people (the greatest-good principle)? Is this something that will be possible in the future, or another hopeless dream? — Posty McPostface
I used to be a consequentialist, and I still have some leanings toward it. Consequentialist theories like utilitarianism are seductive because they aim to make the best possible world in terms of good. It's hard to argue why we ought not to do that.
Utilitarianism, historically, was meant to be applied to systems of government more than to individual people. Most consequentialists, including utilitarians, held (and hold) that for individual actions it's better not to actively calculate the best outcomes but to live life naturally and intuitively, applying consequentialist calculus only in more extreme situations. Similar to the paradox of hedonism, it's argued that the best consequences generally come about when we're not obsessively pondering the consequences. Governments, on the other hand, have to deal with statistics, numbers, amounts, etc., which are generally a lot easier to work with. Does the military bomb a civilian settlement to eliminate radical terrorists? What are the consequences? No one individual is responsible for this decision, at least not usually.
The criticism that there is no calculus that could be applied (and that therefore utilitarianism/consequentialism is false) fails to work. It is clear that a lesser headache, say, is better than a terrible migraine. We clearly know this, because we take pain medication. Experiences can be roughly measured by intensity and duration, and while we don't have precise mathematical measurements for them, this is no different from other perceptual difficulties: we have a hard time estimating the length of objects without a ruler, for instance, but that doesn't mean there is no actual length.
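The intensity-and-duration idea can be put in code, if only to show what a rough ranking looks like. This is a toy sketch in the spirit of Bentham's felicific calculus, nothing more: the numeric scores and the intensity × duration formula are assumptions made up for the example, not anything utilitarian theory actually supplies.

```python
# Toy sketch of a rough utility ranking (illustrative only).
# Assumption: an experience's value is approximated as intensity * duration,
# with negative intensity for pain. Real utilitarians don't hand us these numbers.

from dataclasses import dataclass


@dataclass
class Experience:
    name: str
    intensity: float  # signed: positive = pleasure, negative = pain
    duration: float   # in hours

    def utility(self) -> float:
        # Crude measure: more intense and longer-lasting pain counts for more.
        return self.intensity * self.duration


def rank(options: list[Experience]) -> list[Experience]:
    """Rank courses of action from best to worst by rough utility."""
    return sorted(options, key=lambda e: e.utility(), reverse=True)


options = [
    Experience("terrible migraine", -8.0, 6.0),            # utility -48
    Experience("lesser headache", -3.0, 6.0),              # utility -18
    Experience("no pain (took medication)", 0.0, 6.0),     # utility 0
]

for e in rank(options):
    print(e.name, e.utility())
```

Even with made-up numbers, the ranking matches the intuition in the paragraph above: the medication option beats the lesser headache, which beats the migraine. The precision of the numbers does no work here; only their rough order does.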
So in general the utilitarian would argue that normal experiences are intuitively ranked without much worry. When things get hairy, or when we're talking about governments, that's when the theory says we have to start estimating the comparative value of alternative courses of action. Sometimes it's easy, but sometimes it's not. Things get especially hairy if you're a pluralist about value: how do you calculate the difference between values?
Sometimes the value difference is obvious, in which case there's not much of an issue. Other times it's a lot harder. This difficulty, perhaps even real-life impossibility, is not really an argument against consequentialism. It just adds another layer of non-ideal circumstances.