• Qmeri
    209
    Okay, you are trying to make me either create objective goals (things you use the word "should" for) or say that this is not a moral system.

    As I have said many times, it's irrelevant whether this is called a moral system. It is simply a system with which one can make unarbitrary value judgements. Therefore it is at least a functional equivalent to the most important function of a moral system.

    So, let's explain it from a completely different perspective. Let's start with a subjective value system which does not have a logically necessary subjective goal:

    1. person A has a goal
    2. therefore some things are desirable to person A (subjective normative statement)

    The only problem with that system is that there is no way to choose one goal over another, since the goals themselves define what is better than what. The desirability of things is based on the person's arbitrary choices.

    The only new thing this system adds to that is a logically necessary goal.

    1. person A has a logically necessary goal
    2. therefore some things are necessarily desirable to person A (subjective normative statement)

    In this system the desirability of things is not based on the person's choices, and therefore the desirability of those things for him does not need to be justified.
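
    The contrast between the two schemas can also be sketched in simple logical notation (the predicates G and D and the necessity box below are only my shorthand for illustration, not part of any formal system):

    ```latex
    % G(A, g):  person A has goal g
    % D_A(x):   x is desirable to person A

    % Schema 1: the goal is chosen, so the premise itself is arbitrary
    (1)\;  G(A, g_{\mathrm{chosen}})
    \quad\Rightarrow\quad
    (2)\;  D_A(x) \text{ for each } x \text{ that serves } g_{\mathrm{chosen}}

    % Schema 2: the same inference, but the goal is logically necessary
    (1')\; \Box\, G(A, g_{\mathrm{necessary}})
    \quad\Rightarrow\quad
    (2')\; \Box\, D_A(x) \text{ for each } x \text{ that serves } g_{\mathrm{necessary}}
    ```

    The inference from (1) to (2) is identical in both schemas; the only difference is that the premise of the second carries a necessity marker, which is why the goal in it never has to be justified as a choice.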

    I'm not trying to make objective normative statements. I don't think that's possible, since I think Hume's guillotine demonstrates that impossibility. I think objective morality is demonstrably a non-thing, and that's why we are incapable of making logical arguments for it. We still have a functional need for an unarbitrary system to make value judgements. This system provides that.
  • Qmeri
    209
    By your reasoning, our willful actions can never be wrong. If you do something in fulfillment of your desires, that moves you closer to a state in which you will no longer have those desires and thus no motive to perform any further action - a stable state.SophistiCat

    Except your willful actions can still be wrong. If you take an action that makes you temporarily more stable, but that decreases your stability in the long run, you have objectively made an error according to this system. For example: you hurt the group, but now the group makes sure you face more severe consequences. Or: your being secretly a thief caused your social system to lose trust in one another, and now you have to live in an environment where everything social is harder and more complicated. Even your personal desire might be wrong if it is very hard to achieve, or otherwise sabotages your ability to achieve stability. That's why I think this system does promote intuitively moral choices in most situations.

    And since we know right and wrong, we know that your theory has to be wrong just for that reason alone.SophistiCat

    No one has ever demonstrated any objective right and wrong to be a thing. We know that we have those feelings and intuitions, and we know the evolutionary reasons for having them. For example: acting too selfishly in an intelligent group/tribe makes them band together against you and make you lose no matter how powerful you are; therefore having unselfish feelings, and feelings that follow the group's norms, gives an evolutionary advantage.

    To me Hume's guillotine demonstrates that objective morality is a non-thing. But we still have a functional need for a system by which we can make unarbitrary value judgements. So, from a previous post:

    So, let's explain it from a completely different perspective. Let's start with a subjective value system which does not have a logically necessary subjective goal:

    1. person A has a goal
    2. therefore some things are desirable to person A (subjective normative statement)

    The only problem with that system is that there is no way to choose one goal over another, since the goals themselves define what is better than what. The desirability of things is based on the person's arbitrary choices.

    The only new thing this system adds to that is a logically necessary goal.

    1. person A has a logically necessary goal
    2. therefore some things are necessarily desirable to person A (subjective normative statement)

    In this system the desirability of things is not based on the person's choices, and therefore the desirability of those things for him does not need to be justified.
    Qmeri

    It's irrelevant whether this is called a moral system. It is simply a system with which one can make unarbitrary value judgements. Therefore it is at least a functional equivalent to the most important function of a moral system.
  • khaled
    3.5k
    a system with which one can make unarbitrary value judgementsQmeri

    "Act such that you ensure you consume the largest amount of cheese possible" is another system that does that. I don't think that would pass as a moral system, though.

    2. therefore some things are desirable to person A (subjective normative statement)Qmeri

    No it isn't. This isn't a normative statement. Check this: http://www.philosophy-index.com/terms/normative.php . This is a statement of fact. Some things are indeed desirable to person A... So what? An answer to the "So what?" is a normative statement. Ex: Thus A should seek those desirable things. "Some things are desirable to A" is akin to "The sky is blue"; it is a statement about a property of an object.

    We still have a functional need for an unarbitrary system to make value judgements. This system provides that.Qmeri

    Agreed. It doesn't provide a moral system though. It demonstrates a rather vague logical necessity that can predict what we will do. It is akin to saying "People do what they want to do the most". Ok but that doesn't have anything to do with morality.

    Also, since we agree that seeking stability is a logical necessity, how useful is this system really for making decisions? Even without knowing this system exists or hearing about it, you would have sought stability (necessarily). So unless a more accurate description of "stability" and how to seek it is provided, I can't say this system would be too useful in actual decision-making. Again, it is akin to saying something like "People do what they want to do the most." That is logically necessary (depending on how you define "want"), and it doesn't actually help anyone to know that fact.
  • Qmeri
    209
    "Act such that you ensure you consume the largest amount of cheese possible" is another system that does that. I don't think that would pass as a moral system thoughkhaled

    No that doesn't make unarbitrary value judgements since the whole premise is arbitrary. The whole point of my system is that its premise is not arbitrary. It is based on a goal we have no matter what we choose. That cheese system is a perfect example of an arbitrary goal you simply chose. Therefore you have to justify your choice, which you can't do.

    No it isn't. This isn't a normative statement. Check this: http://www.philosophy-index.com/terms/normative.php . This is a statement of fact. Some things are indeed desirable to person A... So what? An answer to the "So what?" is a normative statement. Ex: Thus A should seek those desirable things. "Some things are desirable to A" is akin to "The sky is blue"; it is a statement about a property of an object.khaled

    Well, then we disagree about what "subjective normative statement" means, but that is alright... probably my fault, since I'm not very familiar with that term. It is still irrelevant to my point, though. If you agree that we have a logically necessary goal, then you should also agree that it does not need to be justified like other goals. No matter how obvious and trivial you say it is, the fact that it does not need to be justified as a choice is not obvious to most people. And the fact that you can derive all your other goals and desires by choosing them as much as they are choosable to serve it and its optimal achievement is also not obvious to most people. Therefore it is not an unhelpful realization, and it does serve the function of making unarbitrary value judgements for a person. It still makes objective morality functionally unnecessary.
  • Qmeri
    209
    "Choosing goals to achieve goals is an unarbitrary way of choosing one's values and desires, based on the logically necessary goal of achieving one's goals, which does not have to be justified since it's not a choice; and it is a functional equivalent to objective moral systems in that it allows an unarbitrary way of making value judgements."

    Is that an understandable way of explaining this goal-system of mine?
  • khaled
    3.5k
    No that doesn't make unarbitrary value judgements since the whole premise is arbitraryQmeri

    I didn't say it was unarbitrary. I agree.

    If you agree that we have a logically necessary goal, then you should also agree that it does not need to be justified like other goals. No matter how obvious and trivial you say it is, the fact that it does not need to be justified as a choice is not obvious to most peopleQmeri

    Yup yup

    And the fact that you can derive all your other goals and desires by choosing them as much as they are choosable to serve it and its optimal achievement is also not obvious to most peopleQmeri

    But this "derivation" will be different from person to person and from circumstance to circumstance. Without some guidance or rules (which you can't justify) this system will end up with unjustifiable conclusions as well. That people seek stability will not tell you anything beyond that. It won't tell you whether or not the best way to achieve stability is through self centered behavior, charity, communism, capitalism or what

    Sure it has the "functional equivalence" of objective moral systems in that it tells you what to do but it's so vague it doesn't actually help. It's like trying to extract some morality out of cogito ergo sum for example.
  • Qmeri
    209
    But this "derivation" will be different from person to person and from circumstance to circumstance. Without some guidance or rules (which you can't justify) this system will end up with unjustifiable conclusions as well.khaled

    Well, the derivations can be justified from circumstance to circumstance. It's just complicated, not undoable. Nothing forces this system to make generalizations without acknowledging that they are generalizations, and thus not always true. It's still better, in my opinion, than an arbitrary goal system or a moral system which is just based on intuition, or on some "moral shoulds" which are chosen but not justified. Especially since very simple but not vague moral rules have been shown by history to not work very well. There are always exceptions where even the most moral-sounding rules actually cause more misery. For example, "give to the poor" might be a good moral principle in certain situations, but not all. And the only rules that always end up causing nice things are always very vague, like "increase happiness".

    Actually, even intuitive morality is just as complex as mankind itself. Pretty much everything we call "moral" is dependent on the context and on what actually causes things like suffering and happiness in any given situation. Therefore the only thing that makes this system more complicated is the added layer that the optimal solution depends on the point of view. It's just the same jump we made in physics when we moved to the relativity of time.

    Sure it has the "functional equivalence" of objective moral systems in that it tells you what to do but it's so vague it doesn't actually help. It's like trying to extract some morality out of cogito ergo sum for example.khaled

    The details of what this goal system gives to any person are an empirical, scientific question, since they are by definition not a logical necessity: they depend on the person and the circumstance. But since I can't demonstrate the empirical evidence for every circumstance in this forum, I only try to demonstrate the logically necessary starting point, which I can demonstrate.

    And with that starting point, combined with one's knowledge of one's circumstance, I at least have been able to create for myself a pretty complete set of values without too much effort.

    Just because a system is very complicated doesn't mean that the system is unhelpful. Politics is complicated, yet we have been able to make useful simplifications and generalizations for it, and for pretty much every other complicated thing we have encountered, including well-established moral systems like utilitarianism which is almost as complicated and vague as my goal system, but you are not complaining about that, are you?
  • ovdtogt
    667
    And since we know right and wrong, we know that your theory has to be wrong just for that reason alone.SophistiCat

    Most of what we sense to be right or wrong is more belief than knowledge.
  • ovdtogt
    667
    That people seek stabilitykhaled

    People seek what to believe in. Belief gives them stability. We are all groping around in the dark with belief like a candle to show the way.
  • SophistiCat
    2.2k
    Except your willful actions can still be wrong. If you take an action that makes you temporarily more stable, but that decreases your stability in the long run, you have objectively made an error according to this system.Qmeri

    How is this "objectively an error"? You have not shown this. Your argument is that a closed system will by necessity converge towards a stable (static) state. This is both wrong and irrelevant, but let's set that aside for now. I just want to emphasize that your argument doesn't say anything about right and wrong - it just says, in the more restricted case, that whatever choices you make, in the long run they will tend to converge towards a state of perfect satisfaction. That is all.

    khaled is absolutely right: your "system" doesn't help us make decisions, it just claims to make an objective statement about decision-making in general. It is not a system of ethics, because it cannot prescribe any course of action.

    If instead you propose that we ought to optimize our decision-making in order to maximize satisfaction, as measured by some metric (which you will also supply), then there is nothing "logically necessary" about that - that is just another in a long line of ethical systems that will have to compete with the rest.

    I will leave you with this admonition from the recently departed philosopher Jaegwon Kim, because I feel that this is kind of a theme with your posts:

    There are no free lunches in philosophy any more than in real life, and I believe the cheap ones aren’t worth the money. We might as well go for the real stuff and pay the price.Jaegwon Kim
  • Qmeri
    209
    khaled is absolutely right: your "system" doesn't help us make decisions, it just claims to make an objective statement about decision-making in general. It is not a system of ethics, because it cannot prescribe any course of action.SophistiCat

    In that very same response, khaled says that my system prescribes a course of action for every circumstance - just that it does not give simple universal courses of action like "be charitable" regardless of circumstance.

    But this "derivation" will be different from person to person and from circumstance to circumstance...that people seek stability will not tell you anything beyond that. It won't tell you whether or not the best way to achieve stability is through self centered behavior, charity, communism, capitalism or whatkhaled

    The fact that the optimal course of action differs depending on circumstance is true of every consequentialist moral system. This system is not abnormal in that regard. The two respects in which this system is abnormal are:
    1. it gives a personal goal for everyone, not universal goals
    2. it avoids the problem of justifying the choice of this goal by showing that it is unchoosable and therefore doesn't need to be justified as a choice.

    But it seems that you will not accept that people have this unchoosable, logically necessary goal. That's all right. I hope that you at least understand what this system is trying to say, even if you're not convinced.
  • khaled
    3.5k
    Well, the derivations can be justified from circumstance to circumstance. It's just complicated, not undoableQmeri

    That is the case with each and every moral system

    Especially since very simple but not vague moral rules have been shown by history to not work very wellQmeri

    That is the same with your system. I understand that you begin from a premise that's true by definition, but the problem with moral systems is rarely that the system is unjustified but more so that it's hard to go from ANY vague premise to concrete reality. "People seek stability" (I still don't consider this as a moral premise but nonetheless) doesn't point to anything specific we should do

    The details of what this goal system gives to any person are an empirical, scientific questionQmeri

    Again, this is why I asked you to define "stability" in the first place, because you weren't going to use "entropy".

    Just because a system is very complicated doesn't mean that the system is unhelpful.Qmeri

    This system is too SIMPLE to be helpful.

    like utilitarianism which is almost as complicated and vague as my goal system, but you are not complaining about that, are you?Qmeri

    I complain about utilitarianism all the time. I complain about every well established moral system. Because I don't think any of them have a basis, including yours.

    1. it gives a personal goal for everyone, not universal goalsQmeri

    No, what it does is state that everyone has a vague personal goal, seeking stability, which is true by the definition of "stability". That is very different from "gives a personal goal for everyone", because that sounds like saying "everyone should seek stability", whereas the only thing you can say logically is "everyone seeks stability". Again, those are very different statements.

    2. it avoids the problem of justifying the choice of this goal by showing that it is unchoosable and therefore doesn't need to be justified as a choice.Qmeri

    Agreed, but as a result the starting premise is true by definition.

    "Everyone seeks stability" is like "A bachelor is not married": it is true by definition. However, in the same way that "A bachelor is not married" doesn't logically lead to "A bachelor shouldn't be married", "Everyone seeks stability" doesn't lead to "Everyone should seek stability". You need the moral "should" for the thing to be considered a moral system.

    But it seems that you will not accept that people have this unchoosable, logically necessary goal.Qmeri

    Oh I accept it, I just think it's a useless premise since it's true by definition.
  • Qmeri
    209
    That is the same with your system. I understand that you begin from a premise that's true by definition, but the problem with moral systems is rarely that the system is unjustified but more so that it's hard to go from ANY vague premise to concrete reality.khaled

    This I disagree with. At least I haven't met any system that solves the problem of justifying your choice of goals. And I also disagree that this system is even that hard to use to make concrete decisions. In practice this system simply makes people ask the question: "what actions would make me the most satisfied in the long run?". Since people have much more information about themselves than about the world as a whole, such a question is much easier to turn into concrete actions than something like utilitarianism. Even a moral system with a very specific goal, like "increase capitalism", would be more difficult for the average person to implement, since what actually would increase capitalism is a question that needs expertise.

    And if you are talking about moral systems that give rules that are not based on circumstance, I don't even know where to find those these days. People seem to accept that our moral intuition makes mistakes from time to time, and that even religious teachings should be applied according to the circumstance. So, while the exact nature of human "stability" and the absolutely optimal way to achieve it are complex questions to answer, this system is very simple for any single person to turn into somewhat functional concrete decisions. And I'm not aware of any moral system which does any better than that. Optimal solutions need expertise and effort; somewhat working solutions are what is expected of the average person.

    "What would make me the most satisfied in the long run?" is not even a new, difficult thing to teach people. People are doing it already. This system simply solves the problem of justifying that choice of goal.
  • ovdtogt
    667
    but the problem with moral systems is rarely that the system is unjustified but more so that it's hard to go from ANY vague premise to concrete reality.khaled

    We apply morality when we are faced with confusing choices.
  • khaled
    3.5k
    justifying your choice of goalsQmeri

    This doesn't do so either. It says it's a necessity. That's all it does. That's different from "justifying". Example: "People will have children" does not justify having children. "People will kill each other" does not justify murder. "Everyone will try to eat" does not justify eating.

    "what actions would make me the most satisfied in the long run?". Since people have much more information about themselves than about the world as a whole, such a question is much easier to turn into concrete actions than something like utilitarianism.Qmeri

    That question is LITERALLY what a utilitarian would ask though

    what actually would increase capitalism is a question that needs expertise.Qmeri

    Not any more than
    "what actions would make me the most satisfied in the long run?"Qmeri
  • Qmeri
    209
    This doesn't do so either. It says it's a necessity. That's all it does. That's different from "justifying".khaled

    I never said it justifies anything. It solves the problem of justifying one's choice of goals by bypassing it. Something that is not a choice doesn't need to be justified as a choice.

    That question is LITERALLY what a utilitarian would ask thoughkhaled

    "Utilitarianism is a family of consequentialist ethical theories that promotes actions that maximize happiness and well-being for the majority of a population" (Wikipedia). My system is about personal satisfaction - similar, but still majorly different.

    Not any more thankhaled

    Seeking larger things like capitalism does require much more expertise than seeking one's own personal satisfaction, since one naturally has orders of magnitude more information about himself and the things that affect himself than he has about larger things like capitalism. Not to mention that a single person and his satisfaction are a much simpler thing than anything that involves large groups, like capitalism. Are you really saying that a non-expert would be able to make decisions about capitalism as well as he makes decisions about what makes him satisfied? People try to achieve personal satisfaction all their lives. They are relatively trained in it, even if they never thought of it that way. Therefore this system is very easy to turn into practical solutions.
  • SophistiCat
    2.2k
    In that very same response, khaled says that my system prescribes a course of action for every circumstance - just that it does not give simple universal courses of action like "be charitable" regardless of circumstance.Qmeri

    I don't know which part of what you wrote he had in mind. As I already pointed out, you equivocate between a trivial (but wrong) descriptive statement about decision-making and a bare-bones prescriptive theory. To recap, the descriptive bit is that wish fulfillment necessarily leads towards a permanent state of satisfaction ("stability"). This just tells us what is, but it doesn't say anything about what ought to be. It is not a prescriptive system of ethics.

    And then there is the prescriptive part, which says that you ought to make decisions so as to achieve this putative state of stability-nirvana in the most optimal way. This does not follow from the above, for all the usual reasons. You fail to bridge the is-ought gap.
  • khaled
    3.5k
    My system is about personal satisfaction - similar, but still majorly different.Qmeri

    Can be considered a version of hedonism

    Are you really saying that a non-expert would be able to make decisions about capitalism as well as he makes decisions about what makes him satisfied?Qmeri

    Yes, he would botch both

    Therefore this system is very easy to turn into practical solutions.Qmeri

    What if my personal satisfaction requires shooting people?
  • Qmeri
    209
    Yes, he would botch bothkhaled

    I guess that is true of any non-expert, regardless of how simple the subject is, but I would still give the advantage to personal satisfaction, since we naturally have some expertise in it.

    What if my personal satisfaction requires shooting people?khaled

    Then you would be in a difficult situation with this system. Since achieving such a desire would be insanely hard without huge retaliation against your long-term satisfaction, your best bet would be to change your desires. This is a good example of how other goals and desires can be derived from this one goal, since they themselves affect how efficiently satisfaction can be achieved in practice.

    You can't just say: "According to this system I should do this simply because it's my desire!" The optimal solution according to this system can always be to change one's desires, or to ignore a desire that would hurt one's general long-term satisfaction, instead of acting on it.

    Because of this, this system usually ends up with intuitively moral behaviour. Realistic exceptions are usually not cases of overwhelmingly strong insane desires, because those are rare and changeable. The exceptions happen in situations like: my own survival versus others' when resources are low. Or: I am already a dictator, and the people are going to rebel and kill me if I don't force them to be passive and unfree. Both are situations where long-term satisfaction is very uncertain, and therefore they should be avoided in this system.
  • Qmeri
    209
    This system is not about straight-up solving the is-ought gap, since I think that gap is unsolvable. This system is about bypassing it by providing a functional equivalent to an objective moral system: a system that gives everyone a necessary personal goal, the choice of which doesn't need to be justified since it's not a choice. It all comes down to whether this goal of "stability" is choosable.

    If you have a goal, you do have a functional equivalent to a moral system, since you can choose all your actions according to that goal. To me, the biggest problem of ethics has always been the justification of the choice of goals. No other system I have encountered has solved, or justifiably bypassed, the problem of justifying the choice of goals.
  • SophistiCat
    2.2k
    This system is not about straight-up solving the is-ought gap, since I think that gap is unsolvable. This system is about bypassing it by providing a functional equivalent to an objective moral system: a system that gives everyone a necessary personal goal, the choice of which doesn't need to be justified since it's not a choice. It all comes down to whether this goal of "stability" is choosable.Qmeri

    You keep saying this, but three pages into the discussion it makes no more sense than in the beginning. I think we may as well leave it here.
Welcome to The Philosophy Forum!
