Comments

  • Morality
    I should add, however, that even on the understanding that you are not claiming we have a moral duty to trust some particular data source over another, I'm still not quite following how you got from the valuing of children's health to there being a fact of the matter about whether vaccines are 'good'.Isaac

    I didn't expect any pushback about the effectiveness of vaccines, so perhaps we could swap the example for something else:

    Female genital mutilation (FGM) is practiced for a myriad of confused reasons, and among them is the belief that it will improve the quality of life of victims. Ostensibly it is performed because it is believed to be good, and they don't happen to trust medical authorities who insist otherwise. From our enlightened vantage point, it's clear to us that FGM does not actually improve the lives of victims (hence: "victim".)

    So in what ways might we say FGM is objectively immoral? Well, it objectively undermines the preferences of victims, and it arguably undermines the values of the perpetrators as well (in some cases, villages can't remember why they started doing it, and can't say why they choose to carry on with it). When we look at the most fundamental moral values of everyone involved, it's quite clear that FGM undermines them, which is why not practicing FGM is an objectively morally superior practice.

    Others might object on religious grounds such as the Amish, having their ethics based on the divine command.Isaac

    Such people are confused, but thankfully these types of beliefs are assailable by science, logic, and an appeal to their human values.

    People might hold strongly a virtue of 'do no harm' which would prevent them from ethically giving any kind of prophylactic drug, not because of a utilitarian calculation of harm, but on a principle designed to accommodate uncertainty.Isaac

    There's really not much difference between virtue ethics and utilitarian generalizations. What do you think causes such moral maxims as "do no harm" to evolve? Because they're useful.

    We have many fancy calculi for navigating the many moral landscapes we inhabit (and the mix of moral games we play upon them), but ultimately, serving humans (utility) is the only real and reliable perspective to adopt. Where dilemmas become too complex for other frameworks to solve, we all intuitively revert to utilitarianism.

    I want to point out that straightforward utilitarian calculus often amounts to a vast oversimplification of moral dilemmas, which is why we have other frameworks which can account for contextual nuance (e.g. the broader implications that organ-harvesting a random hobo would have for society, and for the very agreement that social moral cooperation is based on).

    Values (by which you seem to mean objectives) and facts together still are not enough to make a moral path objectively true, we're not all utilitarians.Isaac

    As I keep stating, the values component is subjective, but the way they relate to others and the world is not subjective. Once we've settled on a definition of exactly what morality is supposed to do, we can assess whether or not the actions we propose will actually achieve our individual or communal moral goals.

    How do you define "morality" exactly? In my view, boiled down, it amounts to a realm of strategic knowledge intended to help us make decisions (decisions which impact others, and in a way which considers their values). I think some moral strategies/choices are objectively better or worse than others for a given set or sets of subjective values, just like how some chess strategies/moves are objectively better or worse than others for given arrangements of the chess board.
  • Morality
    When I say that morality is mere preference, what I'm saying is that "x is good" and the like are mental phenomena and do not occur elsewhere. That's all that I'm saying. I'm not ignoring anything, I'm simply focusing on a very specific ontological claim.

    Some people believe that "x is good" occurs in the world extramentally. It does not.
    Terrapin Station

    This is a fair enough point. You're right; some people suppose morality is some tangible set of laws that exist in some kind of ultimate and universally applicable moral realm (see: God), and they're wrong.

    Our starting moral values are not extramental, but they can be inter-mental and intra-mental. Even from an individually subjective starting point, one's value hierarchy can be more or less internally consistent. Objectivity is quite useful when we negotiate our own hierarchy of starting values. The fact that humans tend to share so many fundamental starting values also adds a layer of cooperative opportunity that would not be there otherwise, and navigating these opportunities for mutual benefit is the bulk of the ethical work that lies before us.
  • Morality
    Are you seriously suggesting that trusting the government is a moral obligation?Isaac

    I haven't brought the government into this. In fact, all I suggested was that there is indeed a correct answer to the question of whether or not vaccines are harmful/worth the risk. At first I didn't even give an explicit answer to the question, although I did allude to my own position. I was just using it as an example to make the point that some courses of action are objectively morally superior/inferior to others per our values, and that sometimes when we disagree about matters of fact, we disagree about which choices are best as a result.

    It can't do that objectively. It can do that subjectively, relative to an individual's preferences, though, sure.Terrapin Station

    In the same way that the scientific method "objectively" serves the subjective starting goal of acquiring predictive knowledge, morality can "objectively" serve the subjective starting goals of human beings. This makes moral truth relative to the values of interested moral agents, but there is obviously still an objective component to our moral arguments.

    When people say morality is "mere" preference, they're ignoring the bulk of what it is we do when we do morality, which is figuring out how to best accommodate our existing values (a largely empirical question). This is why I'm accusing you of having a malformed meta-ethical definition: just because morality is not universal doesn't mean we cannot or need not strive for objectively better moral arguments for the situations/values we find ourselves in and with.

    It can't do that objectively. It can do that subjectively, relative to an individual's preferences, though, sure.Terrapin Station

    "Subjectively, relative" - No.

    The fact itself is objective, and the way it relates to existing values is objective. Only the values are subjective.

    In other words, if you know the starting moral values, and you know the matters of facts, then you can objectively evaluate the moral superiority/inferiority of moral arguments.
  • Morality
    ?? I'm referring to stances a la "x is good/right conduct," "x is bad/wrong conduct," "x is morally permissible," "x is morally obligatory" etc. So no, that's nothing meta-ethical.Terrapin Station

    You're saying that moral "truth" has to not depend on human preference, because human preference is not objective. That's meta-ethical.

    "x causes autism," "x doesn't cause autism" and the like are nor morality/moral stances.Terrapin Station

    No, but the stances we take on factual issues like these do impact our moral actions and arguments. In other words, whether or not it is true that X causes autism can determine whether or not an action is moral (especially when disagreements about objectives are neither here nor there).
  • Morality
    The problem with morality is that there is no objective state of affairs to match with respect to the moral part.Terrapin Station

    When you say "the moral part", you're appealing to a meta-ethical definition of morality as theoretical. When I say it, I appeal to morality as an applied [meta]-physics in service of human values.

    I suspect we completely agree, which would be clear if we could be more specific about what we're each addressing (if we had better language).
  • Morality
    We have to be talking about preferences about interpersonal behavior (that's more significant than etiquette). We can have such preferences with respect to vaccinations, but not any old preference re vaccinations would count, and the facts about it, in themselves, just don't have anything to do with morality.Terrapin Station

    When the facts change from our perspective, the moral status of the actions in question can also change from our perspective (to vaccinate or not to vaccinate).

    Basically, you could also argue that science itself amounts to personal preference about which empirical beliefs to adopt, but you would be focusing on the wrong thing. Yes, preference plays a role (e.g. humans prefer precise and reliably predictive models), but once we set out with specific goals and tasks in mind, there are always better and worse possible methods and outcomes. In the case of science, better outcomes mean greater precision and predictive power (and while, like all knowledge, scientific understanding exists on a spectrum of certitude, since it is inductive, it sits so high on that spectrum that it's reasonable to say that science approximates objective truth).

    Moral propositions are not too unlike scientific ones; they propose causal relationships that may or may not be universally true, and the more accurate or reliably predictive they are, the more useful to us, as tools, they become. If we agree about our starting moral goals (like the starting goals of science), then we can treat the dilemma of how to realize our moral goals as a purely empirical question, and we can even try to answer it using the scientific method (thereby eschewing preference for the remainder of the problem). Finding the right starting moral values (and negotiating different or competing values) can be important, but it's just the foothill of a much more pressing pile of moral dilemmas that need empirical solving, such as whether or not vaccines promote child health.

    Even if literally no one ever felt otherwise, what would that have to do with the issue? Are you saying that it has something to do with how common a particular sentiment is?Terrapin Station

    How common a particular sentiment is can be very important, or not at all. It depends on the nature of the sentiment (how strongly people value it, whether it is achievable, whether it competes with other values, etc...), the environment that moral agents find themselves in, and the landscape of other values.

    If all humans valued erecting great pyramids over all else, including our own lives (in other words: if pyramid building was our only significant source of happiness), then we would all be building pyramids at any cost. Consider that certain economic arrangements might be more or less conducive to pyramid building: a form of government which is organized to maximize pyramid construction by any means might be said to be the most morally praiseworthy form of government possible (and not immoral to any degree, because it does not transgress on the preferences of any individual).

    Now suppose that only most humans are into pyramid building while others are obelisk obsessed. A system of government which makes slaves of the unwilling in the name of pyramid building might be objectively less moral than a system which does not. Instead of allowing citizens only a narrow range of freedom, diversity in existential values is generally better accommodated by a form of government which allows people to make their own decisions.

    Yes, these are massive simplifications, but with some issues things can indeed be simple. If we fleshed out realistic enough (or used real-world) examples, then we could come to useful and highly accurate moral statements like "X form of government is morally inferior to Y form of government". Of course we have to take into account our starting value hierarchies, and to what extent they are shared, differ, or directly compete. And yes, our "moral truths" only amount to inductive approximations, but so does all other truth; it's an epistemic limitation inherent to our limited information-gathering capacity and our ignorance of the physical world.
  • Morality
    "It is right to promote the health of your child" might be at least a simplification of the moral part, and that's the part that's not at all objective.Terrapin Station

    Think about how often, in practice, someone promotes the opposite...

    "It is right to undermine the health of your child?"

    Physical and mental health are such basic necessities to well-being and happiness that in practice nobody ever disagrees with the idea that promoting the health of children is morally important/obligatory.

    So yes, you can say we have a preference-based or relativist/subjectivist moral value to protect children, but since nobody ever disagrees with this in practice, we get to wield it as if it were an objectively true moral value.

    People never disagree (reasonably, anyway) with the idea that we should protect children, so we don't often have to worry about debating/negotiating our starting moral values; we can skip right to the factual, empirical questions of how to actually achieve those values.
  • Morality
    ... And this is exactly why the moral subjectivists do what they do, because of bullshit like this. Vaccinating your child (or not) is not an objectively moral action. To do so, you have to trust the medical establishment (where is the moral requirements that you do so?), you have to trust the pharmaceutical company (again, where is the moral requirement here?), you have to trust the statistics (no moral requirement), you have to trust that your child has the same health prospects as an average child (again, empirical, not moral data).

    If, it were an absolutely incontrovertible fact that your child (not just the average child) were going to be more healthy as a result of vaccination, and you knew that with absolute certainty or had no cause to doubt any of the information you've been given, then it would begin to approach objectively moral to do so.
    Isaac

    Do you agree that it is either a good decision or a bad decision to vaccinate your child?

    Yes, the truth of vaccine effectiveness can be difficult for laymen to behold, but the truth is out there. In reality, the statistical benefits of vaccines far outweigh any risks (the validity of statistical analyses is not a matter of personal preference). Refusing empirically proven vaccines not only puts the child at greater risk, but it also threatens our "herd immunity" by giving pathogens a host/vector to infect more people (at the height of the anti-vax movement, there have been a lot of recent stories about localized disease outbreaks being caused by unvaccinated children).

    I accept that people don't automatically understand this stuff, and I even understand why they reject vaccines; they're just wrong about it. Anti-vax parents would not need to side with the subjectivists if they could actually address the content of the specific moral dilemma. Do vaccines lead to more disease and suffering, or less disease and suffering? We want to have less disease and less suffering as a moral prerogative, so which path should we choose?

    If, it were an absolutely incontrovertible fact that your child (not just the average child) were going to be more healthy as a result of vaccination, and you knew that with absolute certainty or had no cause to doubt any of the information you've been given, then it would begin to approach objectively moral to do so.Isaac

    You're basically agreeing that, potentially, the only difference between a moral doctor who supports vaccinations and an immoral and superstitious parent who refuses to vaccinate their child is ignorance.
  • Morality
    You can still have objectivity on a spectrum.

    Some moral practices are objectively worse than others for a given set or sets of moral preferences, and some are objectively better.

    Child vaccination springs to mind: both parents prefer their kids to be healthy, but only one of them is actually achieving it.

    Try telling a pediatric physician that vaccines amount to etiquette ;)
  • Morality
    They're preferences about interpersonal behavior that one considers more significant than etiquette.Terrapin Station

    They're more important than etiquette because they concern the "preferences" which we value and seek to protect above all others (e.g. the desire to go on living). Etiquette is about avoiding annoyance and petty confrontation; morality is about avoiding suffering and other existential threats.
  • Morality
    does morality no longer have to do with good/bad conduct, ways that we should versus shouldn't behave, etc.?Terrapin Station

    It's simply that we base our ideas of what actions are "good and bad" (and thereby a way to derive oughts) around concepts like "Billy doesn't want to be molested" or "molestation is extremely harmful to health, and everyone wants to be healthy" in the first place. In this case we can actually use our shared preferences to make virtue or deontological moral arguments (general laws) that are very useful for creating a better (more preferable) world. We can also make consequentialist arguments by asking whether or not an action does physical or reasonable disservice to the preferences of anyone else. If it does not, then it cannot be an immoral action. And we don't need shared preferences for consequentialist arguments to make sense. When preferences are actually mutually exclusive or in direct competition, things naturally become much more complex (morality can break down), but that's just the way the world is.
  • Morality
    True, but as far as the most prevalent (nearly universal) and most important moral preferences are concerned, we're all so similarly positioned that in practice it doesn't really matter that we're basing morality on human preference (it's human morality, after all); most of our moral dilemmas and efforts in moral suasion concern how to socially accommodate our existing values, not how to force our own preferences on others. There need not be moral conflict on the grounds of differing preferences unless they are somehow mutually exclusive.

    Furthermore, merely acting on personal preference lacks such a significant component of how most people conceptualize "morality" that it is basically antithetical. Under most definitions, morality only begins when we consider the preferences of others, whether for greedy, strategic, or empathetic causes. Impulsively acting on our hedonic urges (as "mere preference" might be boiled down to) seems antithetical to what it is we do when we do morality.

    For most people, morality isn't fundamentally "personal preference", it's "personal preference in a world of others' preferences, which pragmatically demand consideration".
  • Morality
    To a large degree it depends on how we define "morality". If human preference is the locus of a given definition, its wielders will go around equating morality with preference. But if, for example, "serving human preference" is instead the locus, then its wielders might go around equating morality with objective strategy.

    Both views can be simultaneously true, and even complementary, with a bit of effort. Human preferences, especially shared preferences (e.g. the desire to be free and unmolested), can form the basis of our moral objectives, agreements, and actions, but at the same time empirical truth must also play a part in our determinations of what to do next. According to human preferences, some moral schemes are objectively inferior to others because they do not effectively serve those preferences.
  • Decolonizing Science?
    Yet that wouldn't be so galvanizing. With using the Scientific method usually you normally end up with something quite boring. The real problem becomes what then? What do you implement? What to replace "Eurocentric" science with? What is the decolonized science or the decolonized curriculum?ssu

    I think that you've hinted at a deeper question: what galvanizes (binds together and sustains) movements in the first place? What ought to?

    To paraphrase Dr. M.L.K. Jr., without strength and love (or the "strength to love") at the heart of a social movement, resentment begets more resentment, hate begets more hate, and focusing on the negative poisons our own personalities and undermines the movement. Broadly, hate-filled attack is a purely destructive tool; in the setting of a civilized and civics-filled landscape we simply cannot afford to label each other enemies to be approached with cautious hatred.

    The structure and tone of Fallism ensured that it would encounter widespread opposition from the get-go, let alone the problem of an absent replacement curriculum. And I think this is an issue made more prevalent thanks to the Ponzi scheme of hate-based influence that is social media. Online, groups can cohere and organize around incoherent bluster alone, so long as it is emotionally provocative.

    The source of the Fallist movement was presumably a disparity in academic participation and outcomes (which are due to a myriad of complex causes), not that science is discriminatory per se. But through a strong enough intersectional lens, everything becomes suspect in the crime of wanton discrimination. So-called academic departments like "Gender Studies" seek to understand complex systems and how they generate unequal outcomes, but they're so bad at it that all they can really do is produce clever-sounding and emotionally provocative rhetoric. Since sounding scientific and correctly addressing the right emotions are the only requirements of the field, it actually makes sense that science itself should come under fire as a patriarchal or supremacist system.

    The Fallist movement didn't actually have anything to replace science with (no indigenous curriculum); they might as well have asked for science-free safe-spaces. Being all bread and condiment with no meat is a symptom endemic to the departments which give rise to these intersectional theories of decolonization in the first place. Ironically all they do is get in the way of achieving their own goals. In true Ouroboros fashion...
  • Decolonizing Science?
    Fallism came and went in the South African university circles just like Occupy Wall Street movement in the US. Both aren't anymore active in a major way, but the undertones haven't gone away for sure. To say that science has been just dragged to this as an innocent by-stander might accurately describe the situation. The Apartheid era education system where a minority had a good education system while the black majority had a lousy one won't naturally correct itself without investment and a lot of hard work. But that surely isn't the fault of science itself. To argue that science is Eurocentric or Western can have true repercussions, if the views would go as so far as with Boko Haram. Naturally South Africa is very different from Northern Nigeria.ssu

    Fallism, like Occupy, came and went, but their underlying emotional discussions have been going on for over a hundred years (the Marxist perspective begat a century of socialist romance as a reaction to the gross and novel inequality created by the industrial revolution, and the economic/social/democratic emancipation of African Americans, along with the South African and Pan-African struggles against exploitation and discrimination, has been the central issue in Black intellectual communities since the late nineteenth century). Fallism, as far as I can gather, was the short-lived business end of this larger and older movement and emerged mainly as a redress of academic inequality. Given that democratic equality and academic opportunity for Black South Africans have only relatively recently become a reality, it makes sense for a cultural movement to address any extant disparity directly (though they certainly chose the wrong vector of approach). The Occupy Wall Street movement in a way encapsulated the self-same dissatisfaction, but it took a more general perspective by not overtly focusing on race (although Occupy did suffer from its own unique problems: what they called "the progressive stack of virtue-based leadership", others might call a headless chicken. What happens when you put 10 anarchists in a room and tell them to plan to implement their ideas? Cat herding for 400, Alex).

    Despite the zoo of failed or malformed social movements aimed at addressing economic and social forms of inequality, they keep (d)evolving because there are genuine disparities and injustices that persist (and because solving these problems in practice is immensely complex). The ever-looming wealth gap, at a time when we're on the verge of a second industrial revolution (the AI revolution), and when the long-term costs of industry are more and more deferred to the people (especially their children), is a serious threat to our long-term stability. (It's no wonder Marx is making a comeback.) So there is indeed a need for these kinds of movements, just more practical and useful ones.

    Like so many reactionary movements it was full of vim and vigor but it had no coherent direction or practical vision. Ironically a scientific approach could have been very useful to them in identifying the most effective objectives and methods; creation through destruction is not always helpful.
  • Science is inherently atheistic
    How do you square the "thought" parameter of Spinozism with the lack of evidence that fundamental particles actually do any "thinking"?
  • The Climate Change Paper So Depressing It's Sending People to Therapy
    I'm definitely going to read this when I get some time.

    An interesting contradiction seems inherent in the abstract, which isn't a good omen...

    This agenda does not seek to build on existing scholarship on “climate adaptation” as it is premised on the view that social collapse is now inevitable.

    The author believes this is one of the first papers in the sustainability management field to conclude that climate-induced societal collapse is now inevitable in the near term and therefore to invite scholars to explore the implications.
  • Decolonizing Science?
    My pleasure! Shoveling shit, after-all, is the backbone of philosophy!
  • Decolonizing Science?
    school science overtly and covertly marginalizes Indigenous students by its ideology of neo-colonialism – a process that systemically undermines the cultural values of a formerly colonized group (Ryan, 2008). As a result, an alarming under representation of Indigenous students in senior sciences persists.

    This is just pseudo-academic gobbledygook.

    "Neo-colonial ideology" is a generalized bogeyman that portrays all western progress as dependent on the intentional or reckless rape of all other cardinal directions (juxtaposing it with science is an exaggeration within an exaggeration). When science is perceived as a western invention through the intersectional looking glass, it becomes ontologically defined as a tool of oppression in any way that it does not approach people or political issues with absolute emotional sensitivity and on bended knee.

    It's nice to have critical-sounding rhetoric that uses words good, but unless it has some substance then it's just a fashionable trend.

    So my question (thanks if you have made it so far) is if this is just an academic red herring or an example of how academic knowledge has fallen? Or am I just a believer in Eurocentrist science that doesn't get the point of decolonization of science?ssu

    Fallism is less about science in any tangible way, and more about the general dissatisfaction with social disparities between perceivably western and non-western ethnicities. There's an emotional debate going on, and science has been dragged into it (and unfairly accused of taking sides) like some kind of unlucky brother-in-law.

    Something is indeed rotten in the state of academia, and the social "sciences" are directing it...
  • Is an armed society a polite society?
    I think he means the ratio of "good guys with guns to bad guys with guns".
  • Is an armed society a polite society?
    I don't think the titular assumption here is true, but it might contain some truth.

    I would instead say that a dangerous society is a cautious society (and an armed society is a dangerous society).

    The samurai of Feudal Japan were well armed, and you could say that their society was "polite" (downright honorable in fact), but that doesn't mean it was a safe or just society.

    In other words, arms just raise the stakes.

    I think the most relevant example is the case of nuclear weapons. After Hiroshima and Nagasaki, no nation has used a nuclear bomb against any other nation. We avoided a hot war with the Soviet Union because both sides were too cautious in the face of the danger. So in that specific case, yes, an armed society is a more polite society, but it's a bit foolish to use this as a rule-proving case. The M.A.D. doctrine has worked out so far, but if it should fail at any point in the future, its strategic utility will count for nothing in hindsight.

    And the M.A.D. doctrine doesn't work unless everyone has the same retaliatory capability. We can envision a world where nobody can transgress upon others because of equal power distribution, but that's not the world we live in. Even if weapons were all evenly distributed, if everyone has access to extremely powerful weapons then society will still be too unstable (imagine everyone having their own nuke).

    So, to increase politeness by modifying weapons access, we could either have a gun under every pillow and a tank in every garage (not so stable in the long run), or we could reduce the disparity by reducing the number of guns out in the wild. If people have a harder time accessing guns, more powerful guns, and bullets, then victims will less frequently be drastically outmatched by the weapons of their transgressors.
  • Moore, Open Questions and ...is good.
    I acknowledge the objectivity there, but I don't think that it's necessarily right to call that "immoral". If I am one of those people, and I inadvertently act contrary to my aim of kicking the puppy, then I'm just being unreasonable. But if I have a principle which says that that behaviour is immoral, then sure, it would be immoral accordingly, but only relative to my principle, and only relative to my thoughts and feelings about its application.S

    From my meta-ethical position, morality only exists to service existing human values, which is why, when a given course of conduct is detrimental to the relevant values in question, it makes some sense to refer to it as "immoral". A more technical way of putting it would be that some actions are more moral than others (or, some actions are more immoral than others) because they serve or damage existing moral values to lesser and greater degrees. If an action leads to worse outcomes than abstaining from that action would, it's not hard to conceive of it as a morally inferior action. However, I think this is largely a semantic difference rather than a meaningful meta-ethical one.

    It wouldn't apply universally, even if I thought and felt that it should. If other people reject that principle, because they think and feel differently, then I can't demonstrate that they're objectively wrong, since our thoughts and feelings are inherently subjective, and there's no warrant for a transcendent standard to override one of us.

    You can get some objective truth in moral subjectivism. That I have never denied. It is objectively true that I feel that kicking puppies is wrong, for example. But the moral subjectivist would be like, so what?
    S

    You're right, but once we have agreed on a basic moral framework (i.e. that it's meant to be a cooperative strategy which serves our moral values), there's still quite a bit of room left for strong moral suasion; the subjectivity/relativity of our moral values is only as harmful to moral practice as there is range and variability between them. Keeping in mind that morality is a strategy in service to human moral values, the moral agreements, acts, or principles which most effectively serve the values that are most common (and most highly placed in our various value hierarchies) are statistically more useful as moral heuristics, and objectively more useful in specific situations where the relevant values are in fact shared. Where our primary moral values do in fact differ (but don't compete), we're left with a similar task of finding moral strategies which accommodate a diversity of human values more effectively.

    Where we have mutually exclusive primary moral values (e.g. puppy kicking vs no puppy kicking), the best we can do is challenge and attempt to influence each other's values. It might seem like a crapshoot, but since most people do share higher-order values (e.g. the desire to go on living), it is often possible to manipulate (with reason) lower-order values by appealing to higher-order ones. In reality (I think) our value hierarchies are rapidly fluctuating and poorly considered, making them lucrative targets for persuasion and elucidation, be it rational or manipulative.

    Playing that game of moral suasion is sometimes an exercise in objective truth (e.g. should I vaccinate my child?), but it is very often an exercise in objective inductive reasoning (e.g. How do we know our moral values are internally consistent? How do we know our moral conduct comports with our desired moral outcomes? How do we negotiate an environment filled with agents with sometimes disparate and competing values, i.e. what is the extent of the mutually beneficial cooperative strategies that we can undertake?). If we tried to answer the question "what should we do?" scientifically (given starting values as brute facts), then these are the broad questions we would seek to answer.

    Ultimately, if a difference in conflicting moral values cannot be negotiated with reason, then we can appeal to emotion. If it cannot be negotiated with emotion, then the remaining options seem to be forfeit, compromise, stalemate, or attack. Yes, people do sometimes go down fighting for their moral values, but in how many of these cases did emotion or the absence of reason play the major role? Values disparity might be a problem for the universality of our answers to specific moral situations, but it is not a significant problem for the practical utility of moral systems themselves, given how infrequently sound moral reasoning from well-ordered values actually necessitates violent conflict or even mutually exclusive values.

    Are you basically just saying what @Banno said, namely that despite differences in meta-ethics, normative ethics matters?S

    I'm defining what normative ethics is from my meta-ethical standpoint. I'm also rebuking the "it's all just preference" line. In truth our preferences are mostly aligned, and the majority of moral dilemmas we're faced with pertain to figuring out how to maximize (or committing to maximizing) our nearly universally shared values in the first place. Our best moral theories are merely inductive and approximate models (of ideal strategies), but so are our best scientific theories (inductive and approximate models of observable phenomena). It might seem trivial to you to persuasively show that normative ethics matters (and that it requires objective reasoning), but in the midst of strong relativism bordering on nihilism I don't think it's that trivial (not pointing fingers). One of the major sentiments that fuels moral absolutism is the knee-jerk fear people encounter when they consider that right and wrong might be in some way conditional, relative, or subjective (and therefore truthless/meaningless).

    And then you go on to make some normative points, like that the way that you judge it, we shouldn't be greedy, and we should be considerate of othersS

    Any specific normative content I put forward was really only meant as a demonstration of objectively reasoning moral conduct from starting values.

    Maybe, like Banno, you judge that morality should be about everyone, about how "one" or "we all" should behave, and not particular, like how I should behave. Why should I care in this context, whether I agree or disagree? That does not seem to have any relevance, meta-ethically. It seems beside the point.S

    I took him to mean "we" as in "we the interested parties" (as opposed to everyone who ever lived). Meta-ethically, morality isn't just about what's best for the individual, it's what's best for the individual in an environment filled with other individuals. Without at some point, in some way, considering the "we", the game of morality cannot begin; otherwise it's just competition.
  • Moore, Open Questions and ...is good.
    The bright pixels of my monitor aren't treating my eyes very kindly right now.

    Not very kindly at all...

    Bed time for me!
  • Moore, Open Questions and ...is good.
    I don't mean "contemplate", I mean "service". I'm using the "treatment" connotation of consider; to consider something is to treat it with attention and kindness.
  • Moore, Open Questions and ...is good.
    I think it boils down more to finding a better way to talk about morality than fundamental disagreements about what it is.Baden

    Meta-meta-ethics :cool:


    So where would you say moral truth occurs aside from personal preference?Terrapin Station

    Normative ethical truth occurs in the way an action/agreement actually considers/preserves the genuine personal preferences of interested agents. (Example: if we had a chance meeting in an elevator, and we both happened to be armed with knives, it would be objectively immoral for us to attack one another without provocation given that it would directly harm our desire to avoid injury and continue living).
  • Moore, Open Questions and ...is good.
    So, because lots of people share moral feelings, and thus moral judgement, on certain issues, then if we stick two people in a room together, then they'll probably agree over these issues, in a normative sense.S

    We tend to establish moral rules/norms by appealing to shared values, but the fact that values are shared, per se, isn't what establishes moral "truth" (although shared values are precisely whence normative ethics are derived, for practical reasons). Personal moral values exist as brute facts, and they're inexorably relative; "moral truth" is something more than mere personal preference.

    Let's say the two people in the room do [morally] value kicking puppies. They could compete over access to the only puppy in the room, or they could come to some sort of mutually beneficial agreement that serves the values they do happen to have (puppy kicking). The truthiness of their moral accords depends on whether or not they actually serve/defend their extant values in the environment they are in (or perhaps whether or not their professed values are their actual/sufficiently important values). For example, if fighting over access to the puppy reduces the amount of time that they would otherwise spend kicking it, then aggression for puppy control can be framed as an objectively immoral act in that situation because it directly disservices their moral values. They could go on to form a puppy-time-share agreement, thereby maximizing overall puppy-kicks, and call it morally praiseworthy. If all humans were hard-wired to value puppy-kicking in this way, then that's what our moral agreements would serve.

    Without naming them here, the most common strong values of any group will tend to form the basis of their normative cultural content; and because there are indeed values which are universal to nearly all humans, and because we share similar environments, our normative moral frameworks/ethical prescriptions have converged toward the same archetypes and outcomes (lucky us Grover).

    So what's the problem, right? Well, the problem is that this is supposed to be a discussion about meta-ethics, not a discussion about normative ethics.S

    As is hopefully clear from the puppy example, the point I'm making is indeed a meta-ethical one (which may or may not relate to yours and Baden's disagreement or miscommunication). The truth of specific normative content is transitory, like the next optimal move in a given chess game, but the relationship between our desires and our lousy environment is not: achieving our own goals in a populated environment means considering the goals of others along with the environment we are in. In other words, morality isn't just any greedy hedonism; it's socially responsible hedonism in a world where intentions, methods, and outcomes can be fact-checked. (We could split semantic hairs regarding the "consideration" component, but when individuals extend no moral consideration whatsoever, no useful moral discussion with them can take place (they're a moot point). I prefer to describe the failure (or inability) to consider the needs of others as a breakdown of morality. Informally, it's as if morality itself is an ad hoc system of categorizing the various ways in which we might fail to consider the needs/values/goals/desires of others).
  • Moore, Open Questions and ...is good.
    I agree with you, i think; although I might summarise it somewhat briefly as that in the end, it's what we do that counts. And it is "we" not "I".Banno

    I like both your focus on "do" and on "we".

    My most recent thread attempted to capture the "doing" aspect of any strategic truth (what are moral oughts but strategies/predictions of outcomes?): it's impossible to separate moral [strategic/empirical] soundness from the actual situation and context it is to be employed in.

    And the "we" is critical: morality isn't merely asking "what's best for me?", it's asking "What's best for me in an environment filled with others who each want what's best for themselves?". In other words, morality as a practice begins at extending consideration of some kind to others.
  • Moore, Open Questions and ...is good.
    Allow me to insert my own ideas here on Janus' behalf (he is circling a point that I'm partial to).

    I think we both agree that there is necessarily a relative or subjective component of moral truth (concerning the moral values or principles we use as ethical foundations).

    On the whole, this idea of ultimate, universal, and objective moral truth is nonsensical given the breakdown of exclusive/competing values, but when two or more moral agents are trapped in a room together, does it not make sense to talk about the moral implications of the values which they do happen to share? Within that room, they can come to sound moral agreements even if everyone outside of it doesn't share their values.

    As we're all somewhat trapped together in our respective families, cities, and nations (and ultimately the planet), the strength and consensus of the moral agreements/statements we can make depend on what values are most prevalent within the relevant sphere of moral consideration. If there are indeed some values which are nearly universally present among all individuals and groups, then they tend to make the most functional and persuasive moral/ethical starting points.

    Is this helpful at all?
  • Could the wall be effective?
    My knee-jerk reaction is that it is an unforgivable and unconstitutional act.

    The thinking is that relief funds for Puerto Rico and other disaster areas are going to be re-routed for the border wall. I wonder how much suffering and unnecessary loss of life this could lead to? If those relief funds are meant to secure power, access to hospitals and medicine, access to nutrition, access to education, etc., then the president is trading lives for nothing. In his own words, the "border situation" is not a time-sensitive emergency: "I didn’t have to do this, but I’d rather do it much faster."
  • Could the wall be effective?
    Ladders, tunnels, boats, planes, and more are all readily available technologies, so even if a complete border wall was erected, immigrants would still make it through.

    One of the main problems is that the border is too big to protect with a wall, and once walls are built around the current hot-spots (if they make crossing hard enough) then the coyotes will just find new places to cross.

    It won't slow down drugs; it might slow down human traffic for a time, but probably not significantly.
  • Being Unreasonable
    Is it possible that there are some people who try to be reasonable, but are inescapably unreasonable, at least in some respect?S

    This should be a primary fear of anyone seeking out a philosophy forum.

    Maybe some of us are so afraid of this, we're unwilling/unable to stand the dissonance when confronted?
  • "Free Market" Vs "Central Planning"; a Metaphorical Strategic Dilemma.
    the mixture of both have in history been the most successfull: basically as a free market cannot operate well without institutions (contrary to the silly brainfarts of anarcho-libertarians) and on the other hand central planning cannot be successfull without some freedom of invention and innovation (as obviously central planners cannot know what the future holds and what will be successfull).ssu

    This kind of shows the overall point I'm trying to make: there's no strategic panacea, no foolproof ideology. Not only has a mixture of both been most successful, different mixtures at different times have been most successful.

    I guess what I wanted from this thread is a way to show proponents of political, economic and moral strategies, that given a change in circumstances or available information, their lofty convictions might amount to nothing.
  • "Free Market" Vs "Central Planning"; a Metaphorical Strategic Dilemma.
    Per other comments above, I also think this isn't a very good analogy for the free market versus central planning dichotomy, for a number of reasons, including that free markets enable competing with others in a manner that you can "win" by doing things to make certain others lose. I think it's better to make the competition so that you win the most by helping others win, too. The competition should be how to best ensure that everyone's lives are better/easier/etc.Terrapin Station

    If we focus only on the decision of whether or not to head out in different directions, there's really no way of stipulating what is best. I was hoping that this dilemma would show that the predictive power of strategic principles can be entirely determined by the circumstances they are to be employed in (where unknowns can render strategic decision making impossible), but I left too much room for the imagination by adding boat construction and navigation to the mix.

    Choosing one direction/choosing many directions was meant to parallel with basic central planning/free market principles, not capture them entirely.

    Could you give an example?frank

    The best example is also probably the most broadly useful principle/effect of free markets: determining the value of things; setting prices. As markets and economies grow, they become exponentially more complex and interconnected. Allowing prices to emerge and self-correct naturally from markets is not risk-free or perfect, but it tends to produce useful approximations. The more numerous and complex the relationships a specific good or service has with the greater system, the harder it would be for a central planner to successfully set the value of that good or service.

    That's politics, though. People pour all their fear and belligerence into it. The truth is we have extravagant demonstrations of the folly of embracing either route single-mindedly.frank
    :up:
  • "Free Market" Vs "Central Planning"; a Metaphorical Strategic Dilemma.
    This is where the application of metaphors runs into a problem. Why is central planning represented by few, large boats when the central plan might just as well be to send out many small boats? And if individual preference is central to the free market, isn't it plausible that individuals might prefer to all stick together, for better or worse?Echarmion

    Free markets and centrally planned systems are somewhat handy ideology-laden real world examples that vaguely parallel the dilemma, but my real point of interest is with an aspect of "strategy" itself (how we confuse it with truth):

    We formulate strategic principles as predictive tools, and the very useful ones we retain and sharpen over time (in hopes of cross-application; utility), but we often don't recognize that the utility of a given strategy depends entirely on how well it accommodates the specifics of a given situation. When we lack access to certain data in a given circumstance, we may have no way of discriminating between two different strategic options (or finding the optimal mix of both).

    Political, economic, and ethical schools of thought are filled to the brim with predictions derived from strategic principles, but every time they are applied to a novel situation (instead of deriving the strategy from the novel situation), there's ultimately some degree of fallacy in the presumptive appeal that what works in one situation will work in another.

    Forget about the boat building and navigation, and assume everyone has their boats ready to go, and that a final decision must be made. Should people head in different directions or should they keep together? If there's no way of knowing which direction is best, then the ratio of risk to reward seems to stay constant no matter what strategy or directions they choose. It's like putting all your money on a single number in roulette vs. spreading it out with many smaller bets on many different numbers.
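
    To make the roulette comparison concrete, here is a minimal sketch in Python (purely illustrative; the 37-slot European wheel, the 35:1 single-number payout, and the 37-unit bankroll are assumptions of mine, not details from the thread). It shows that the expected return is the same whether the whole stake rides on one number or is spread across many; only the variance of the outcome changes.

        # Illustrative sketch: "all on one number" vs "spread across many numbers".
        # Assumed for illustration: European roulette (37 slots), a 35:1 single-number
        # payout, and a hypothetical bankroll of 37 units.

        def expected_profit(numbers_covered, bankroll=37.0, slots=37, payout=35):
            stake_per_number = bankroll / numbers_covered
            # Each covered number hits with probability 1/slots and returns
            # stake * (payout + 1); every other stake is lost.
            expected_return = numbers_covered * (1 / slots) * stake_per_number * (payout + 1)
            return expected_return - bankroll

        print(expected_profit(1))   # everything on a single number -> -1.0
        print(expected_profit(37))  # spread evenly over every number -> -1.0
        # The expected loss (the house edge) is identical either way; spreading the
        # bets only narrows the range of likely outcomes, it doesn't change the odds.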

    Regarding free markets and central planning specifically (capitalism/socialism, informally), the best-laid arguments compare present circumstances to historically observed circumstances, but more often than not pundits and politicians are hastily and haphazardly applying the same political/economic answer to every new question (and we the body politic lap it up). As you say, our political [and moral] discussions are largely driven by ideological adherence to what amounts to a strategy.

    We're biased toward the strategies we're most familiar with, even when we might have no rational cause to favor them in a novel situation. Instead of treating our predictive strategies as specific tools for specific problems, somehow we start pretending that they're universal truths.
  • "Free Market" Vs "Central Planning"; a Metaphorical Strategic Dilemma.
    Except all instances of free market Capitalism have been centrally plannedMaw

    In many ways individual autonomy and central authority are incompatible. Free markets might require some central authority to support, stabilize, or insulate them, but the markets themselves work because of the autonomy that individual entities possess. Many aspects of our society are centrally planned, and many aspects rely on the benefits of free markets; my reference to capitalist and socialist theory and practice might generalize complex spectra, but the broad and general point I'm making still stands:

    There is no universal strategic principle or theory that can solve all problems; strategies should be informed and formulated according to the specifics of different situations. The best we can do is suppose the statistical likelihood of risks and outcomes, which is always limited by our ability to detect and compute unknown variables, especially given potentially vast circumstantial differences between individual cases (which we seldom have the time or interest to investigate thoroughly).

    I feel like you're getting a bit too hung up on the comparison to capitalism and socialism. Sure, they could centrally plan their boat construction while each deciding their own direction, but the dilemma pertains to the single decision of whether or not to all head out in the same direction. It's a yes or no proposition; there is no third option.

    Central planning is essential for large scale projects like railroads and highwaysfrank

    But for other large-scale endeavors it can prove ineffective. I would say that generally, the greater the scale and the more unknowns there are, the more difficult it is for central planning to cope with the various logistical demands of large complex systems. But if partial failure (inherent in free market principles) is not acceptable, then relying on markets tends to become less attractive.

    The funding and execution of scientific research is an interesting example. At present, a combination of government and private funding drives ongoing research in different areas, which are determined by a combination of market forces and planned initiatives. Specific academic entities and operations/endeavors themselves have a mix of central planning and internal autonomy that serves their purposes/needs, and globally they form a marketplace of scientific knowledge and research potential. Without that marketplace, our ability to innovate would be hamstrung, but without some regulation (e.g. nuclear research, bio-ethics, etc.) we would be courting too much risk. The optimal strategic mix can only be determined with experience of a given situation.

    We sort of know where central planning is needed and where competition is best.frank

    Sort of, I agree. But we too easily conflate strategic success in one situation with the universal applicability of the successful strategy.
  • "Free Market" Vs "Central Planning"; a Metaphorical Strategic Dilemma.
    You don't see the similarity?

    Granted, the metaphor has nothing to do with property rights, taxation, or market regulation, but it's meant as a simplified analogy, not a facsimile. If central planning is present in "socialist" theory/practice, and if individual economic autonomy is present in "capitalist" theory/practice, then the use of the terms is correct, and the analogy is apt.
  • Moore, Open Questions and ...is good.
    I would try to break down the different truth requirements of moral propositions:

    If moral propositions imply actions, can we treat them from the perspectives of validity and soundness?

    Actions are morally valid if they follow from the moral propositions that imply them.

    Actions are morally sound if the moral propositions that imply them are true.

    There's no apparent room for subjectivity with regard to validity, but the truth of moral propositions, the premises of our moral deeds, is famously vulnerable to variation.

    Following the line of reason @Wallows begat, instead of looking at moral actions as deducible from a set of universal tenets, we could look at morality as an endeavor to negotiate and compromise through the conflict that naturally emerges from those varied and sometimes conflicting premises.

    If we can agree on premises as interacting individuals, or interacting groups, then we can at least ensure the validity of our moral acts. Where we disagree or run into conflict, we're left to compromise (or not) in whatever way we think best serves our goals. In these cases, moral arguments tend to take an inductive form where they're strong or weak depending on how well they appeal to existing values.

    Rather than wonder what kind of metaphysical setup might give rise to objectively true moral propositions, I prefer to stop the buck and just accept the values that we do have. If we assume morality ought to serve human values, we can still derive appropriate actions even in the face of conflict/variation, it's just a whole lot messier (i.e: probabilistic).
  • Welcome to The Philosophy Forum - an introduction thread
    PM a moderator and they may change it for you. (a one-time deal, so choose wisely)

    Welcome to the forum!
  • How do you get rid of beliefs?
    The best way to get rid of beliefs is to expose them (to data, pressure, criticism, and tests, ideally).

    If a belief persists despite constant and concerted attack (by one's self and others), then congratulations, you've found an approximate truth.

VagabondSpectre
