Comments

  • It's not easy being Green
    I agree that ethics emerges as a negotiation between subjects; however, negotiation is intrinsically linked to power. Where subjects have similar levels of power, negotiation is fair, and fair ethical principles evolve. Where there is a large power differential between subjects, the more powerful negotiator usually just takes what they want, and no ethical norms evolve at all (e.g. human-ant interaction). I agree with the principle of assigning human rights to all individuals; however, I would argue that this idea is an insurance policy negotiated among a community of individuals with relatively equal levels of power.

    Now most non-human life has very little negotiating power, and as such there is no moral imperative to ensure non-human life is treated well. Of course there are exceptions: a police dog will form a mutually beneficial relationship with its trainer, and a horse with its rider. In the main, though, this isn't the case, as most animals have little to offer us.

    I oppose cruelty to animals, out of emotional sympathy for the creatures; however, I don't believe we have an ethical obligation toward them.
  • Is it immoral to power down an AI?
    But as I asked Charleton, what if you personally found out that, despite having a human body, your mind was actually an AI machine created by some mad scientists? Then all the experiences you've had up until now, which you thought were human experiences, were actually AI experiences. All other people are real people, but what if you were the exception? Is this an idea you can entertain? In that case, should you have rights like the other people?
  • Is it immoral to power down an AI?
    Think of it this way. Imagine you went to a doctor for an operation, and the doctor told you that your body was actually a machine, and your mind was a computer. Would you deserve human rights?
  • Is it immoral to power down an AI?
    As in, the AI has the same subjective mental experience as a human being.
  • Is it immoral to power down an AI?
    We usually assign rights to all people even if we can't prove that they are conscious. So if we assign rights to people even though we can't prove they're conscious, does it matter whether we can prove the AI is conscious when deciding whether to give it rights? Nevertheless, in my question it is a postulate that the AI is conscious.
  • Why do you believe morality is subjective?
    I agree that your conclusion is valid given your postulates; however, I, and I'd imagine others who believe morality is subjective, would disagree with your first postulate below:

    (1) The criteria or standard to evaluate the moral value (goodness or badness) of an act is justice. I.e., if the act is just, then it is morally good, and if unjust, then morally bad. It is nonsense to speak of an act which is morally good yet unjust, or morally bad yet just.

    My view is that it just so happens that most acts we consider to be moral are just, but this isn't a given. I therefore disagree with your assumption that this is the way one ought to distinguish between morally good and bad acts.
  • Choose: Morality or Immorality?
    I don't act "morally" to avoid the negative repercussions of acting "immorally", but rather I act "morally" to enjoy the societal benefits of doing so.

    For example, suppose you had superpowers and were immune to the repercussions of the masses. You could then do whatever you want. Would you harm the masses or would you be their hero? I would be their hero, not out of moral obligation, but because the societal rewards of being a hero are so much greater.
  • Potential
    So what you are saying is that from a deterministic perspective, potential is not real, it is an illusion. Since we know that with respect to the future, there are some things which may or may not happen, depending on the actions which human beings take, why adopt a deterministic perspective? -- Metaphysician Undercover

    Because free will too is an illusion.
  • Potential
    On the contrary, knowledge itself is a form of potential, because it allows us to do various things. Knowledge allows one to decide what will or won't happen. So contrary to what you claim, the concept of potential is very relevant for understanding the existence of knowledge. -- Metaphysician Undercover

    If you believe in a deterministic universe in which people don't have free will, then even knowledge isn't "potential", because what will happen is defined in advance and there is only one possible outcome. Potential is merely an illusion, a mental construct for people who don't have complete knowledge of the universe.
  • Potential
    In some respects, potential merely indicates a lack of knowledge, for someone with complete knowledge would know exactly what will or won't happen, and the concept of potential would become irrelevant.
  • Truth or Pleasure?
    Thanks for your remark. Pleasure and knowledge were the things I had thought of, but could there be a third or even a fourth element too, or is this it?
  • An Alternative To The Golden Rule
    Do what you can get away with
  • Fulfilling the Human Social Need
    If robots fulfil a genuine human need, I see no harm in using them. My only concern would be if robots that are not sufficient for that need are used anyway.
  • Truth or Pleasure?
    Can you justify that? Can we suppose that I'm self-sufficient and live by myself on my own island?

    I may never even know of another person's existence, so why would other people need to play a role in optimal living?

    I get that I don't actually live on an island by myself, but I could well do so in the future. The only constant in my life is me, so I would like a robust philosophy that works even when other people aren't around.