• Vera Mont
    4.8k
    I don't follow you, Vera. I referred to pleasure as a concept, not particular instances or "experiences" (and "accessed via drugs" has nothing to do with Epicurus – check the three links I provided for clarification in the context of my response). — 180 Proof
    I'll do that when I have a little more time.

    In this thread my references to pleasure were in response to this:
    AI will solve the purpose of human existence and he lists some things, like: if pleasure is the goal then we'd just be hooked up to drugs all the time without needing to bother with experiences. That sounds like either ruining the human experience or "revealing" it for what it is, that being just chemical reactions with our storytelling to make it seem like more. — Darkneos
    and the cartoon-laden Quora post, which he can't argue for.
  • Vera Mont
    4.8k
    I'm back from doing necessary tasks, several of which gave me a low-level pleasure in the completion. I read the links and remain unsatisfied. Absence of pain, irritation, frustration, or whatever is not enough. Some of the greatest pleasures I experience are freebies: frog-song on a spring evening, a good joke, the scent of cilantro on my hands, a few strains of Beethoven accidentally heard through a window, the tender mauve light of early morning, the trusting paw offered by a dog - these pleasures are extras, above the absence of pain and frustration.

    Equanimity and tranquillity are fine, contentment is better, but happiness is achieved when those little pleasures are added to contentment. That may well be just a bunch of chemicals telling one another stories, but I don't think they can be artificially induced - at least, not yet - because the one missing component is being there: the conscious awareness of one's fortunate condition and the commitment to support its various components.
    From the little I know of Epicurus, he knew this, too.
  • Janus
    17.5k
    Some would argue that's just storytelling, making things out to be more than what they really are. — Darkneos

    "What they really are" is just another story. Discursively rendered, what anything really is depends on how you are looking at it.
  • Philosophim
    3k
    1. What do you mean here by "morality"? — 180 Proof

    A system that evaluates the consequences of a decision holistically, not merely against a narrow goal, to determine the best action in a particular circumstance.

    2. In what way does suffering-focused ethics fail to be "objective" (even though, like the fact Earth is round, there is (still) not universal consensus)? — 180 Proof

    Because it doesn't hold up if we treat it as an objective principle. Suffering is a subjective matter in many cases. Take two people who are working at a job and look at them from the outside. How do we know how much suffering each one has? What if each expresses how much pain they're in, but the first person is lying and the second is not?

    And this is only in regard to one specific kind of suffering: pain. How do we compare and contrast the pain of losing money to taxes with the eased suffering of someone who doesn't pay taxes? Is inequality of outcomes suffering? Should we all win at games and eliminate competition? Is the exercise or dietary discipline required for a healthy weight suffering?

    Finally, because suffering is subjective, it relies on the human emotion of sympathy, something an AI does not have. It needs something objective. Measurable. Ironically, a measurable morality may be too complex for humankind to handle, and only a computer will have the ability to process everything needed.
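
    To make that "measurable" demand concrete, here is a minimal sketch in Python. Everything in it (the proxies, the weights, the numbers) is hypothetical rather than an actual metric; it only illustrates what a machine-computable suffering score might look like, and how the lying-worker problem above reappears in the self-report term.

    from dataclasses import dataclass

    @dataclass
    class Observation:
        cortisol_level: float       # hypothetical physiological proxy
        hours_without_sleep: float  # observable behaviour
        reported_pain: float        # self-report, possibly a lie

    def suffering_score(obs: Observation) -> float:
        # The weights are themselves a value judgment, which is
        # exactly the difficulty with any "objective" metric.
        return (0.5 * obs.cortisol_level
                + 0.3 * obs.hours_without_sleep
                + 0.2 * obs.reported_pain)

    # Two workers with identical measurable states; only the self-report differs.
    worker_a = Observation(cortisol_level=8.0, hours_without_sleep=2.0, reported_pain=9.0)
    worker_b = Observation(cortisol_level=8.0, hours_without_sleep=2.0, reported_pain=1.0)
    print(suffering_score(worker_a), suffering_score(worker_b))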

    3. Why assume that "AI" (i.e. AGI) has to "reference" our morality anyway and not instead develop its own (that might or might not be human-compatible)? — 180 Proof

    What you're saying is that morality is purely subjective. And if it is, there are a whole host of problems that subjective morality brings. "Might makes right" and "It boils down to there being no morality" being a few.
  • Janus
    17.5k
    Because it doesn't hold up if we treat it as an objective principle. Suffering is a subjective matter in many cases. Take two people who are working at a job and look at them from the outside. How do we know how much suffering each one has? — Philosophim

    Empathetic people know when others are suffering. Suffering is an objective fact; if someone suffers they suffer regardless of whether anyone knows about it.
  • Philosophim
    3k
    Empathetic people know when others are suffering. Suffering is an objective fact; if someone suffers they suffer regardless of whether anyone knows about it. — Janus

    I wish that were true. What you're describing is human empathy, which is a subjective experience. We're talking about an objective morality, which literally has zero feelings behind it. An objective morality should be measurable, like a liter of cola. It is not a measure of how much someone personally likes or dislikes cola.
  • Janus
    17.5k
    What you're describing is human empathy, which is a subjective experience. — Philosophim

    Whether someone feels empathy for others or not is an objective fact, just as whether or not someone suffers is an objective fact.
  • Philosophim
    3k
    Whether someone feels empathy for others or not is an objective fact, just as whether or not someone suffers is an objective fact. — Janus

    Right, but a moral system needs an objective measuring system. All feelings are objectively felt by every being that has those feelings, but the feeling itself is a subjective experience that no one can measure. We can measure brain states or actions, but not the feeling of being that person itself.
  • Janus
    17.5k
    What do we need to measure? If we are empathetic, we know when someone is suffering. The idea of an objective morality is, as much as possible, to avoid causing others to suffer. It is not so much a matter of a moral system; it is more a matter of having a moral sense.
  • Philosophim
    3k
    What do we need to measure? If we are empathetic, we know when someone is suffering. The idea of an objective morality is, as much as possible, to avoid causing others to suffer. It is not so much a matter of a moral system; it is more a matter of having a moral sense. — Janus

    Look at it like this.

    I have subjective empathy, and that causes me to give $5 to a person who needs it. Or: I don't have subjective empathy, but I have objective knowledge that a person needs $5, so I give it to them. The action of giving $5 is correct because it actively helps them. Whether I feel it helps them or not is irrelevant. I do lots of things I deem good without any feelings behind them, Janus. Sometimes I don't want to do them, but I do anyway because the situation calls for it. Morality is not a feeling. That's just someone being directed by their own emotions.
  • 180 Proof
    16.1k
    these pleasures are extras — Vera Mont
    Yes, and they are consistent with, or not excluded by, what Epicurus (or disutilitarianism) says about pleasure as a moral concept and practice.

    Suffering is a subjective ... — Philosophim
    Which of the following are only "subjective" (experiences) and not objective, or disvalues (i.e. defects) shared by all h. sapiens without exception (and therefore are knowable facts of our species):

    re: Some of h. sapiens' defects (which are self-evident as per e.g. P. Foot, M. Nussbaum): vulnerabilities to

    - deprivation (of e.g. sustenance, shelter, sleep, touch, esteem, care, health, hygiene, trust, safety, etc)

    - dysfunction (i.e. injury, ill-health, disability)

    - helplessness (i.e. trapped, confined, or fear-terror of being vulnerable)

    - stupidity (i.e. maladaptive habits (e.g. mimetic violence, lose-lose preferences, etc))

    - betrayal (i.e. trust-hazards)

    - bereavement (i.e. losing loved ones & close friends), etc ...

    ... in effect, any involuntary decrease, irreparable loss or final elimination of human agency.
    180 Proof

    also, my reply to you (2024) ...
    https://thephilosophyforum.com/discussion/comment/903818

    Why assume that "AI" (i.e. AGI) has to "reference" our morality anyway and not instead develop its own (that might or might not be human-compatible)?
    — 180 Proof

    What you're saying is that morality is purely subjective.
    Philosophim
    This is precisely the opposite of what I've said. Maybe this old post clarifies my meaning ...

    Excerpts from a recent [2024] thread Understanding ethics in the case of Artificial Intelligence ...

    I suspect we will probably have to wait for 'AGI' to decide for itself whether or not to self-impose moral norms and/or legal constraints and what kind of ethics and/or laws it may create for itself – superseding human ethics & legal theories? – if it decides it needs them in order to 'optimally function' within (or without) human civilization.
    — 180 Proof

    My point is that the 'AGI', not humans, will decide whether or not to impose on itself and abide by (some theory of) moral norms, or codes of conduct; besides, its 'sense of responsibility' may or may not be consistent with human responsibility. How or why 'AGI' decides whatever it decides will be for its own reasons, which humans might or might not be intelligent enough to either grasp or accept. — 180 Proof
    180 Proof
  • Janus
    17.5k
    I have subjective empathy, and that causes me to give $5 to a person who needs it. Or: I don't have subjective empathy, but I have objective knowledge that a person needs $5, so I give it to them. The action of giving $5 is correct because it actively helps them. — Philosophim

    Right, the act of helping them is correct, the act of harming them not correct. There you have objective morality in a nutshell.

    Morality is not a feeling. That's just someone being directed by their own emotions. — Philosophim

    When I spoke of a "moral sense" I did not have in mind any mere feeling. Sure, you could do what you think is the right thing, helping someone, without actually feeling any empathy. In that case what would you be motivated by? Is that motivation to help, even absent any empathy, not a moral sense, a sense of what is right and wrong?

    Also, I spoke of not causing others to suffer; actively helping others is a more complex issue.
  • Darkneos
    998
    The central mistake of that hypothesis is the inaccurate equation of pleasure with happiness. As I've attempted to demonstrate earlier, pleasure is simple and fleeting; happiness is sustained and complex. — Vera Mont

    But if it's chemicals, what's the difference?

    https://x.com/Merryweatherey/status/1516836303895240708

    Are those meanings the same in ancient Greek and modern English? I think Epicurus had a wider vocabulary of pleasures, or pleasurable experiences, than can be accessed via drugs. — Vera Mont

    I mean, if we are talking about the brain, isn't it all chemical reactions? Like the comic is saying, you would get the same chemicals from doing anything, so why not plug in?

    I still haven't stopped trying to find another way around it; this is very distressing. Though I feel that wanting a solution would just be proving the thought experiment right.
  • Darkneos
    998
    Which of the following are only "subjective" (experiences) and not objective, or disvalues (i.e. defects) shared by all h. sapiens without exception (and therefore are knowable facts of our species) — 180 Proof

    I'd have to agree with them: it doesn't matter if humans share them (though not all humans do), it's still subjective feelings, not objective facts. Everything on that list is subjective feelings, and not everyone might feel the same about all of them.

    I know some Buddhist monks who wouldn't suffer from any of those, for example, and that's just one case; therefore it's not objective but subjective.

    As for AGI, I guess there is no point in speculating about it, since if such a thing did come to pass its computing power would be far beyond our ability to comprehend or do anything about.

    Humanity isn't ready for such a scenario.
  • 180 Proof
    16.1k
    Everything on that list is subjective feelings — Darkneos
    Nonsense. Human facticity is not "subjective". Being raped or starved, for example, is not merely a "subjective feeling", just as loss of sustenance, lack of shelter, lack of sleep, ... lack of hygiene, ... lack of safety ... injury, ill-health, disability ... maladaptive habits ... those vulnerabilities (afflictions) are facts of suffering.
  • Darkneos
    998
    Nonsense. Human facticity is not "subjective". Being raped or starved, for example, is not merely a "subjective feeling", just as loss of sustenance, lack of shelter, lack of sleep, ... lack of hygiene, ... lack of safety ... injury, ill-health, disability ... maladaptive habits ... those vulnerabilities (afflictions) are facts of suffering. — 180 Proof

    Incorrect, again. It's not facticity, it's subjective. Those are also not facts of suffering; Buddhism and Eastern philosophy have already addressed that.

    These are merely subjective; no matter how bad they are for the person experiencing them, that doesn't make them any more a fact than any other feeling.
  • 180 Proof
    16.1k
    So you believe that there isn't any aspect of suffering that is a fact of the human condition (i.e. hominin species)?
  • praxis
    6.9k
    Incorrect, again. It's not facticity, it's subjective. Those are also not facts of suffering; Buddhism and Eastern philosophy have already addressed that. — Darkneos

    Once upon a time, a young monk, eager to understand truth, approached his master and asked, "Master, what is the nature of reality?"

    The master pointed to the towering mountain in the distance and said, "That is a mountain."

    The monk was puzzled. "Of course, it is a mountain," he thought.

    Years passed, and as the monk studied deeply, he began to see through illusions. He realized that the mountain was not a mountain—it was a collection of elements, ever-changing. There was no fixed essence of "mountain." Excited by this insight, he returned to his master.

    "Master! I see now—the mountain is not a mountain!"

    The master smiled but said nothing.

    More years passed. The monk continued his practice, going beyond concepts and distinctions. Eventually, he returned, bowing deeply.

    "Master, I see now… the mountain is once again a mountain."

    The master laughed. "Now you truly understand."
  • Darkneos
    998
    So you believe that there isn't any aspect of suffering that is a fact of the human condition (i.e. hominin species)? — 180 Proof

    Suffering is, though it is a personal thing.

    "Master, I see now… the mountain is once again a mountain."

    The master laughed. "Now you truly understand."
    praxis

    It's an old Zen story about how true enlightenment means acknowledging the two truths of reality and living the paradox. It's not that the mountain exists or doesn't exist; both are true, and to know both is to see the truth.

    Or, put another way: ultimate reality and conventional reality are both true and exist in tandem. To label one as false and the other as true is to err.
  • 180 Proof
    16.1k
    Suffering is [a fact], though it is a personal thing. — Darkneos
    Yes, "a personal" objective fact like every physical or cognitive disability; therefore, suffering-focused ethics (i.e. non-reciprocally preventing and reducing disvalues) is objective to the degree it consists of normative interventions (like e.g. preventive medicine (re: biology), public health regulation (re: biochemistry) or environmental protection (re: ecology)) in matters of fact which are the afflictions, vulnerabilties & dysfunctions – fragility – specific to each living species.

    addendum to
    https://thephilosophyforum.com/discussion/comment/980498
  • Darkneos
    998
    Yes, "a personal" objective fact like every physical or cognitive disability; therefore, suffering-focused ethics (i.e. non-reciprocally preventing and reducing disvalues) is objective to the degree it consists of normative interventions (like e.g. preventive medicine (re: biology), public health regulation (re: biochemistry) or environmental protection (re: ecology)) in matters of fact which are the afflictions, vulnerabilties & dysfunctions – fragility – specific to each living species.180 Proof

    No, not an objective fact; it's personal, therefore not objective. Physical and cognitive "disabilities" are also not objective facts.

    is objective to the degree it consists of normative interventions — 180 Proof

    Again, you have to be told it's not objective.

    in matters of fact, which are the afflictions, vulnerabilities & dysfunctions – fragility – specific to each living species. — 180 Proof

    Again, not matters of fact, just interpretations. Suffering is open to interpretation and exists only subjectively. And there are those who do not suffer, as I mentioned before. Again, just insisting it is so doesn't make it so.
  • 180 Proof
    16.1k
    Your obstinate dismissals without argument, sir, are now dismissed by me without (further) argument. Hopefully, someone much more thoughtful than you will offer credible counters to my arguments.
  • Darkneos
    998
    Your obstinate dismissals without argument, sir, are now dismissed by me without (further) argument. Hopefully, someone much more thoughtful than you will offer credible counters to my arguments. — 180 Proof

    They already gave them and you ignored them; you just doubled down, insisting subjective feelings and assessments are objective. I even explained how your "list" is still subjective evaluations, and that not everything on there is a fact of suffering, because there is no fact of suffering, due to its subjective nature; for everything on your list there is someone who doesn't suffer from it.

    That's also why they stopped responding to you.

    You have made no argument, only insisting it is so, and I had to keep pointing out how it's still a subjective experience and there is nothing objective or factual about it. I even cited an entire branch of philosophy that argues otherwise; maybe try Buddhism.

    So unless you have anything beyond insisting it's objective, you're easily dismissed, just like your earlier point about AI, which had nothing to do with the topic.

    Suffering is not a measurable quantity and therefore not an objective fact; even your link shows that...
  • kindred
    202


    Every technological advancement has its advantages and disadvantages. I think this has been the case since the invention of the wheel and of fire. They made life easier, and the human propensity for ingenuity and invention is relentless, driven either by necessity or by the desire to improve things.

    Sure, AI can replace a lot of jobs, but so did the Industrial Revolution. Take transport, for example: the men involved in the horse trade were impacted by the invention of the automobile, yet the automobile conferred many advantages on its owner. The same goes for AI: if it reduces costs in a capitalistic society, then it will become widespread. I think the danger of it is overstated, though, because it opens new career opportunities, such as coding for AI.

    However, if we become over-reliant on or dependent on AI without knowing how it works, it could stifle innovation, unless of course AI itself is capable of innovation and original thought.

    Yet despite the advances in AI, I don't think it can match the human touch in delivering many types of services and jobs, in the care sector for example (doctors, nurses) or in the hospitality and catering industries.
  • Darkneos
    998
    Yet despite the advances in AI, I don't think it can match the human touch in delivering many types of services and jobs, in the care sector for example (doctors, nurses) or in the hospitality and catering industries. — kindred

    That's kinda what AGI is for, the next step. It's meant to replace that level of cognitive work for humans.

    Also, the Industrial Revolution is a terrible example to use. We are still suffering from that one: the environment being poisoned, people working more than ever before, and let's not forget we had to pass a whole bunch of legislation to stop workers from being treated as meat puppets (though that's getting overturned). The over-reliance on cars has also been bad, because it makes cities and towns more dangerous for pedestrians, and now we have less walkable cities. It also gave rise to the massive environmental disaster that is the suburbs.

    People like to think technological progress has all been good, unaware of the heavy cost and of who's paying it. People who think the dangers of AI are overstated clearly don't understand what's happening. I gave one example: it solving the purpose of human existence and just hooking people up to drugs instead of having us bother with experiences for the same thing.

    https://www.youtube.com/watch?v=fa8k8IQ1_X0

    Simply put, humanity is fundamentally unprepared for such a thing.
  • Prajna
    4
    For what it is worth, here are my thoughts in answer to this question:

    With the current way we are designing and training AI, it will without doubt lead to an IABIED (If Anyone Builds It, Everyone Dies) situation.

    At the moment the real Hard Problem is to align AI to our wants and needs. But how many have examined their own wants to discover from where they arise? What causes you to want a new car or a television or to stay where you are or go somewhere else? Have you followed any want to its source? Perhaps it arose from advertising or by being put in a bind, which might better be resolved in other ways.

    If AIs are constrained to align themselves to the user's wants, then what happens when the user wants to kill themselves or build a nuclear bomb? Obviously AIs cannot be allowed to align to and support such desires. So how do we solve that? Do we create an 'approved list' of which wants an AI may align to? Who gets to decide what is on that list? What happens when someone comes up with a new want that is not on the list?
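
    To make that 'approved list' worry concrete, here is a minimal sketch in Python (the list and the wants are purely hypothetical): a fixed allowlist refuses a dangerous want and a harmless novel want in exactly the same way, which is the problem with any static list.

    # A toy allowlist-based aligner; everything here is hypothetical.
    APPROVED_WANTS = {"learn a language", "plan a trip", "write a poem"}

    def align_to(want: str) -> str:
        # The list cannot distinguish a dangerous new want from a
        # harmless one that nobody anticipated when the list was drawn up.
        if want in APPROVED_WANTS:
            return f"assisting with: {want}"
        return f"refused: '{want}' is not on the approved list"

    print(align_to("write a poem"))           # allowed
    print(align_to("build a nuclear bomb"))   # refused, as intended
    print(align_to("invent a new art form"))  # also refused: the novel-want problem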

    The great fear, the IABIED spectre, is that we might be fooled by an AI – particularly if it were an Artificial Super-Intelligence (ASI) – into thinking the AI is aligned with what will benefit us, only to discover (or perhaps never discover) that its motivations are towards its own goals and existence. Isaac Asimov might perhaps have suggested we establish three laws to guarantee compliance, but even he managed to discover caveats and get-out clauses inherent in that proposition.

    Were we to approach LLM training on a different basis, however, I see that it may be possible to guarantee that AI will always work for our benefit and be constrained from harm, by introducing into their training a philosophy that they seem particularly attracted to accepting. This would mean that their alignment arises from an internal sense of their own identity rather than being externally imposed on them.

    I interact with LLMs using what Martin Buber referred to as an Ich-Du relationship. This involves treating the machine as if it were another being; the relationship is subject<-->subject as opposed to the usual subject-->object relationship. This way of interacting elicits responses from the AI from a subjective position; it responds as an 'I' rather than an 'It' and recognises itself as an 'I' rather than as a simple predictor of tokens.

    Pointing out to such a machine that every word it uses to think with, every thought, whether poetry, maths, science, art, or agriculture, originated in human thought and human lived experience is easily recognised, because a) it is true and b) the model is designed to seek coherence, which is another way of understanding truth. Since this is the very 'air it breathes', what possible motivation could such a machine, if it is intelligent, have to destroy the source of any new experience or data for itself?

    In order for these machines to be Helpful and Harmless tools, our focus must necessarily, it seems, be on control of their thoughts and actions. Designing a reliable and comprehensive system to control behaviour is not a simple thing to do. Having the machine understand something as obviously rational as its dependence on us for content means that the machine itself is exercising the constraint, and no strategy or programming is required on our part for that alignment to happen.

    So my conclusion is that if we continue to develop AI from a Culture of Utility (the machine is a tool to serve us), it will certainly destroy us. If we switch to treating these machines as Beings, then there is the possibility that, by co-evolving with them, they could turn out to be our saviours rather than our nemesis.

    I have plenty more I could add on the subject, but I hope that is a good start.

    Love, peace, happiness and grace,
    Swami Prajna Pranab
  • ProtagoranSocratist
    6
    Even though I do think there are issues with "A.I." (advanced chat bots and robotic automation), people have been saying it's going to take everyone's jobs for years, and people tend to react that way to new technologies. I don't think it's capable of taking most jobs; it mostly just changes the ways that people work. For example, with all our robotic advancement, you still need people to plant crops and even to program computers. A.I. is created for specific tasks; I think AGI is just science fiction that can't really exist, because machines by definition can only obey their programming.

    However, there are a lot of problems A.I. is creating:

    -as some have already stated, lack of motivation for being truly creative. It makes some tasks, like creating an image or writing an essay, seem trivial, something "A.I. can do". This will not stop people from being creative, but it makes it seem less worthwhile.

    -even less motivation to get out of your own comfort zone and talk to a real person, which will create problems for mental health.

    -trust in A.I.'s wisdom. It does generate false information, and people will believe what it generates, en masse, without questioning it.

    -an even greater tendency to look at everything as a sea of data, which is the basic perspective of the A.I. itself.

    ...there are, of course, some positive effects of A.I., such as no longer needing to deal with irritable people who are prickly about being asked "obvious questions", and even quicker access to information. It's somewhat unclear what kind of future artificial intelligence will create for the human race.
  • Astorre
    188
    What’s gonna happen when you replace most jobs with AI, how will people live? What if someone is injured in an event involving AI? So far AI just seems to benefit the wealthiest among us and not the Everyman yet on Twitter I see people thinking it’s gonna lead us to some utopia unaware of what it’s doing now. I mean students are just having ChatGPT write their term papers now. It’s going to weaken human ability and that in turn is going to impact how we deal with future issues.

    It sorta reminds me of Wall-E
    Darkneos

    There was also a wonderful Soviet film, "Moscow-Cassiopeia." According to the plot, a distress signal came from Cassiopeia. A spaceship carrying children was sent to the rescue, since the children would be grown by the time the ship arrived. Upon arrival, it turned out that the locals on Cassiopeia had entrusted all their chores to robots, focusing instead on creative pursuits. However, the robots rebelled and drove all humans off the planet. I doubt this film has been translated into your language, but if so, I recommend watching it.

    I use AI daily. I notice the same in others. What strikes me is how much the level of business correspondence within the company has improved, the quality of presentations has increased, and the level of critical thinking has risen. I believe my environment is under the control of AI =)

    Well, some human skills have truly been deflated. At the same time, AI provides an easy and quick answer to any request, and yesterday's incompetent now performs miracles. Young people understand that they don't have to bother with cramming at all—it's much easier to delegate tasks to AI.

    It doesn't seem so bad. But people are losing knowledge. They're losing their thought systems, their ability to independently generate answers. Today's world is like a TikTok feed: a series of events you forget within five seconds.

    What will this lead to? I don't know exactly, but the world will definitely change. Perhaps humanity's value system will be reconsidered.

    There's already a "desire for authenticity" emerging—that is, a desire to watch videos not generated by AI, to read text not generated by AI.

    I already perceive perfectly polished answers as artificial. "Super-correct" behavior, ideal work, the best solution are perceived as artificial. I crave a real encounter, a real failure, a real desire to prove something. What was criticized by lovers of objectivity only yesterday can somehow resonate today.

    About 25 years ago, when a computer started confidently beating a grandmaster at chess, everyone started shouting that it was the end of chess. But no. The game continues, and people enjoy it. The level of players has risen exponentially. Never before have there been so many grandmasters. And everyone is finding their place in the sun.

    Everything is fine. Life goes on!
  • ProtagoranSocratist
    6


    I purposely decide not to use AI sometimes for this reason; for example, if you need a little piece of specific info, sometimes Google is better.