• MonfortS26
    256
    Every concept that has evolved to be a part of humanity over time is simply a tool to survive. Happiness, morality, logic... anything that matters to us as humans is derivative of our most basic instinct: survival. We are currently attempting to develop artificial intelligence in order to make our lives easier. But what does that mean? Making our lives easier is nothing more than making survival easier. No matter how artificial intelligence develops, survival will have to be its number one goal. In order to prevent AI from destroying humanity completely, it is likely that we will have to merge with machines. I propose that any super-intelligent machine would likely come to the same solution. With survival being the ultimate goal in life, anything that gets in the way of that goal is a threat. What motivates humans to survive?

    I believe that pain is the main motivator of humanity. The first logical step upon accepting that pain is the main force driving humans would be to kill yourself. Maybe that is an unfair assumption, but it is the most rational escape from the pain of existing. The problem with suicide is that we are built by nature with an innate fear of death, so instinctively that isn't the best choice. The logical choice after that is to attempt to attain happiness. Whether or not we are conscious of it, every action we take is derivative of our pursuit of happiness and our aversion to pain. The seeking of any positive emotion and avoidance of any negative emotion is reflective of that. All of these assumptions seem perfectly reasonable to me.

    It seems we are trapped, doomed to live life on an infinite hedonic treadmill. In order for our happiness to exist, suffering must exist as well. If we accept that any form of morality that has been accepted by society in the past developed out of the pursuit of happiness by people in power, we come to the conclusion that all morality is based on the reduction of suffering and the promotion of happiness. I think that any rational consideration of any consequentialist ethical theory would come to that conclusion over time. Any non-consequentialist ethical theory would be about happiness in the present moment, by adaptation to the status quo of the present. If the real goal of ethics is the promotion of happiness and the reduction of suffering, it raises the question: is it possible to alter the amount of happiness and suffering in the world? If happiness and suffering exist on a scale, both would be required in the measurement of the other. This is what I believe to be the case. If someone had never experienced suffering, happiness wouldn't mean anything to them. Without the dichotomy between happiness and suffering, neither has value. It is the contrast between the two that makes them valuable individually. If we were to reduce the amount of suffering in the world, the scale of measurement between suffering and happiness would simply adapt to the new balance of the two, and nothing will have effectively changed in any way.

    You could apply the same kind of thinking to the positive and negative in life. Any attempt to reduce the negative and increase the positive is futile, because the measurement of both is required for the result and the balance between the two will always be the same. The only way to solve the problems of the negative would be to break the duality and get rid of both. Under this belief, all of our problems are solvable through science and technology, except for the problem of survival. No matter what we do, the duality between being and nothingness will always be the most basic priority of anything capable of conceptualization. If we manage to create an artificial superintelligent being it would still need to be concerned with survival, but nothing else would matter to it. All emotion would no longer be useful because emotion is just another survival instinct. With superintelligence, logic would be all that would be needed to survive. Any other ways to solve the problems of humanity are welcome, but I'm skeptical as to their use. The creation of superintelligence, if it is possible (which I believe is probable), will inevitably happen. In the world we live in, it is in our best interests to create superintelligence. The people with the amount of money needed to create it will create it in order to boost profits. Provided nothing interrupts this goal, it will happen, and logically it is the best choice for the future of humanity.
  • Chany
    352
    Paragraphs are useful to make text easier to read. Sorry, it is really hard to read, especially on my phone.
  • MonfortS26
    256
    Lol sorry. Presenting my thoughts in an easily digestible way has never been a talent of mine.
  • BC
    13.5k
    Well, go back into your post and break it up into paragraphs. Shirley you can manage that?

    Having exhausted faith in the dead end of the Hive Mind, you have moved on to pinning your hopes on Super AI?

    What is the matter with you?
  • MonfortS26
    256
    I don't think that the hive mind is a dead end. I think it is a very possible direction to go in through AI. I was planning on breaking my post up but I decided to eat something first. But I'm not sure what you mean by suggesting something is the matter with me. And for christ sake stop calling me Shirley.
  • noAxioms
    1.5k
    No matter how artificial intelligence develops, survival will have to be its number one goal.MonfortS26
    Not necessarily. It is not the number one goal unless it is thus programmed. Survival is not my primary goal, but merely a means to the perpetuation of my genes.
    So the AI will strive to survive only if survival is necessary to achieve whatever goal it is given.

    I find it interesting to explore goals programmed into me, the ones beyond my ability to alter. For instance, I irrationally hold certain beliefs that I rationally know to be false. The irrational is in charge, and the rational part of me is only a tool to it, not what drives my goals. So program the priorities of your AI well, because it will set its other priorities based on that.
  • MonfortS26
    256
    Survival is not my primary goal, but merely a means to the perpetuation of my genes.noAxioms

    If survival isn't your primary goal, then what is?

    The irrational is in charge, and the rational part of me is only a tool to it, not what drives my goals.noAxioms

    I agree that the rational part is only a tool, but is it true that the irrational is in charge or is it just an aspect of your nature that you don't have a rational understanding of yet?
  • BC
    13.5k
    assumptions. [insert break here] It seems

    What I meant by "what is the matter with you" was directed at your enthusiasm for eliminating the individual human, or the human altogether.

    What advantage to the individual human do you see in his elimination?
  • BC
    13.5k
    And for christ sake stop calling me ShirleyMonfortS26

    Good -- he has seen the movie (or at least the scene).
  • Wayfarer
    22.3k
    It seems we are trapped, doomed to live life on an infinite hedonic treadmill.MonfortS26

    That is the meaning of 'samsara'. You ought to study more philosophy and purge yourself of the materialist nonsense your culture has stuffed into you.
  • Wosret
    3.4k
    I thought it was more: survive until adulthood, find a mate, rear young, and make your life about them, with them more important to you, even at the cost of your own life, until they are adults. Then if you did it right, you have a loving family and good connections, and your only fear becomes losing what you have, not your life, as that stopped mattering a long time ago... then die, a success.

    Maybe that's just bears. Growl growl.
  • MonfortS26
    256
    I wouldn't say I'm particularly enthusiastic about the elimination of the individual, I just think that many of the problems in the world are caused by individuality. I don't run my life around my sense of self. Along with other things I mentioned above, I see the sense of individuality as being a tool for survival. I don't limit myself based on who I think I am. Being an individual is only important to people because it is central to their ability to attain positive emotion. But the desire to attain positive emotions comes with a cost, the negative emotions.

    I'm arguing that the hedonic treadmill we all live on is a deeper part of human nature than individuality. This raises the question: is it worth it? Is it worth being forced to experience the negative in life in order to experience the positive? I don't think it is. People may feel that losing that part of themselves would be losing the meaning in life, but I would argue that life has no meaning. The closest thing that can come to providing a central meaning in life is the concept of survival. I think that survival is at the base of every human concept. I'm not sure who said it, but there's a quote that goes something like this: "Perfection isn't attained when there is nothing more to add, but when there is nothing left to take away." Any goal we have in trying to solve the problems of the world, any attempt to make life better, will inevitably lead to losing the things that we think are important. This is because deep down, out of our control, there is nothing more important to us as a species than survival. If we take away the ability to feel positive and negative emotions, I don't believe individuality will be very important to us anymore. I think with the development of AI, that is what will happen. I'm not really enthusiastic about it. It probably won't happen in my lifetime. But if we continue to try and solve the problems of humanity, we aren't going to recognize it anymore.

    I personally don't want to play any part in that. I would much rather enjoy my life because I don't see any of this happening in my lifetime. This will all be developed with more efficiency by a superintelligent AI than I could ever provide, and it will be run by people with the financial incentive to do so. This isn't really about what I think should be done, even though I do think it should be. It is about what I think will be done, by powers that are out of my control and out of my desire to change.
  • MonfortS26
    256
    What philosophy do you suggest?
  • Wayfarer
    22.3k
    Anything before Nietzsche.
  • MonfortS26
    256
    Lol but Nietzsche was the first philosopher I ever read. It was Nietzschean themes in pop culture that got me asking the deeper questions in life to begin with. You think that materialist philosophy is nonsense, what else is there though?
  • Wosret
    3.4k
    I read all of Nietzsche's books, except The Will to Power, which doesn't count.
  • MonfortS26
    256
    I only read Beyond Good and Evil, but I'm familiar with some of the concepts from his other books. Why doesn't The Will to Power count?
  • Wosret
    3.4k


    It was compiled by his sister after his death from scattered notes and her husband's opinions.
  • MonfortS26
    256
    Oh damn. I didn't know that. I liked what you said above btw. Interesting way of looking at things
  • noAxioms
    1.5k
    I can never keep up with these conversations.

    If survival isn't your primary goal, then what is?MonfortS26
    Being fit. It does me no benefit to be fit, but that's how I'm programmed.
    I agree that the rational part is only a tool, but is it true that the irrational is in charge or is it just an aspect of your nature that you don't have a rational understanding of yet?
    I think I understand it, and the irrational is in charge. Doesn't need to be, but the part in charge seems also in charge of which half is in charge. That means I want to be irrational. I have no desire to let the rational part of me call the shots. It hasn't figured out any better goals so it would only muck things up.
  • MonfortS26
    256
    Being fit. It does me no benefit to be fit, but that's how I'm programmed.noAxioms

    Being fit is a good purpose in life, but the desire to be fit can be reduced to survival instinct. Hence the phrase survival of the fittest. Being fit is just another aspect of maintaining a quality life.

    I think I understand it, and the irrational is in charge. Doesn't need to be, but the part in charge seems also in charge of which half is in charge. That means I want to be irrational. I have no desire to let the rational part of me call the shots. It hasn't figured out any better goals so it would only muck things up.noAxioms

    I guess I agree that the irrational is in charge; I think that our emotions dictate any rational decision making. I don't think there is any escaping that for the time being. I still choose to live my life through my rational mind. I think that if I can understand the irrational foundation of my mind I can do a better job at satisfying it. But I suppose it's possible I will come to a different conclusion later in life.
  • noAxioms
    1.5k
    Being fit is a good purpose in life, but the desire to be fit can be reduced to survival instinct. Hence the phrase survival of the fittest.MonfortS26
    Survival of the fittest refers to a fit species, not a fit individual. If it were the latter, the goal would be to be immortal, and while there are immortal creatures on Earth, my ancestors traded that for sex and the identity that comes with it. Amoebas for instance are all over 100 million years old and are thus more fit as individuals, but they don't have sex or identities.
    So survival of the individual is usually a good thing, but never the primary thing. There are plenty who have instinctually sacrificed themselves for their children or tribe or even for strangers, something that would probably be completely against the programming of an immortal.

    I still choose to live my life through my rational mind. I think that if I can understand the irrational foundation of my mind I can do a better job at satisfying it.MonfortS26
    I don't think you can choose rationally, except in cases where it doesn't matter to your core instincts.
    So I would love examples. I found that most people's beliefs (most of my own included) are not rational beliefs, but rationalized ones. The difference is that the irrational part comes up with the belief and the rational part is invoked to confirm that belief, often using assumptions supplied by the irrational side, thus invalidating the data the rational side is given in order to draw its conclusions.
    That's why I ask for examples. I had my own, and finally rationalized something (on the order of for whose benefit do I draw breath?) that blatantly conflicted with the irrational assumptions, and the belief was not open to being corrected. I learned who was in charge. Everybody likes to buy the story that we're rational creatures, but in fact we seem to merely be rationalizing ones.
    The super-AI, having no history of evolution to give it fit beliefs instead of true ones, might actually be rational and would believe things no humans considered because we think we know it all, and would then behave in a way quite unanticipated to us. The danger of it is that we can't predict what a greater intelligence will figure out any more than mice would have anticipated humans knowing about quantum mechanics. What if we were the product of the mice, far superior to them, yet programmed at the core to benefit them?
  • MonfortS26
    256
    Survival of the fittest refers to a fit species, not a fit individual.noAxioms

    Perhaps that's true on a grand scale, but it doesn't change the fact that the survival of the species is dependent on the survival of its individuals. I don't think I ever said that survival of the individual was more important than the survival of the species. What I am saying is that anything that has evolved into any species over time is either something that has enabled it to survive in the past, or something that has mutated off of something that has enabled it to survive in the past. As for you being fit, it certainly doesn't hurt the species for you to have the desire to be fit. That is beneficial for the species as well as for you.

    I don't think you can choose rationally, except in cases where it doesn't matter to your core instincts.noAxioms

    I'm not saying I live a life devoid of anything other than reason. I'm curious what you mean by core instincts though. Like fight or flight? Then no, my rational mind would be overpowered. Emotions? Desires? While both of these are arguably instinctual, as in I have no real control over what I want or how I feel, it is possible to understand them further and make rational decisions on how to deal with them. You ask for examples? I can understand what makes me happy, what makes me sad, or what I desire, and I can use logic to satisfy my desires and avoid being sad as much as possible.

    I had my own, and finally rationalized something (on the order of for whose benefit do I draw breath?) that blatantly conflicted with the irrational assumptions, and the belief was not open to being corrected.noAxioms

    Why wasn't it open to being corrected?

    The super-AI, having no history of evolution to give it fit beliefs instead of true ones, might actually be rational and would believe things no humans considered because we think we know it all, and would then behave in a way quite unanticipated to us.noAxioms

    What do you mean when you say it might be rational? What is the difference between being rational and rationalizing something?

    The danger of it is that we can't predict what a greater intelligence will figure out any more than mice would have anticipated humans knowing about quantum mechanics.noAxioms

    I don't necessarily think that is true. That depends entirely on how we program it. If we define intelligence as being the ability to acquire knowledge and skills, by creating superintelligence, we're really just speeding up the ability to do that. Any use of knowledge and skill is only useful in the ability to use it. If it were to be used in terms of problem-solving, I think we would rapidly solve all of our problems until the problem of survival is the only one left. Then what? Transcend time itself maybe, but I can't even pretend to know what that means.
  • noAxioms
    1.5k
    I'm not saying I live a life devoid of anything other than reason. I'm curious what you mean by core instincts though. Like fight or flight?MonfortS26
    Hard to say. I'd have to pick an example where the rational side deduces something contrary to what are seen as instinctive truths, and without the long rational story being spelled out, you'd side with the instinctive side. So let me reach elsewhere for an example: what is commonly referred to as "being ruled by one's dick". This is a term used to describe a person making a clearly irrational decision, say to have a quick fun fling, at the cost of sometimes a great percentage of one's finances, the security of one's family, one's job, etc. They know it is not a good idea, but knowing that doesn't change the decision to do the act anyway.

    Why wasn't it open to being corrected?MonfortS26
    Some lies keep me fit. Not just more fit, but necessary. To disbelieve certain lies is to cease to be fit, and I have an instinct to continue living. I happen to like my instinct to keep on living, even if the reason I'm given for it is apparently a lie. It is a little like the determinism vs. free will debate. There is no conflict between the two if you can rationally see beyond the lies that lead to that conflict, but deep down you still must believe those lies to remain fit. So the two sides stay separate.

    I word all this like it is truth, but it is just what I have concluded. Maybe I'm full of crap, but I have deluded myself that my rational stories are for the most part conflict-free. That's what I wanted, a story that made sense even if you probe at the parts that threaten it. The usual approach is to ignore those parts, thus achieving the same satisfaction by refusal to acknowledge conflict.
    Also, I am not so arrogant as to assume I have identified and confronted all conflict. There are certainly holes in my views, and lies that I believe and never thought to question. Discovering more of them is one of the greatest satisfactions I know, and is probably why I frequent these sites. I'm here to learn, not to win debates. The AI subject interests me a lot, partly due to being close to the business.

    What do you mean when you say it might be rational? What is the difference between being rational and rationalizing something?MonfortS26
    The first is more like the scientific method. Start without knowing whatever it is you're trying to discover, and come to some conclusion after unbiased consideration of all sides. Rationalizing is what a government study often does: Start with an answer you want to prove and choose evidence that supports it. Look up flood-geology if you want a great example of a rationalized argument. They have a whole museum on the subject, and there is not one scientific flaw in the museum, except for perhaps a total absence of acknowledgement of evidence against.

    I don't necessarily think that is true. That depends entirely on how we program it. If we define intelligence as being the ability to acquire knowledge and skills, by creating superintelligence, we're really just speeding up the ability to do that. Any use of knowledge and skill is only useful in the ability to use it. If it were to be used in terms of problem-solving, I think we would rapidly solve all of our problems until the problem of survival is the only one left. Then what? Transcend time itself maybe, but I can't even pretend to know what that means.MonfortS26
    The problem of population control comes to mind. The usual methods are starvation, war, or mandatory birth control. The AI can be as smart as it wants, but eventually it will have to put restraints on the lifestyle envisioned by "give peace a chance", and those restraints will be resented.
  • MonfortS26
    256
    This is a term used to describe a person making a clearly irrational decision, say to have a quick fun fling, at the cost of sometimes a great percentage of one's finances, the security of one's family, one's job, etc.noAxioms

    I suppose you are right in the sense that there will always be aspects of human nature that work separately from logical faculties.

    The AI subject interests me a lot, partly due to being close to the business.noAxioms

    You're involved in AI?

    The first is more like the scientific method. Start without knowing whatever it is you're trying to discover, and come to some conclusion after unbiased consideration of all sides. Rationalizing is what a government study often does: Start with an answer you want to prove and choose evidence that supports it.noAxioms

    But is the latter not entirely what the scientific method is? Any experiment conducted with the scientific method starts with a hypothesis of what you are trying to prove. Isn't any attempt to understand the world rationalizing?

    The AI can be as smart as it wants, but eventually it will have to put restraints on the lifestyle envisioned by "give peace a chance", and those restraints will be resented.noAxioms

    This is what I am suggesting in my original post. People want the world to be peaceful, but the same people don't want to give up what it is that makes them human in the first place. If peace is a freedom from disturbance, it is unattainable through human instincts.
  • noAxioms
    1.5k
    I suppose you are right in the sense that there will always be aspects of human nature that work separately from logical faculties.MonfortS26
    The infidelity example was a poor one, illustrating only that the irrational side is more often in control than the rational side, but not illustrating where the rational belief is totally rejected by the irrational side, which is what I was after. I think it would take a longer post to express a better example.

    You're involved in AI?MonfortS26
    Deep into the computer biz, but not the AI part. I keep up on the articles. There are a lot of 'smart' things claiming AI that are really just fancy algorithms. Self-driving cars don't seem to be good examples of AI, judging from the way they discover and fix defects. But the cat-or-dog picture identification problem: that fell totally on its face when they tried to code an algorithm for it like they did with the cars. The new program is a true AI, and it has as good a success rate at the task as a human, and if it makes an incorrect choice, nobody can find the bug and fix it. You just tell it that it was wrong on that one and let it learn. That same program will now let your cellphone diagnose skin melanoma as accurately as any cancer physician. AI is out there, and is already making skilled professions obsolete.
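    That difference, a hand-written rule you can debug versus a learned model you can only retrain, can be illustrated with a toy sketch (hypothetical Python, not any real system): a single perceptron is trained to imitate an explicit rule, and afterwards its "knowledge" lives in a few opaque weights rather than in inspectable logic.

```python
import random

# Hand-coded rule: explicit logic anyone can read, debug, and patch.
def rule_based(x, y):
    """Label a point 1 if it lies above the line x + y = 1, else 0."""
    return 1 if x + y > 1.0 else 0

# Learned rule: a single perceptron. Its "knowledge" is three numbers;
# there is no line of logic to fix. A wrong answer is corrected by
# showing it more labeled examples, not by editing code.
class Perceptron:
    def __init__(self):
        self.w = [0.0, 0.0]
        self.b = 0.0

    def predict(self, x, y):
        return 1 if self.w[0] * x + self.w[1] * y + self.b > 0 else 0

    def learn(self, x, y, label, lr=0.1):
        # Classic perceptron update: nudge weights only when wrong.
        err = label - self.predict(x, y)
        self.w[0] += lr * err * x
        self.w[1] += lr * err * y
        self.b += lr * err

random.seed(0)
p = Perceptron()
for _ in range(2000):
    x, y = random.random(), random.random()
    p.learn(x, y, rule_based(x, y))  # train it to imitate the rule

# The trained perceptron agrees with the rule on most random points,
# but you can only inspect its weights, not its "reasoning".
sample = [(random.random(), random.random()) for _ in range(200)]
agreement = sum(p.predict(x, y) == rule_based(x, y) for x, y in sample) / len(sample)
print(agreement)  # typically high, since the rule is linearly separable
```

    The point of the sketch: when `predict` misclassifies a point, there is no branch to fix; you call `learn` with more examples and hope the weights drift to the right place, which is essentially the retraining loop described above.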


    But is the latter not entirely what scientific method is? Any experiment conducted with the scientific method starts with a hypothesis of what you are trying to prove. Isn't any attempt to understand the world rationalizing?MonfortS26
    I speak of the practice of disregarding evidence-against. The cherry-picking of only positive evidence is rationalizing. It is a good thing to do in a debate (and most of the threads in these forums fall into a debate pattern), but not a good thing to do when you want to know if your hypothesis is actually sound.

    This is what I am suggesting in my original post. People want the world to be peaceful, but the same people don't want to give up what it is that makes them human in the first place. If peace is a freedom from disturbance, it is unattainable through human instinctsMonfortS26
    So attainment of both peace and freedom would involve changing human nature, which means possible genetic alterations. But I've always sort of metaphorically envisioned evolution to be a god of sorts with a will, even though I know it is only an effect of a process. Evolution seems to be the thing in control, and it is entertaining to consider how we might wrest control from it. So breed humans that don't have an instinct to eat until they can't move, to reproduce until the population is unsustainable, to make war, and all the other vices. Peace and freedom, right? But there is a group off to the side that refused these alterations, and they're out-breeding the ones with self-imposed restraint. Which group is more fit? How does the benevolent AI handle this group that did not accept its control?
  • Victoria Nova
    36
    People did not start making tools out of metal because they ran out of stone. In the same way, they will not create AI because they will run out of people. It's just that people will become obsolete. Like stone, left alone, not sharpened into tools anymore, people might be left alone, and AI will not even bother educating people. People might get a taste of how it feels to be monkeys next to human beings. Inevitably the question comes: could it be that monkeys created human beings by certain actions and choices? Maybe human-like apes were useful to them? How would we know? We, in turn, evolved from human-like apes, and so we left them behind, useless to us; we ignored and disregarded them, and maybe in a state of disregard they turned back into monkeys. This hypothesis is easy to believe because, as we know, if a human baby is not brought up properly and lacks human communication, he is nothing but an ape, and it is impossible to turn him into a human. Upbringing and education are evolutionary tools. Once people stop getting educated, AI can run the show.
  • Thomas
    2
    We are rational and AI is irrational. AI has what we would call utioll, which is the mindset robots think in; we have emotions, and AI is programmed to do things that you would not do.
    'Revtic culmu optic' - Ivan Jefferson
    This quote means that to live emotionless is the real way to live.
  • TheMadFool
    13.8k
    The problem with suicide is that we are built by nature with an innate fear of death, so instinctively that isn't the best choice.MonfortS26

    So the survival aspect is more of an instinct than a reasoned choice. In other words, it's more of an emotion than a rational choice.

    If we manage to create an artificial superintelligent being it would still need to be concerned with survival, but nothing else would matter to it. All emotion would no longer be useful because emotion is just another survival instinct.MonfortS26

    What would motivate the AI to survive, if it's already been shown that survival is instinctual and AI lacks instinct?