• The essence of religion
    Note that language itself is the very Being in question.Constance

    I have my doubts here. Heidegger and Husserl parted ways because Heidegger hyper-focused on the hermeneutical form of phenomenology. Husserl was still reaching for the unreachable (and stated as much). The task is endless.
  • Morality must be fundamentally concerned with experience, not principle.
    I do not believe in the existence of objective categories, this includes moral or aesthetic values.Ourora Aureis

    Can you expand on this? Especially in reference to aesthetics. Are you stating that aesthetics are merely an expression of the natural condition just as morals are, or something more nuanced?
  • My understanding of morals
    A reaction to this would be ethical egoism, the ethical framework I follow. It declares that we ought to act according to our values, not the value judgements of others. In this way it seems similar to the idea of personal morality you hold.Ourora Aureis

    This is a hard gap to cross, as there are effectively no moral values we can hold outside of a social framework. Perhaps all morals are is instantiated social necessities that communicate shared value systems. Outside of society morals are naught. Of course, we are always partially attached through social means because it is our nature to be social.
  • My understanding of morals
    My understanding of morals doesn’t really fit in with those generally discussed here.T Clark

    I think it is more or less about feeling your way around how others apply value to certain judgements in certain contexts compared to others. It is then about unpicking the rational claims laid out or, often enough, revealing that there are none whatsoever.

    Of course, this is further complicated when those espousing certain moral themes are so entrenched in them (or opposed to moral views) that they are effectively no longer doing anything I would call 'philosophical'. We can still attempt to point this out and find out where they took the wrong path and/or whether there is simply a misunderstanding in the concepts laid out.

    The terminology in this area is just as obtuse (if not more so) as every other field of philosophical inspection.
  • Antinatalism Arguments
    The problem of identity is a real problem, but if we admit this problem to the equation, then there may be no “me” who could fail to prevent suffering either.Fire Ologist

    Well, not really, because you exist.

    I think we have to insist any AN statement adheres to their moral stance regarding nonidentity of possible future people, as well as their moral claim that no suffering is good and no pleasure is neutral.

    It is a utilitarian position in essence, so when questioning the AN this should also be kept in mind and pointed out where we feel necessary.
  • Assange
    Wikileaks is a hamster, no?
  • Antinatalism Arguments
    Because you cannot particularize this prevention of suffering in a particular “you” who doesn’t suffer, AN is acting ethical towards no one, no one who ever exists.Fire Ologist

    You can guarantee less suffering by not bringing someone into the world. This is also underlined by the metaphysical problem of non-identity. Much like our responsibility for considering the kind of world we leave behind for future generations (which we speculate about quite often without arguing about their non-existence).
  • Antinatalism Arguments
    I think I see what you are asking.

    The AN view is asking what right anyone has to create life if they know it will suffer.

    Below this is the asymmetry argument: the absence of pain is Good whilst the absence of pleasure is Neutral.
  • Antinatalism Arguments
    There is no factual basis for this claim though as far as I can see.
  • Antinatalism Arguments
    That life, regardless of change or possible omission of what is currently held in the antinatalist mindset as "suffering" or "negative", creation of new life either, is intrinsically a negative, whether that conviction is held based on the likelihood of even, say, a perfect utopia naturally always reverting to a negative state, or some other generally non-evidential belief.Outlander

    I think this more or less aligns with the Right of the living to bring life into existence.
  • Antinatalism Arguments
    I am not completely sure I follow this. Can you explain further, please?
  • Antinatalism Arguments
    Not to my understanding.

    Existing = Suffering < Neutral < Pleasure
    Not Existing = No Suffering/Neutral/ No Pleasure

    Only the latter guarantees No Suffering.

    Obviously because ANs exist they are prone to argue against bringing life into the world.

    From this there is the argumentation of having the Right to bring people into existence. Then we enter into the non-identity problem and metaphysics.

    Of course there is much more nuance to it than this, but this is the basic framework. Its proponents will vary depending on other moral stances (including items like moral absolutism, moral naturalism and logical positivism).
  • Antinatalism Arguments
    Well, you kind of have to understand that position as framed in isolation. ANs are not against existence per se, but there is certainly a disjoint if we project their views as a universal law (which none really seem to do).

    It is a moral preference. Of course, if said person believes in moral absolutes then the matter is quite different.

    As a means of exploring the responsibility of being a parent I regard the AN position as worthy of serious attention.
  • Antinatalism Arguments
    Life is way more than suffering. Maybe only human beings can recognize this. Why kill ourselves off because of a little suffering?Fire Ologist

    Because the AN basically believes that not suffering trumps not existing. It is certainly a factual claim that if you do not exist you do not suffer. It is not a factual claim to state that something is better than something else. It is also not a factual claim that suffering is bad unless you have outlined some specific example of the kind of suffering being offered up for discussion.

    The arguments for and against the AN position are dependent upon personal views, experiences and metaphysics (the non-identity of possible people is an example of the kind of metaphysical argument used by ANs).

    I think it is fair to say that it is an extreme kind of negative utilitarianism if taken as universal law, as the end goal drives towards something like net zero suffering (so net zero existence).

    In a more favourable light it is act utilitarianism hyper-focused on a specific aspect of the human experience.
  • Is death bad for the person that dies?
    Death is an event that can shape the future. Your life has more bearing on the future than your death, but nevertheless your impact will echo into the future in some minute form or another.

    It is more a question of asking if life has meaning. If you believe you create meaning then you are more than likely stating that life has meaning beyond its cessation.
  • Antinatalism Arguments
    And I think I’ve said my peace. Antinatalism seems unneccesssry if it be based on simply suffering, seems anti-ethics while it puts ethics above ethical people, and simply ignores the joy in life.Fire Ologist

    It is useful to consider antinatalism if you are planning to have children. The reason being that it requires you to look at your inner motives and understand the kind of responsibility you are taking on.

    Other than that, it is fairly limited in my view for the reasons you articulated (and many more).
  • The essence of religion
    @Constance

    I would enjoy hearing what is meant by Religion and Religious here too.

    I think it pays to distinguish what we are talking about and in what kind of historical timeframe too. Religion today is taken to mean a whole range of things sometimes, and I am more concerned with common aspects that extend and persist rather than focusing in on any one particular instance or interpretation.
  • The essence of religion
    One may experience something so alien to common sense and deeply profound that it requires metaphysics to give an account of it, but to make the claim that the world as it is in all its mundanity itself possesses the basis for religious possibility, this is the idea here; that in the common lies the uncommon metaethical foundation for ethics and religion.Constance

    I am on board with this, simply because if we refer to any totality it is the current totality we know. We cannot think beyond, and to say 'beyond' is merely an empty statement that is only actually applicable to different known areas of experience. If you get what I mean? Often such terms are used in overextension. A heuristic taken outside of its useful functionality.

    Here, I want to show that this other world really is this one.Constance

    And mysterious ;) Reality is often more surprising than fiction.

    So here is a question that lies at the center of the idea of the OP: what if ethics were apodictic, like logic? This is what you could call an apriori question, looking into the essence of what is there in the world and determining what must be the case given what is the case. Logic reveals apodicticity, or an emphatic or unyielding nature. Entirely intellectually coercive. I claim that ethics has this at its core.Constance

    I am not entirely clear what you are stating here. Can you be more specific about this hypothetical IF?

    If you are asking where ethics/morals come from - with the assumption of some essence - much like Kant asked about what can be known prior to experience, I am not sure how this could be so. In terms of experience I think Husserl is the best landmark to orientate from given what I have found.

    Of course, this is right. It ALWAYS depends on the flexibility of the words we are using. When you start the car in the morning, are you "thinking" about starting the car, or is it just rote action? But you certainly CAN think about it. I think when a person enters an environment of familiarity, like a classroom or someone's kitchen, there is, implicit in all one sees, the discursive possibility that lies "at the ready," as when one asks me suddenly, doesn't that chef's knife look like what you have at home? I see it, and language is there, "ready to hand". For us, not cows and goats, but for us, there is language everywhere and in everything.Constance

    I personally like to frame our intentionality as a form of questioning. What is 'given' is outside the frame of awareness. I like to frame Apodictic as that which we are not consciously attending to (it is a negative sense of being I guess? Hard to express).

    For the record I do not view religion as a mere vehicle for ethics. I think the uses of it (especially in terms of its prehistorical origins) were more far reaching and inclusive of much of human day-to-day experience.

    Note: This topic looks like it is right up my street. I am ready for disappointment though as these kinds of discussions rarely go down the kind of path I was hoping for.
  • Ethics: The Potential Advent of AGI
    Why do you assume there is any relation between "sentience" and "morality"?180 Proof

    I do not. This was a speculative statement. I did state sentience is not massively important to what I was focusing on here.

    Well, the latter (re: pragmatics) afaik is a subset of the former (re: semantics).180 Proof

    My mistake. What I meant was moral in the sense of empirically validated - or something along those lines. I forget what family of morality it is in philosophical jargon. Moral Absolutism, I believe? Hopefully you can appreciate the confusion :D
  • Ethics: The Potential Advent of AGI
    I see no massive issue with equating them in this context (it is not really relevant as I am talking about something that is essentially capable of neither). I do not confuse them though ;)

    to learn how to develop its own "objectives" and comply with those operational goals in order to function at or above the level of human metacognitive performance (e.g. normative eusociality³).180 Proof

    and

    We are (e.g. as I have proposed ↪180 Proof), and I expect AGI will learn from our least maladaptive attempts to "say what is and is not moral"³.180 Proof

    These two points roughly outline my concerns - or rather, the unknown space between them. If we are talking about a capacity above and beyond human comprehension then such a system may see that it is valid to extend morals and create its own to work by - all in a non-sentient state.

    If it can create its own objectives then it can possibly supplant previously set parameters (morality we dictated) by extending it in some seemingly subtle way that could effectively bypass it. How and why I have no idea - but that is the point of something working far, far beyond our capacity to understand - because we will not understand it.

    As for this:

    More approaches come from explicitly combining two or three of the approaches which you've mentioned in various ways. In my case, 'becoming a better person' is cultivated by 'acting in ways which prevent or reduce adverse consequences' to oneself and others (i.e. 'virtues' as positive feedback loops of 'negative utilitarian / consequentialist' practices). None of the basic approaches to ethics seems to do all the work which each respectively sets out to do, which is why (inspired by D. Parfit) I think they can be conceived of in combinations which compensate for each other's limitations.180 Proof

    This is likely the best kind of thing we can come up with. I was more or less referring to Moral Realism not Moral Naturalism in what I said.

    As we have seen throughout the history of human cultures, and cultures present today, there is most certainly a degree of moral relativism. Herein lies the obvious problem of teaching AGI what is or is not right/good when in a decade or two we may well think our current thoughts on morality are actually wrong/bad. If AGI is not sentient and sentience is required for morality then surely you can see the conundrum here? If morality does not require sentience then Moral Realism is correct, which would lead to the further problem of extracting what is correct (the Real Morality) from fallacies that pose as universal truths.

    I grant that there are a hell of a lot of IFs and BUTs involved in this field of speculation, but nevertheless the extraordinary - and partly unintelligible - potential existential threat posed by the occurrence of AGI warrants some serious attention.

    Currently we have widely differing estimations of when AGI will be a reality. Some say in 5 years whilst others say in 75 years. What I do know is that this estimate has dropped quite dramatically from the seemingly impossible to a realistic proposition.
  • Ethics: The Potential Advent of AGI
    I can't think of any reason why AGI would ignore, or fail to comply with, eusocializing norms (i.e. morals) whether, in fact, we consider them "truths" or "fictions".180 Proof

    My point was more or less regarding the problems involved down the line if we are wrong and AGI still carries on carrying on out of our intelligible sight grounded in such morals.

    What I may think or you may think are not massively important compared to what actually is. Given our state of ignorance this is a major problem when setting up the grounding for a powerful system that is in some way guided by what we give it ... maybe it would just bumble along like we do, but I fear the primary problem there would be our state of awareness and AGI's lack of awareness (hence why I would prefer a conscious AGI than not).

    I can't think of any reason why AGI would ignore, or fail to comply with, eusocializing norms (i.e. morals) whether, in fact, we consider them "truths" or "fictions".180 Proof

    I can: military objectives or other profit-based objectives instituted by human beings. Compliance would be goal orientated, not ethically orientated, in scenarios such as these. Again though, an 'ethical goal' is a goal nevertheless ... who is to say what is or is not moral? We cannot agree on these things now as far as I can see.
  • The essence of religion
    In terms of Cosmological perspectives this might spark some interest:

    https://www.youtube.com/watch?v=S6s_O0_6Ehs
  • The essence of religion
    Do you believe we need language to think? As in this here written language?
  • The essence of religion
    My thoughts on the whole matter of religion are varied and widespread. Could you perhaps give me a summation of what has happened over the 9 pages, as I am late to the party?

    I think it could be best to start by looking at differing cosmological perspectives both now and historically, then extrapolating further back into prehistory.

    I think Mircea Eliade did some stellar scholarship on religions and religiosity in general.
  • Ethics: The Potential Advent of AGI
    I am not convinced this is a Moral Truth. I do believe some kind of utilitarian approach is likely the best approximation (in terms of human existence). I would argue that mere existence may be horrific though, so more factors would need to be accounted for.
  • Ethics: The Potential Advent of AGI
    a) Yes. After that it is speculative. I do not expect self-awareness honestly, but do see an extreme need for some scheme of ethical safeguards.

    For b) and c) that is following the route of trying to get AI>AGI to help develop understanding of consciousness in order to provide AGI with a consciousness/awareness. I think that is a HIGHLY speculative solution though.

    That said, it is probably just as speculative to propose that Moral Facts can be found (if they exist at all).

    This clarification is very helpful. AGI can independently use its algorithms to teach itself routines not programmed into it?ucarr

    AI can already do this to a degree. At the moment we give them the algorithms to train with and learn more efficient pathways - far quicker than we can due to computational power.

    It is proposed that AGI will learn in a similar manner to humans only several magnitudes faster in all subject matters 24/7.

    At the risk of simplification, I take your meaning here to be concern about a powerful computing machine that possesses none of the restraints of a moral compass.ucarr

    Pretty much. Because if we cannot understand how and why it is doing what it is doing how can we safeguard against it leading to harm (possibly an existential threat).

    Just think about how people are already using AI to harvest data and manipulate the sociopolitical sphere. Think of basic human desires and wants projected into models that have no compunction and that can operate beyond any comprehensible scope.
  • Ethics: The Potential Advent of AGI
    I missed this:

    AI doesn’t know why it is important to get to the finish line , what it means to do so in relation to overarching goals that themselves are changed by reaching the finish line, and how reaching the goal means different things to different people.Joshs

    That is the danger I am talking about here. Once we are talking about a system that is given the task of achieving X, there is not much to say about what the cost of getting to this goal is. Much as Asimov displayed in his Three Laws.
  • Ethics: The Potential Advent of AGI
    It seems you do not believe AGI is a possibility. You may be correct, but many in the field do not agree.

    On the off chance you are wrong then what?
  • How would you respond to the trolley problem?
    Check my post for the use of hypotheticals.

    In short, it is mostly a private matter for us to meditate upon and tinker with. Some will find use in tinkering and others will be repulsed by what they find and run away screaming.
  • Ethics: The Potential Advent of AGI
    Computation is not thought.Joshs

    Of course? You do at least appreciate that a system that can compute at a vastly higher rate than us on endless tasks will beat us to the finish line though? I am not talking about some 'other' consciousness at all. I am talking about the 'simple' power of complex algorithms effectively shackling us by our own ignorant hands - to the point where it is truly beyond our knowledge or control.

    True, we provide the tasks. What we do not do is tell it HOW to complete the tasks. Creativity is neither here nor there if the system can as good as try millions of times and fail before succeeding at said task. The processing speed could be extraordinary to the point where we have no real idea how it arrives at the solution, or by the time we do it has already moved onto the next task.

    Humans will be outperformed on practically every front.

    The issue then is if we do not understand how it is arriving at solutions we also have no idea what paths it is taking and at what kind of immediate or future costs. This is what I am talking about in regards to providing something akin to a Moral Code so it does not (obviously unwittingly) cause ruinous damage.

    It only takes a little consideration of competitive arenas to understand how things could go awry pretty damn quickly. We do not even really have to consider competing AGI on the military front; in terms of economics in general it could quite easily lead to competing sides programming something damn near to "Win at all costs".

    The fact that AGI (if it comes about) cannot think is possibly worse than if it could.

    For the record, in terms of consciousness I believe some form of embodiment is a requirement for anything resembling human consciousness. As for any other consciousness, I have pretty much no comment on that, as I think it is unwise to label anything other than human consciousness as being 'conscious' if we cannot even delineate such terms ourselves with any satisfaction.

    Being conscious/sentient is not a concern I have for AGI or Super Intelligence.
  • Do you equate beauty to goodness?
    That which is appealing is highly associated with goodness. I think this is a given, no?

    Otherwise? What do you mean? I answered yes for the above reasons. The question is open to interpretation a bit too much I think.
  • Ethics: The Potential Advent of AGI
    I think you do not fully understand the implication of AGI here. Sentience is not one of them.

    AGI will effectively outperform every individual human being on the planet. A single researcher's year of work could be done by AGI in a day. AGI will be tasked with improving its own efficiency and thus its computational power will surpass even further what any human is capable of (it already does).

    The problem is AGI is potentially like a snowball rolling down a hill. There will be a point where we cannot stop its processes because we simply will not fathom them. A sentient intelligence would be better as we would at least have a chance of reasoning with it, or it could communicate down to our level.

    No animal has the bandwidth of a silicon system.

    Even though there are many things we don’t understand about how other organism function, we don’t seem to have any problem getting along with other animals, and they are vastly more capable than any AGI.Joshs

    They are not. No animal has ever beaten a Grand Master at chess for instance. That is a small scale and specific problem to solve. AGI will outperform EVERY human in EVERY field that is cognitively demanding to the point that we will be completely in the dark as to its inner machinations.

    Think of being in the backseat of a car trying to give directions to a driver who cannot listen and cannot care about what you are saying, as it is operating at a vastly higher computational level and has zero self-awareness.

    As far as I can see this could be a really serious issue - more so than people merely losing their jobs (and it is mere in comparison).
  • Ethics: The Potential Advent of AGI
    I think he meant an algorithm following a pattern of efficiency NOT a moral code (so to speak). It will interpret as it sees fit within the directives it has been given, and gives to itself, in order to achieve set tasks.

    There is no reason to assume AGI will be conscious. I am suggesting that IF AGI comes to be AND it is not conscious this is a very serious problem (more so than a conscious being).

    I am also starting to think that as AI progresses to AGI status it might well be prudent to focus its target on becoming conscious somehow. Therein lies another conundrum: how do we set the goal of achieving consciousness when we do not really know what consciousness means to a degree where we can explicitly point towards it as a target?

    Philosophy has a hell of a lot of groundwork to do in this area, and I see very little attention paid to these specific problems regarding the existential threat AGI poses to humans if these things are neglected.
  • Ethics: The Potential Advent of AGI
    Yes – preventing and reducing² agent-dysfunction (i.e. modalities of suffering (disvalue)¹ from incapacity to destruction) facilitated by 'nonzero sum – win-win – resolutions of conflicts' between humans, between humans & machines and/or between machines.


    ¹moral fact

    ²moral truth (i.e. the moral fact of (any) disvalue functions as the reason for judgment and action / inaction that prevents or reduces (any) disvalue)
    180 Proof

    Can you explain a little further? I am not sure I grasp your point here. Thanks :)
  • Ethics: The Potential Advent of AGI
    The very same way an AI can beat human players in multiple games without being conscious. AGI does not necessarily mean conscious (as far as we know).

    AGI means there will exist an AI system that can effectively replicate, and even surpass, individual human ability in all cognitive fields of interest. It may be that this (in and of itself) will give rise to something akin to 'consciousness', but there is absolutely no reason to assume this as probable (possible, maybe).
  • Ethics: The Potential Advent of AGI
    That sounds kind of horrific.ToothyMaw

    Well, most of the population of the planet already has an extended organ (the phone).
  • Ethics: The Potential Advent of AGI
    I'm pretty certain AGI, or strong AI, does indeed refer to sentient intelligences, but I'll just go with your definition.ToothyMaw

    Absolutely not. AGI refers to human-level intelligence (Artificial General Intelligence).

    Making AI answerable to whatever moral facts we can compel it to discover doesn't resolve the threat to humanity, however, but rather complicates it.

    Like I said: what if the only discoverable moral facts are so horrible that we have no desire to follow them? What if following them would mean humanity's destruction?
    ToothyMaw

    If AGI hits then it will grow exponentially more and more intelligent than humans. If there is no underlying ethical framework then it will just keep doing what it does more and more efficiently, while growing further and further away from human comprehension.

    To protect ourselves something needs to be put into place. I guess there is the off chance of some kind of cyborg solution, but we are just not really capable of interfacing with computers on such a scale, as we lack the bandwidth. Some form of human hivemind interaction may be another solution to such a problem? It might just turn out that once AGI gets closer and closer to human capabilities it can assist in helping us protect ourselves (I am sure we need to start now though, either way).

    I would not gamble on a sentient entity being created at all. Perhaps setting human-like sentience would be the best directive we could give to a developing AGI system. If we can have a rational discussion (in some form or another) then maybe we can find a balance and coexist.

    Of course this is all speculative at the moment, but considering some are talking about AGI coming as soon as the end of this decade (hopefully not that soon!) it would be stupid not to be prepared. Some say we are nowhere near having the kind of computing capacity needed for AGI yet and that it may not be possible until something like Quantum Computing is perfected.

    Anyway, food for thought :)
  • Ethics: The Potential Advent of AGI
    Also: Asimov's 'Three Laws of Robotics' were deficient, and he pointed out the numerous contradictions and problems in his own writings. So, it seems to me we need something much better than that. I would have no idea where to start apart from what I have written above, which is definitely not sufficient.ToothyMaw

    Well, yeah. That is part of the major problem I am highlighting here. Anyone studying ethics should have this topic at the very forefront of their minds as there is no second chance with this. The existential threat to humanity could be very real if AGI comes into being.

    We can barely manage ourselves as individuals, and en masse it gets even more messy. How on earth are we to program AI to be 'ethical'/'moral'? To repeat, this is a very serious problem, because once it goes beyond our capacity to understand what it is doing and why it is doing it we will essentially be piloted by this creation.

    If you are suggesting we root our ethical foundations for AGI in moral facts: even if we or the intelligences we might create could discover some moral facts, what would compel any superintelligences to abide those facts given they have already surpassed us analytically? What might an AGI see when it peers into the moral fabric of the universe and how might that change its - or others' - behavior? And what if we do discover these moral facts and they are so repugnant or detrimental to humanity that we wish not to abide them ourselves?ToothyMaw

    I think you are envisioning some sentient being here. I am not. There is nothing to suggest AI or AGI will be conscious. AGI will just have the computing capacity to far outperform any human. I am not assuming sentience on any level (that is the scary thing).

    I have no answers, just see a major problem. I do not see many people talking about this directly either - which is equally as concerning. It is pretty much like handing over all warhead capabilities to a computer that has no moral reasoning and saying 'only fire nukes at hostile targets'.

    The further worry here is that everyone is racing to get to this AGI point because whoever gets there first will effectively have the monopoly on it ... then they will quickly become secondary to the very thing they have created. Do you see what I mean?
  • Are some languages better than others?
    I was wasting my time it seems. Not even got going yet.

    Guess this is how things are now here.

    Bye bye :)
  • Are some languages better than others?
    Are some natural languages more logical than others though?

    One of my biggest gripes with people discussing ethics/philosophy is that they believe ‘true’ is wholly applicable to pure abstract forms.

    As I said with German, do you think that is more logical?

    Another matter I recall was when the European countries grouped together for political discourse: Greek was given serious consideration as the language to mediate through, as it was more suited to easy communication. They went for English simply because it was more universal, not because it was best suited.