• Baden
    16.4k
    As earlier announced, we have invited transhumanist philosopher David Pearce to be a guest speaker here and he has kindly accepted. We are hoping to learn more about his work and the very interesting field in which he's involved.

    David is a key figure in the transhumanist movement and a co-founder of Humanity+ (formerly known as The World Transhumanist Association). For those of you who are unsure of the basics of transhumanism, David provides a useful, concise introduction in this video:



    For a more detailed take on David's ideas, the following is helpful:



    Or check out David's book The Hedonistic Imperative and his website Hedweb.com.

    Other important thinkers in the broad transhumanist sphere include Ray Kurzweil and James Hughes.

    Needless to say, transhumanism is a controversial subject and its status is open to debate. As such, some of us may come to the subject with strong preconceptions/opinions. This is all fine, but please bear in mind that David is contributing time from his busy schedule to help us learn more about his field, and while critique and questioning are welcome, we hope everyone will be measured and respectful in tone.

    This thread is intended as an initial AMA (ask me anything) where you can put your questions/critiques to David. Please keep these to a maximum of 250 words. There's no guarantee he'll get around to everyone, but I'm sure he'll make the effort to answer the more interesting posts, some of which may, according to David's prerogative, be separated into threads of their own. So, have at it and I'll let David know this is up and we're ready for him.

    (Obviously, he may need some time to digest the questions and answer them around his schedule, so please be patient.)
  • _db
    3.6k
    David, have you read Jacques Ellul, and if so, what critique do you have against his philosophy of technology? I am unconvinced that technology as it exists today is merely a tool that is used by people, and that people are or even could be in control of the direction in which it develops. What reasons do we have to believe that humanity can achieve these monumental transformations with technology, when we are unable to solve the current problems we face (global warming, overpopulation, famine, etc)?
  • BC
    13.6k
    Star Trek (especially The Next Generation series), set in the 24th century, seems to embody a version of transhumanism. There is a high level of human well-being, empathy, technology, and so on, on Earth as well as on board the Enterprise. In the galaxy, not so much.

    Do you see technological advances in the next two centuries delivering the conditions of transhumanism, or are you thinking in longer (or shorter) time periods?

    What do you think the chances are of environmental collapse in the next 100 years derailing the necessary technical developments to allow transhumanism?

    What kind of economic arrangements are most and least likely to advance transhumanist goals? Capitalism is not a good candidate to deliver super well-being to everyone.
  • fdrake
    6.7k
    Two thrusts of research in the last half century have been the thorough chronicling of how human rationality is riddled with irremovable biases, and of how extremely powerful artificial intelligences may nevertheless have value systems incommensurable with those of humans. It seems we are broken tools that make broken tools; we evaluate wrongly and thus teach machines to value wrongly.

    What role, if any, do you see a re-evaluation of rationality and decision making logics playing in the transhumanist project? How ought we go about that? And how do we get around the problem of using broken tools to make only more powerful broken tools?
  • Shawn
    13.3k
    Given the strong ethics culture in the US medical field, and especially in US colleges, how do you see the legal developments for transhumanism unfolding over the coming years?

    For example, Neuralink, Elon Musk's venture, hopes to address something equivalent (albeit not explicitly stated) to the goal of transhumanism: brain-machine interfaces to keep pace with ever more intelligent computers.

    What are your thoughts on this endeavor and the hurdles it faces?
  • tim wood
    9.3k
    Super well-being. Hmm. Achieved in negative terms by overcoming and eliminating causes of suffering (understood most generally), and in positive terms through the facilitation of high achievement. This sounds good, but I cannot help wondering what exactly it means. It would be an excessive rudeness on my part to ask you to build me a watch in reply, but might you make a few comments on who your super-well person is, what he or she is doing, and how they feel or understand their happiness? If, for example, it is along the lines of Aristotelian concepts of eudaimonia, then a one-word answer would be sufficient.
  • counterpunch
    1.6k
    It seems a little far-fetched when we can't get humans to apply available technologies for the benefit of humanity (not least, drilling for limitless magma heat energy to produce massive baseload, clean electrical power for carbon capture and sequestration, hydrogen fuel, desalination/irrigation and recycling), yet a miracle chip in my neo-frontal cortex is going to usher in a subjective sense of paradise? Unless we get the utilities sorted out, your transhumanist paradise will be objectively unliveable, no matter how blissed out you are!
  • Pfhorrest
    4.6k
    I just want to say to David that (anyone’s quarrels with transhumanist means aside) I’m very pleased to see a professional like him touting the right ends, especially after all the pushback I’ve gotten on this forum for supporting the radical idea that maybe all that really matters, morally speaking, is reducing suffering of any kind.

    I guess if this is to be a question, it would be: has he gotten any pushback on that in the professional sphere, and what has that been like?

    Oh and also: does he know a good term for anti-hedonist views in general? Because I feel like I’m sorely lacking any catch-all term that doesn’t name something more specific than just that.
  • Deleted User
    0
    David, a current theory is that we can only experience joy and love because of the experience of their opposites, misery and hatred. So if you were born in a state of bliss and have never known otherwise, then it wouldn't be bliss. What do you think about this theory?
  • Baden
    16.4k
    (Just want to add that we'd like to keep this thread for questions to and interactions with David. Comments on this thread and debates among yourselves can be posted in the accompanying discussion posted previously. So, if your comment disappears, it's probably been moved there.)
  • god must be atheist
    5.1k
    Utopias, visionarism, futurism, and even presentism and looking at the past are all riddled with the fallacy of reducing the concept of humanity by stripping it of its diverse nature, and of its further diversification. How does Transhumanism handle this concept, the concept that humanity is not at all a monolithic homogeneous substance made up of individuals of the same social and personal psychology, with the same or similar needs, wants and desires, but a mass conglomerate of an ever-growing trend of diversification?
  • Down The Rabbit Hole
    530
    David, hope you are well.

    Anti-natalism is regularly discussed on this forum, with members having different degrees of sympathy towards it.

    You describe yourself as a "soft anti-natalist". What is your basis for this? And do you buy into Benatar's asymmetry theory (which suggests that even the pain of a pinprick would make a life otherwise full of pleasure better off never having been started)?
  • Pinprick
    950
    Hello, and thank you for participating here.

    Regarding the three “supers” mentioned, I’m curious about how the three are interrelated. Particularly super-intelligence and super-wellbeing. Generally speaking, intelligence means learning/knowing what is true. However, truth is often unpleasant, and would therefore seem to detract from one’s wellbeing, at least occasionally.

    Also, you seem to advocate for the removal of essentially all suffering. Much of our suffering derives from our biological needs (food, sleep, etc.), so would these needs have to be removed in order to eliminate suffering? If so, this too would seem to detract from the goal of super-wellbeing, because much of our happiness is rooted in pursuing, and hopefully meeting, these needs. Basically, what I’m saying is that if you eliminate our biological needs, you also risk eliminating our very will to live. What would our motivation for life be without experiencing desire? It’s like Buddhism without the concept of nirvana, enlightenment, rebirth, etc. A state of eternal contentment and complacency seems to be what the outcome would look like. Do you feel this would be more desirable than our current state of affairs? Thank you for your time.
  • David Pearce
    209
    Thank you, Philosophy Forum, for inviting me. I'll be answering questions – and any critical follow-ups! – this week. Please forgive any delay. I'll do my fallible best to respond to everyone.

    darthbarracuda
    Most transhumanists are secular scientific rationalists. Only technology (artificial intelligence, robotics, CRISPR, synthetic gene drives, preimplantation genetic screening and counselling) can allow intelligent moral agents to reprogram the biosphere and deliver good health for all sentient beings.
    Global warming? There are geoengineering fixes.
    Overpopulation? Fertility rates are plunging worldwide.
    Famine? More people now suffer from obesity than undernutrition.

    I share some of Jacques Ellul's reservations about the effects of technology. But only biotechnology can recalibrate the hedonic treadmill, eradicate the biology of involuntary pain and suffering and deliver a world based on gradients of intelligent bliss:
    https://www.hedweb.com/hedethic/sentience-interview.html

    Jacques Ellul himself was deeply religious. He felt he had been visited by God. Most spiritually-minded people probably feel that transhumanism has little to offer. Perhaps they are right – my own mind is a desolate spiritual wasteland. But science promises the most profound spiritual revolution of all time. Tomorrow’s molecular biology can identify the molecular signatures of spiritual experience, refine and amplify its biological substrates, and deliver life-long spiritual ecstasies beyond the imagination of even the most god-intoxicated temporal-lobe epileptic.
    Will most transhumans choose to be rationalists or mystics?
    I don’t know. But biotech can liberate us from the obscene horrors and everyday squalor of Darwinian life.
  • Baden
    16.4k


    Welcome, David! We appreciate any time you can give us, so please do proceed at your own pace. :smile:
  • David Pearce
    209
    Bitter Crank
    The future we imagine derives mostly from the sci-fi we remember. Life in the 24th century will not resemble Star Trek. It’s not merely that the “thermodynamic miracle” of life’s genesis means Earth-originating life is probably alone in our Hubble volume. The characters in Star Trek also have the same core emotions, same pleasure-pain axis, same fundamental conceptual scheme and same default state of waking consciousness as archaic humans. Even Mr Spock is all too human. It’s hokum.

    Realistic timescales for transhumanism? Let’s here define transhumanism in terms of a “triple S” civilisation of superintelligence, superlongevity and superhappiness. Maybe the 24th century would be a credible date. Earlier timescales would be technically feasible. But accelerated progress depends on sociological and political developments that reduce predictions to mere prophecies and wishful thinking. In practice, the frailties of human psychology mean that successful prophets tend to locate salvation or doom within the plausible lifetime of their audience. I’m personally a lot more pessimistic about timescales for a mature “triple S” civilisation than most transhumanists. Sorry to be so vague. There are too many unknown unknowns.

    Environmental collapse? The only way I envisage collapse might happen is via full-scale thermonuclear war and a strategic interchange between the superpowers. Sadly, this is not entirely far-fetched. Evolution “designed” human male primates to wage war against other coalitions of human male primates. I fear we may be sleepwalking towards Armageddon. Note that environmental collapse wouldn’t entail human extinction, though relocating to newly balmy Antarctica (cf. https://motls.blogspot.com/2019/10/60-c-of-global-warming-tens-of-millions.html) would be hugely disruptive. Let’s hope these fears are wildly overblown.

    Capitalism? Can a system based on human greed really deliver the well-being of all sentience? I'm sceptical. Free-market fundamentalism doesn’t work. Universal basic income, free healthcare and guaranteed housing are preconditions for any civilised society. Above all, murdering sentient beings for profit must be outlawed. Factory-farms and slaughterhouses are pure evil (cf. https://www.hedweb.com/quora/2015.html#slaughterhouses). The cultured meat revolution will presumably end the horrors of animal agriculture. But otherwise, I think some version of the mixed economy will continue indefinitely. Anything that can be digitised soon becomes effectively free. This includes genetic information. The substrates of bliss won’t need to be rationed. In the meantime, preimplantation genetic screening and counselling for all prospective parents would be hugely cost-effective – especially in the poorest countries. I hope all babies can be designer babies rather than today’s reckless genetic experiments.
  • BC
    13.6k
    Thank you for joining us, and presenting your vision of transhumanism.

    The only way I envisage collapse might happen is via full-scale thermonuclear war and a strategic interchange between the superpowers. – David Pearce

    A thermonuclear war would indeed be a fine way to ring down the curtain, but perhaps a less efficient method would be sufficiently effective. I am not suggesting a human species-terminating event. Rather, extensive -- and occasionally severe -- environmental degradation could rob the species of the surpluses needed to support a large research and development establishment. In time we may be able to dig ourselves out of the environmental hole we are still busy excavating.

    Do you think super-intelligence will be achieved and enjoyed incrementally, or will this happen in a single exceptional leap? Is the present brain capable of being uplifted to super-intelligence, or will it be necessary to design a better biological brain-build before uplift can occur? A bigger, better frontal cortex; a less volatile limbic system, more memory, better sensory processing? Brains much smaller than ours manage remarkably complex behavior (but just skip over philosophy). Can our brains be made a more efficient structure, before we add a practice effect?

    I have experienced an unearned but nice level of contentment which has lasted now several years. I locate the source of this contentment in the limbic system. Is it age? I'm 75. Do you see super-happiness as the result of changing our emotion-generating system, or as a result of super-intelligence? Maybe one of the things that makes the God of Israel so angry is his alleged omniscience--The God Who Knew Too Much?
  • Noble Dust
    8k
    @David Pearce

    Thanks for joining us.

    My concern with transhumanism is that its beneficiaries, like yourself, are either unaware of or downplaying what I (maybe erroneously) refer to as the "human condition"; namely, that state we all find ourselves in, in which we are unable to achieve the moral ideals we aspire to. I want to not be selfish, and yet I am; I want to not give in to base pleasures, and yet I do; I want to devote my time and energy to higher ideals, and yet I spend hours watching vapid youtube videos (or even doing something slightly more noble like going down wiki rabbit holes or listening to weird music that I'm not sure if I like or not). I worry about the gap between lofty goals such as those that are transhumanistic on one hand, and the cold hard reality of purely human existence on the other. Tie that in to what @darthbarracuda mentioned about Ellul, and I'm wondering if transhumanism isn't just a sort of pubescent lack of understanding about the human situations we all find ourselves in; technology, after all, is simply humans harnessing our (ever-changing understanding of) nature in a very imperfect, and often destructive, way. Cryptocurrency itself is detrimental to the environment.

    What I'm really worried about is a transhumanistic approach to the human situation that is not based on an accurate understanding of that human situation; an approach that assumes too much and introspects about ourselves far too little.

    Somewhat related; how does transhumanism address addiction?
  • Benkei
    7.8k
    Do you think super-intelligence will be achieved and enjoyed incrementally, or will this happen in a single exceptional leap? Is the present brain capable of being uplifted to super-intelligence, or will it be necessary to design a better biological brain-build before uplift can occur? A bigger, better frontal cortex; a less volatile limbic system, more memory, better sensory processing? Brains much smaller than ours manage remarkably complex behavior (but just skip over philosophy). Can our brains be made a more efficient structure, before we add a practice effect? – Bitter Crank

    This ties in with what Bitter Crank is asking and @Noble Dust points out with his "human condition".

    Apologies if this is a dumb question that can be easily researched, but, @David Pearce, what is considered "superintelligence" within transhumanism? Some theories posit several types of intelligence, and quite a few don't necessarily fit the scientific-positivist vibe of transhumanism. A list to illustrate:

    aesthetic intelligence
    collective intelligence (a result of social processes and communication)
    creativity
    crystallized intelligence (abilities based on knowledge and experience)
    existential intelligence (philosophical reasoning, abstraction)
    fluid intelligence
    intentionality
    interpersonal intelligence
    intrapersonal intelligence (together often "emotional intelligence")
    kinesthetic intelligence
    linguistic intelligence
    musical intelligence
    organizational intelligence
    self-awareness
    situational intelligence
    spatial intelligence
    logical-mathematical intelligence

    For instance, how would being hypersensitive to and aware of your own and other people's feelings affect our logical-mathematical intelligence at any given time? Even after resolving whatever "bandwidth" issues we currently have that cause us to focus on only one thing at a time, what does it mean to simultaneously follow a law that requires punishment and a compassion that wants to forgive the criminal?

    In other words, given the various types of intelligence and the absence of any clear hierarchy among them, what do you think it would mean in practice to be superintelligent?
  • David Pearce
    209
    fdrake
    Irremovable human biases?
    Yes. One example is status quo bias. A benevolent superintelligence would never have created a monstrous world such as ours. Nor (presumably) would benevolent superintelligence show status quo bias. But the nature of selection pressure means that philosopher David Benatar’s plea for voluntary human extinction via antinatalism (Better Never To Have Been (2008)) is doomed to fall on deaf ears. Apocalyptic fantasies are futile too (cf. https://www.hedweb.com/quora/2015.html#dptrans).
    So the problem of suffering is soluble only by biological-genetic means.

    The orthogonality thesis?
    All biological minds have a pain-pleasure axis. The pain-pleasure axis discloses the world’s inbuilt metric of (dis)value. Thus there are no minds in other life-supporting Hubble volumes with an inverted pleasure-pain axis. Such universality of (dis)value doesn’t mean that humans are all closet utilitarians. The egocentric illusion has been hugely genetically adaptive for Darwinian malware evolved under pressure of natural selection; hence its persistence. Yet we shouldn’t confuse our epistemological limitations with a deep metaphysical truth about the world. Posthuman superintelligences will not have a false theory of personal identity. This is why I’m cautiously optimistic that intelligent agents will phase out the biology of suffering in their forward light-cone. Yes, we may envisage artificial intelligences with utility functions radically different from biological minds (“paperclippers”). But classical digital computers cannot solve the phenomenal binding/combination problem (cf. https://www.hedweb.com/hedethic/binding-interview.html). Digital zombies can never become full-spectrum intelligences, let alone full-spectrum superintelligences. AI will augment us, not supplant us.

    Better tools of decision-theoretic rationality?
    Compare the metaphysical individualism presupposed by the technically excellent LessWrong FAQ (cf. https://www.lesswrong.com/posts/2rWKkWuPrgTMpLRbp/lesswrong-faq) with the richer conception of decision-theoretic rationality employed by a God-like full-spectrum superintelligence that could impartially access all possible first-person perspectives and act accordingly (cf. https://www.hedweb.com/quora/2015.html#individualism).
    So how can humans develop such tools of God-like rationality?
    As you say, it’s a monumental challenge. Forgive me for ducking it here.
  • David Pearce
    209
    Shawn
    Challenges for transhumanism?
    Where does one start?! Here I’ll focus on just one. If prospective parents continue to have children “naturally”, then pain and suffering will continue indefinitely. All children born today are cursed with a terrible genetic disorder (aging), a chronic endogenous opioid addiction and severe intellectual disabilities that education alone can never overcome. The only long-term solution to Darwinian malware is germline gene-editing. Unfortunately, the first CRISPR babies were conceived in less than ideal circumstances. He Jiankui and his colleagues were trying to create cognitively enhanced humans with HIV-protection as a cover-story (cf. https://www.technologyreview.com/2019/02/21/137309/the-crispr-twins-had-their-brains-altered/). All babies should be CRISPR babies, or better, base-edited babies. No responsible prospective parent should play genetic roulette with a child’s life. Unfortunately, the reproductive revolution will be needlessly delayed by religious and bioconservative prejudice. If a global consensus existed, we could get rid of suffering and disease in a century or less. In practice, hundreds if not thousands of years of needless pain and misery probably lie ahead.

    Neuralink? It’s just a foretaste. If all goes well, everyone will be able to enjoy “narrow” superintelligence via embedded neurochips – the mature successors to today’s crude prototypes. Everything that programmable digital zombies can do, you’ll be able to do – and much more. Huge issues here will be control and accountability. I started to offer a few thoughts, but they turned into platitudes and superficial generalities. “Narrow” superintelligence paired with unenhanced male human nature will be extraordinarily hazardous.
  • David Pearce
    209
    Tim Wood
    Super well-being?
    Let’s say, schematically, that our human hedonic range stretches from -10 to 0 to +10. Most people have an approximate hedonic set-point a little above or a little below hedonic zero. Tragically, a minority of people and the majority of factory-farmed nonhuman animals spend essentially their whole lives far below hedonic zero. Some people are mercurial, others are more equable, but we are all constrained by the negative-feedback mechanisms of the hedonic treadmill. In future, mastery of our reward circuitry promises e.g. a hedonic +70 to +100 civilisation – transhuman life based entirely on information-sensitive gradients of bliss (cf. https://www.gradients.com). Currently, we can only speculate on what guise such superhuman well-being will take, and how it will be encephalised. What will transhumans and posthumans be happy “about”? I don’t know – probably modes of experience that are physiologically inaccessible to today's humans (cf. https://www.hedweb.com/quora/2015.html#irreversible). But one of the beauties of hedonic recalibration is that (complications aside) it’s preference-neutral. Who wouldn’t want to wake up in the morning in an extremely good mood – and with their core values and preference architecture intact? Aristotle’s “eudaimonia” or sensual debauchery? Mill’s “higher pleasures” or earthy delights? You decide. Crudely, everyone’s potentially a winner with biological-genetic interventions. Compare the zero-sum status-games of Darwinian life. Unlike getting rid of suffering, I don’t think superhappiness is morally urgent; but post-Darwinian life will be unimaginably sublime.

    What about hedonic uplift for existing human and nonhuman animals prior to somatic gene-editing? Well, one attractive option is ACKR3 receptor blockade (cf. https://www.nature.com/articles/s41467-020-16664-0), perhaps in conjunction with selective kappa opioid receptor antagonism. Enhancing “natural” endogenous opioid function and raising hedonic set-points is vastly preferable to taking well-known drugs of abuse that typically activate the negative feedback mechanisms of the CNS with a vengeance. An intensive research program is in order. Pitfalls abound.

    In the long run, however, life on Earth needs a genetic rewrite. Pharmacological stopgaps aren't the answer.
  • David Pearce
    209
    counterpunch
    “A miracle chip?”
    Transhumanists don’t advocate intracranial self-stimulation or unvarying euphoria. For a start, uniform bliss wouldn’t be evolutionarily stable; wireheads don’t want to raise baby wireheads.
    Transhumanists don’t advocate getting “blissed out”. Instead, we urge a biology of information-sensitive gradients of well-being. Information-sensitivity is critical to preserving critical insight, social responsibility and intellectual progress.
  • David Pearce
    209
    Pfhorrest
    I’m sad to hear of the pushback you’ve received on the forum. Instead of saying one is a “negative utilitarian”, perhaps try “secular Buddhism” or “suffering-focused ethics” (cf.
    https://magnusvinding.com/2020/05/31/suffering-focused-ethics-defense-and-implications/). I sometimes simply say that I would “walk away from Omelas”. No amount of pleasure morally outweighs the abuse of even a single child: https://www.cmstewartwrite.com/single-post/a-question-for-david-pearce . If a genie made you an offer, would you harm a child in exchange for the promise of millions of years of indescribable happiness? I'd decline – politely (I'm British).

    Academic pushback? I guess the average academic response isn’t much different from the average layperson’s: to most, an architecture of mind based entirely on information-sensitive gradients of well-being simply isn’t genetically credible – whether for an individual or a civilisation, let alone a global ecosystem (cf. https://www.gene-drives.com). At times my imagination fails too. Of course there are exceptions – but the academics who’ve been in touch directly to offer support are almost by definition atypical.

    A fairly common critical response would probably be Professor Brock Bastian's The Other Side of Happiness: Embracing a More Fearless Approach to Living (2018):
    https://www.hedweb.com/social-media/pairagraph.html
  • counterpunch
    1.6k


    Transhumanists don’t advocate intracranial self-stimulation – David Pearce

    You don't?

    Neuralink? It’s just a foretaste. If all goes well, everyone will be able to enjoy “narrow” superintelligence on embedded neurochips – the mature successors to today’s crude prototypes. – David Pearce

    For a start, uniform bliss wouldn’t be evolutionarily stable – David Pearce

    Longevity is not a stable evolutionary state either! I did not think that was a big deal for you:

    biotech can liberate us from the obscene horrors and everyday squalor of Darwinian life. – David Pearce

    Do you appeal to evolutionary stability - or seek to transcend it?

    Transhumanists don’t advocate getting “blissed out”. – David Pearce

    Alexander Graham Bell originally suggested 'ahoy' be adopted as the standard greeting when answering a telephone. Not many other people use it the way it was intended.

    Information-sensitivity is critical to preserving critical insight, social responsibility and intellectual progress. – David Pearce

    The problem I foresee is that, currently, people get 'blissed out' because they don't want to think; they want to be less sensitive to information - not more so.

    But you seem to have missed my point. There are technologies we have available that we need to apply to survive as a species, and we still don't apply them. Where's the incentive to make immortals that are sublimely contented and wicked smart?

    p.s. I know transhumanists don't advocate actual immortality.
  • 3017amen
    3.1k


    Hello David!

    As has been said, thank you kindly for sharing some of your thoughts and time here. Just two quick questions relating to the third Super: can you please define the following concepts that you used to describe your thesis:

    1. "Involuntary Suffering"

    2. "Pro-Social"

    I am trying to parse both the practical and theoretical implications of those concepts, so as to understand Transhumanism a bit more...

    Thank you in advance.
  • _db
    3.6k
    Thanks for replying, David.

    Only technology (artificial intelligence, robotics, CRISPR, synthetic gene drives, preimplantation genetic screening and counselling) can allow intelligent moral agents to reprogram the biosphere and deliver good health for all sentient beings.
    Global warming? There are geoengineering fixes.
    Overpopulation? Fertility rates are plunging worldwide.
    Famine? More people now suffer from obesity than undernutrition.
    David Pearce

    I don't want to come across like some neo-Luddite who hates all technology, but:

    Reprogramming the biosphere etc. could result in it becoming dependent on the technological infrastructure. And if this infrastructure fails, then the biosphere will be unable to recover on its own. Think of a business that has nearly all of its operations digitized in the cloud; when those servers go down, the business is screwed. Though perhaps you could provide some examples of geoengineering fixes that don't carry the possibility of catastrophic failure.

    With respect to famine, the fact that obesity is more common than undernutrition is an example of technology solving a problem only to introduce another one.

    I share some of Jacques Ellul's reservations about the effects of technology. But only biotechnology can recalibrate the hedonic treadmill, eradicate the biology of involuntary pain and suffering, and deliver a world based on gradients of intelligent bliss. – David Pearce

    Could you elaborate on these reservations you share with Ellul?

    Jacques Ellul himself was deeply religious. [...] But science promises the most profound spiritual revolution of all time. Tomorrow’s molecular biology can identify the molecular signatures of spiritual experience, refine and amplify its biological substrates, and deliver life-long spiritual ecstasies beyond the imagination of even the most god-intoxicated temporal-lobe epileptic. – David Pearce

    I am not religious or spiritual myself, but I think Ellul's critique of technology can be evaluated independently of his religious beliefs.

    Would a super-rational scientific soma really be spiritual? What do you mean by spiritual here?
  • David Pearce
    209
    Tay San
    The idea that pleasure and pain are largely if not wholly relative is seductive. It’s still probably the most common objection to the idea of a civilisation based entirely on gradients of bliss. However, consider the victims of life-long pain and depression. Some chronic depressives can’t imagine what it’s like to be happy. In some severe cases, chronic depressives don’t even understand what the word “happiness” means – they conceive of happiness only in terms of a reduction of pain. Now we wouldn’t (I hope) claim that chronic depressives can’t really suffer because they’ve never experienced joy. Analogous syndromes exist at the other end of the Darwinian pleasure-pain axis. Unipolar euphoric mania is dangerous and extraordinarily rare. Yet there is also what psychologists call extreme "hyperthymia". Hyperthymics can be very high functioning. My favourite case-study is fellow transhumanist Anders Sandberg (“I do have a ridiculously high hedonic set-point”). Anders certainly knows he is exceedingly happy – although unless pressed, he doesn’t ordinarily talk about it. He is also socially responsible, intellectually productive and exceptionally smart. In common with depression and mania, hyperthymia has a high genetic loading. Gene editing together with preimplantation genetic screening and counselling for all prospective parents offer the potential prospect of lifelong intelligent happiness for future (trans)humans. For sure, creating an entire civilisation of hyperthymics will be challenging. Not least, prudence dictates preserving the functional analogues of depressive realism – at least in our intelligent machines. But unlike ignorance, known biases can be corrected.

    I’m a dyed-in-the-wool pessimist by temperament. But for technical reasons, I suspect the long-term future of sentience lies in gradients of sublime bliss beyond the bounds of human experience.
  • schopenhauer1
    11k
    One example is status quo bias. A benevolent superintelligence would never have created a monstrous world such as ours. Nor (presumably) would benevolent superintelligence show status quo bias. But the nature of selection pressure means that philosopher David Benatar’s plea for voluntary human extinction via antinatalism (Better Never To Have Been (2008)) is doomed to fall on deaf ears. Apocalyptic fantasies are futile too (cf. https://www.hedweb.com/quora/2015.html#dptrans).
    So the problem of suffering is soluble only by biological-genetic means.
    David Pearce

    Hi David, I was wondering if your philosophy is more aggregate-centered or individual-centered. It seems to me to be more aggregate-centered. Often these ethical philosophies overlook the pain and suffering of individuals in order to effect the greatest change. One example here is that you admit that this world can be pretty monstrous and would not be something a benevolent superintelligence would want. However, your vision of a transhumanist utopia seems to lie in a far-off future. Presumably, from now until that future time, billions of people will have lived and suffered. That being said, wouldn't David Benatar's and antinatalism's argument in general be the best alternative in terms of suffering prevented? Basically, if you prevent the suffering in the first place, you have cut off the suffering right from the start. And as Benatar's asymmetry shows, no "person" suffers by not being born to experience the "goods" of life. It's a win/win, it seems.
  • fdrake
    6.7k
    Better tools of decision-theoretic rationality?
    Compare the metaphysical individualism presupposed by the technically excellent LessWrong FAQ (cf. https://www.lesswrong.com/posts/2rWKkWuPrgTMpLRbp/lesswrong-faq) with the richer conception of decision-theoretic rationality employed by a God-like full-spectrum superintelligence that could impartially access all possible first-person perspectives and act accordingly (cf. https://www.hedweb.com/quora/2015.html#individualism).
    So how can biological humans develop such tools of God-like rationality?
    As you say, it’s a monumental challenge. Forgive me for ducking it here.
    David Pearce

    Thank you for your in depth answer. I hope you don't mind me following up on (only) this one point:

    By contrast, the fleeting synchronic unity of the self is real, scientifically unexplained (cf. the binding problem) and genetically adaptive. How a pack of supposedly decohered membrane-bound neurons achieves a classically impossible feat of virtual world-making leads us into deep philosophical waters. But whatever the explanation, I think empty individualism is true. Thus I share with my namesakes – the authors of The Hedonistic Imperative (1995) – the view that we ought to abolish the biology of suffering in favour of genetically-programmed gradients of superhuman bliss. Yet my namesakes elsewhere in tenselessly existing space-time (or Hilbert space) physically differ from the multiple David Pearces (DPs) responding to your question. Using numerical superscripts, e.g. DP564356, DP54346 (etc), might be less inappropriate than using a single name. But even “DP” here is misleading because such usage suggests an enduring carrier of identity. No such enduring carrier exists, merely modestly dynamically stable patterns of fundamental quantum fields. Primitive primate minds were not designed to “carve Nature at the joints”. — David Pearce, Quora Answers by David Pearce

    (above quote from linked document for context)

    The interpretive emphasis on the human body physically simulating that body's self-awareness is well taken. I would like to take that simulation idea and push on its boundaries - the boundaries of the body, when the body is seen as a space-time process.

    I was wondering if you had any comments regarding the scope of that process of simulation, cf. the extended mind thesis? And possibly an ethical challenge this raises to the primacy of biogenetic intervention in the reduction of long-term suffering: if the human mind's simulation process is saturated with environmental processes, why is the body a privileged locus of intervention for suffering reduction and not its environment?

    Also, in that context of the philosophical puzzles of gene : environment interaction, what challenges and opportunities do you think the heritability of epigenetic effects raises for the elimination of suffering through biogenetic science?
  • Outlander
    2.2k
    Wouldn't this create two classes of humans, dividing us, i.e. enhanced and naturals? They will be smarter and more blissful, yes, but what's to stop them from becoming incredibly stronger as well? Won't the stronger group (transhumans) oppress the other? How do we know "transformation" won't become mandatory? Take the pandemic: there is talk of non-vaccinated persons becoming a threat to public safety. Who's to say the natural human form won't be declared a threat by the enhanced transhumans, due to its tendency to pick up diseases, and be put into camps to live as the savage relics of a time now past that they are? Wouldn't this halt or alter evolution entirely, denying us the beauty and potential of what nature has to offer, the most significant being what created you and allowed you to know and believe all you do? Perhaps your naturally evolved form will solve this or allow you to come up with even better ideas. What would you say to convince those who hold these both non-religious and non-bioconservative views?

    Echoing that concern: if there is technological enhancement, won't this be vulnerable to hacking (smart cars can be hacked and controlled, brakes disabled, etc.) or to man-made or natural EMPs? Wouldn't such a device allow a transhuman to be murdered or "disabled" with no evidence?

    Also, where does one draw the line between a human with significant technological/genetic enhancements, a true cyborg or laboratory experiment, and a mere robot/non-human abomination?

    Best,