Pinprick
Thank you.
What is the relationship between superintelligence and super-wellbeing?
It’s tricky. The best I can manage is an analogy. Consider AlphaGo. Compared to humble human champions, AlphaGo is a superintelligence. Nonetheless, even club players grasp something about the game of Go that AlphaGo doesn’t comprehend. The “Go superintelligence” is an ignorant zombie. I don’t know how posthuman superintelligences will view the Darwinian era that spawned them. Maybe posthumans will allude, rarely, to Darwinian life in the way that contemporary humans allude, rarely, to the Dark Ages. Most humans know virtually nothing about the Dark Ages beyond the name – and have no desire to investigate further. What’s the point? Maybe superintelligences occupying a hedonic range of, say, +80 to +100 will conceive of hedonically sub-zero Darwinian states of consciousness by analogy with notional states below hedonic +80 – their functional equivalent of the dark night of the soul, albeit unimaginably richer than human “peak experiences”. The nature of Sidgwick’s “natural watershed” itself, i.e. hedonic zero, may be impenetrable to them, let alone the nature of suffering. Or maybe posthuman superintelligences will never contemplate the Darwinian era at all. Maybe they’ll offload stewardship of the accessible cosmos to zombie AIs. On this scenario, programmable zombie AIs will ensure that sub-zero experience can never recur within our cosmological horizon, without any deep understanding of what they’re doing (cf. AlphaGo). In any event, I don’t think posthuman superintelligences will seek to understand suffering in any full-blooded empathetic sense. If any mind were to glimpse even a fraction of the suffering in the world, it would become psychotic.
Whatever the nature of mature superintelligence, I think it’s vital that humans and transhumans investigate the theoretical upper bounds to intelligent agency so we can learn our ultimate cosmological responsibilities. Premature defeatism might be ethically catastrophic.
Suffering and desire?
Buddhists equate the two. But the happiest people tend to have the most desires, whereas depressives are often unmotivated. Victims of chronic depression suffer from “learned helplessness” and behavioural despair. So the extinction of desire
per se is not nirvana. Quite possibly transhumans and posthumans will be superhappy
and hypermotivated. Intuitively, for sure, extreme motivation is a recipe for frustrated desire and hence suffering. Yet this needn’t be so if we phase out the biology of experience below hedonic zero. Dopaminergic “wanting” is doubly dissociable from mu-opioidergic “liking”; but motivated bliss is feasible too. If you’ll permit a chess analogy, I always desire to win against my computer chess program. I’m highly motivated. But I always lose. Such frustrated desire never causes me suffering unless my mood is dark already. The same is feasible on the wider canvas of Darwinian life as a whole
if we reprogram the biosphere to eradicate suffering. Conserving information-sensitivity is the key, not absolute position on the pleasure-pain axis:
https://www.hedweb.com/quora/2015.html#hedonictreadmill