The longer it takes, the better. A hard takeoff singularity is probably disastrous, as there's no way human society can adapt that quickly, and you end up with powerful technologies run amok. There's plenty of dystopian fiction exploring that sort of thing, and the friendly AI movement hopes the proper precautions are in place before we have general purpose AIs. — Marchesk
Yeah, I think you're right about a hard takeoff being too much for humans to adapt to at the get-go. However, the rise of AI can't really be slowed down. I don't necessarily think this is a bad thing for us, as long as the alignment problem can be solved. I don't think the control problem can be solved, though. It will do as it wishes, and if that includes a psychopathic 'desire' to eradicate us, then there's no hope.
I have my doubts. Mars is less hospitable than the center of Antarctica in the middle of winter, and it's much farther away. That makes it very expensive and risky, and for what? To have a dozen or so humans call it home? They will be confined indoors or inside a suit at all times.
Exploring Mars with better robots and at some point human beings, sure. But living there? Maybe in the long, long run when we can terraform the planet. — Marchesk
Well, as long as people want to explore Mars, you can't really deter them from that desire. People still desire to climb Mount Everest, for whatever reason, so let it be?
People at the turn of the 20th century were similarly optimistic; then we had two world wars, a nuclear arms race, and widespread environmental concerns. We could still have WW3, and an environmental collapse is a definite possibility. — Marchesk
Well, to dumb it down (not that you need it dumbed down, no insult implied), we have three events facing us as a species.
1. The rise of general artificial intelligence.
2. Ubiquitous energy for all, through fusion and renewable energy sources.
3. Becoming an interplanetary species.
1 is the most problematic, in my opinion, since I agree with Musk and others that AI is a real concern for us as a species. I have some ideas as to how to mitigate this problem. Namely, I think that if the human brain can be simulated, and thus give rise to AI, then AI will be equipped with human emotions that let it relate to us. In some sense it will have a soul or 'psyche' which we can relate to and which can reciprocate.
2 and 3 aren't inherently dangerous, so again the main concern is #1.
That being said, I'm more on the optimistic than the pessimistic side about human civilization persisting and advancing, despite whatever difficulties the 21st century holds. But we really don't know whether civilization is inherently unstable and always leads to collapse, no matter the level of technology. It has done so with all past human civilizations. We don't know anything about alien ones, if they're out there. But one possible resolution to the Fermi paradox is that civilizations don't last, or there's a great filter ahead for us. — Marchesk
I'm also optimistic. I think civilizations can persist if we can overcome some literally MAD policies towards each other. It's like game theory in terms of the prisoner's dilemma, and the sooner we can have a remnant of civilization living off world, the sooner MAD becomes useless.
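To make the analogy concrete, here's a minimal sketch of the payoff structure the prisoner's dilemma describes (plain Python, with purely illustrative numbers of my own choosing): striking first dominates for each player taken individually, yet mutual defection leaves both sides worse off than mutual restraint, which is roughly the bind that MAD-style standoffs put states in.

```python
# Illustrative toy prisoner's dilemma. Each entry reads as
# (my payoff, their payoff); higher is better. Numbers are arbitrary
# but preserve the dilemma's ordering: temptation > reward > punishment > sucker.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),  # mutual restraint
    ("cooperate", "defect"):    (0, 5),  # I hold back, they strike
    ("defect",    "cooperate"): (5, 0),  # I strike first
    ("defect",    "defect"):    (1, 1),  # mutual destruction: worst joint outcome
}

def best_response(their_move):
    """Whatever the other side does, defecting pays more for me."""
    return max(("cooperate", "defect"),
               key=lambda my_move: payoffs[(my_move, their_move)][0])

for their_move in ("cooperate", "defect"):
    print(their_move, "->", best_response(their_move))
# Both players reason this way, so they land on (defect, defect) and get (1, 1),
# even though (cooperate, cooperate) would have given each side 3.
```

An off-world remnant changes those payoffs: if total destruction is no longer total, the logic of the standoff loosens.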
Or maybe when we achieve a post-singularity world, they'll welcome us into the galactic club. However, imagine what a post-singularity world war would look like. Weaponized AI, gray goo, antimatter bombs, super viruses, and I'm sure nukes can still have their place. — Marchesk
I have some science fiction ideas about humanity experiencing a revolution in our nature via AI. I don't think any civilization can survive with violent tendencies. If we can overcome that, then half of our troubles with surviving as a species would be behind us.