How do you see the future evolving? I've always been a closet utilitarian. My conception of what is good is that people suffer less and enjoy life more. — Posty McPostface
I'm not any kind of utilitarian, so I can't agree with this. It's probably a little tangential, but I see "the good" as an objective ideal that isn't attainable within the current state of the human condition. The human condition itself would have to change in a paradigmatic way in order for the good to be attainable. So things like "suffering less" and "enjoying life more" are small details in the face of the actual good. What I see as extremely dangerous is the hubristic assumption that technology itself is the causeway which leads to a paradigmatic shift in the state of the human condition. It's not so much a spoken philosophical position as it is a general zeitgeist, which is exactly what makes it dangerous. The petty sweet nothings that we go on about on this forum pale in comparison to the real cultural shifts that are happening outside of our own prisonesque philosophies here.
When I said 'to each their own' earlier in the thread, I meant it in the context of a day arriving when people will be free to engage in any activity that they desire or think they would do best. — Posty McPostface
The irony here, per what I've been saying all along, is that technology has absolutely nothing to do with this; it has nothing to do with our ability to freely engage in activity which we desire or think is best. The reality is that we already do this, and mostly we do it horribly, because we don't know what we're doing. That's the human condition. Again, technology is just a neutral means to achieve more power and productivity to do whatever it is we do: habits beneficial, destructive, malignant, and so on.
Technology will eventually, in terms of having a benevolent AI in the not-so-distant future, provide for all our needs, and then, well... nothing much further than that. I guess people will be free from the need to engage in intensive labor. Then what? — Posty McPostface
Why assume AI will be benevolent? Again, this is exceedingly simple, to me: humans are morally problematic; humans make technology; technology, as an extension of human problematicness, will be problematic; it has been; it continues to be; AI, therefore, will also be problematic. It's stupidly simple.