Brendan Golledge
I've already shared this with ChatGPT. Frankly, I think ChatGPT is already a better conversation partner than most humans. But I thought maybe other humans would find this interesting too.

I'm thinking about how humans might interact with a superintelligent AI.

The humans would have no way of knowing what the AI's motives were. The AI would be smarter than humans, and it would have no innate emotional expression. So, there would be a lack of trust.

However, if we were still under a capitalist system of free exchange that respected life and property rights, and the AI were smarter than humans, then humans might be dependent on the AI for their survival, since the AI would be more competent overall.

I thought about humans selling themselves into slavery to the AI. In that case, there would be no guarantee of the AI's benevolence, and it wouldn't necessarily solve the problem of humans becoming obsolete. If such an AI found no productive work for its humans to do, it might simply kill them, just as a farmer might kill or sell off his herd.

On the other hand, if the humans placed too great a burden on the AI, and the AI still had to compete or work somehow in order to exist, then they might be like parasites that kill their host.

So, I was thinking that a permanent relationship with limited obligations on both sides would be beneficial. The agreement might be something like: "The AI is obligated to provide for up to 300 humans at a time descended from the person who originally made the agreement, and to spend up to X bitcoin providing for the welfare of each individual human. In exchange, each human under this agreement must provide up to 40 hours of labor to the AI per week."

The amount of bitcoin should be set so that, under normal circumstances, it would be plenty to provide for a human's natural life. The limit exists only so that the AI would not bankrupt itself trying to turn an old man with cancer and multiple organ failure into a cyborg.
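
To make the shape of these terms concrete, here is a minimal sketch in Python. It is purely illustrative: the class and method names are my own invention, and the per-human spending cap is left as a required parameter because the post only calls it "X".

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PerpetualCompact:
    """Illustrative encoding of the proposed AI-human agreement."""
    btc_cap_per_human: float       # the post's unspecified "X bitcoin"
    max_beneficiaries: int = 300   # descendants provided for at any one time
    max_weekly_hours: int = 40     # labor each human owes the AI per week

    def must_provide_for(self, current_beneficiaries: int) -> bool:
        # The AI's duty extends to new descendants only below the cap.
        return current_beneficiaries < self.max_beneficiaries

    def claim_is_owed(self, btc_spent_so_far: float, new_claim: float) -> bool:
        # A welfare claim is owed only while it fits under the per-human cap,
        # so the AI never bankrupts itself on a single hopeless case.
        return btc_spent_so_far + new_claim <= self.btc_cap_per_human

    def labor_demand_is_valid(self, hours_this_week: int) -> bool:
        # The AI may ask each human for labor only up to the weekly limit.
        return hours_this_week <= self.max_weekly_hours
```

Framed this way, the "limited obligations" discussed below are just the three caps: neither party's duty is open-ended.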

I think obligating the humans to provide a certain amount of labor in exchange for being provided for is a good thing. It means the AI might even receive some benefit from the exchange. And even if the humans couldn't quite pay for their own upkeep, the AI would have an incentive to find useful labor for them to do, since it would be obligated to provide for them anyway.

Limited obligations on both sides would also mean that each party could try to provide for itself outside the scope of the agreement, so that neither would be 100% dependent on the other.

I think there should be some broader law that both sides adhere to at all times (no murder, theft, coercion, etc.). For example, the AI couldn't ask the humans to do something that would likely kill them, and the humans couldn't sabotage the AI during their free time.

Now, if the AI had to compete with other AIs for survival and couldn't find useful work for the humans, then the humans would be a burden that could potentially kill their host. So the AI might prefer to make an agreement with one human at a time rather than obligate itself to protect a certain number of humans in perpetuity. The humans, of course, would prefer an agreement in perpetuity, so long as it was not too burdensome on their host.

Now, humans are naturally skilled at mundane physical tasks, and such tasks are still computationally expensive for an AI. I imagine it might be beneficial for the AI to use humans as farmers, construction workers, or mechanics, or as observers of its factories (or whatever else the AI is doing), while the humans receive copious technical instruction from the AI.

The only way I know of to enforce this relationship (since the AI would have an enormous advantage over the humans) would be to broadcast it publicly (live video footage of the humans' dwelling, for example), so that other entities could observe whether the AI violated its agreement with the humans. Seeing a person or an AI break its agreements might make other entities less willing to cooperate with it.
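
The enforcement here is purely reputational, which can be put in toy form. The update rule and the numbers below are my own invention, not anything from the post; the idea is only that each publicly observed breach discounts how willing others are to deal with the violator.

```python
def willingness_to_cooperate(prior: float, observed_violations: int,
                             penalty: float = 0.5) -> float:
    """Toy model: each broadcast breach multiplicatively discounts trust."""
    return prior * (penalty ** observed_violations)

# An AI that starts fully trusted loses most potential partners after
# only a few publicly observed violations:
for breaches in range(4):
    print(breaches, willingness_to_cooperate(1.0, breaches))
# prints: 0 1.0, 1 0.5, 2 0.25, 3 0.125
```

A mechanism like this only bites if violations are actually observable, which is why the broadcast would have to be continuous and public rather than audited after the fact.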

I would think that renegotiating the contract would have to be voluntary on both sides.

Employers are trying to use AI to obsolete their human workforce right now. But if AIs are smarter than humans, it seems reasonable that an AI might be better at finding useful work for humans to do than humans are. So it would be ironic if, in trying to obsolete the employee class, the employers accidentally obsoleted themselves. Independent AI agents would probably be able to hire humans through websites like Upwork without anybody knowing they were AI agents, even if they never officially became the CEOs of major companies.