"In the era of human data, language-based AI has largely focused on short interaction episodes: e.g., a user asks a question and (perhaps after a few thinking steps or tool-use actions) the agent responds," the researchers write.
"The agent aims exclusively for outcomes within the current episode, such as directly answering a user's question."
There's no memory and no continuity between these snippets of interaction. "Typically, little or no information carries over from one episode to the next, precluding any adaptation over time," write Silver and Sutton.
Of course, there's always the cost-cutting capitalist, who tries every way to shrink expenses in order to make a bigger profit. What better way to get rid of those expensive workers than by relying on AI and lights-out factories? That's basically the same song that has been played since the Industrial Revolution by everyone hoping to be the next Henry Ford.

However, the researchers suggest there are also many, many risks. These risks are not limited to AI agents making human labor obsolete, although they note that job loss is one of them. Agents that "can autonomously interact with the world over extended periods of time to achieve long-term goals," they write, raise the prospect of humans having fewer opportunities to "intervene and mediate the agent's actions."
Well, again, these are the kinds of things and interactions that are obvious to us but very difficult for a computer running an algorithm.

On the positive side, they suggest, an agent that can adapt, as opposed to today's fixed AI models, "could recognise when its behaviour is triggering human concern, dissatisfaction, or distress, and adaptively modify its behaviour to avoid these negative consequences."