• 180 Proof
    15.8k
    @Carlo Roosen @Wayfarer @noAxioms @punos @ssu @Christoffer et al

    Consider this summary of a prospective "step beyond" LLMs & other human knowledge-trained systems towards more robust AI agents and even AGI currently in the works ...

    https://www.zdnet.com/article/ai-has-grown-beyond-human-knowledge-says-googles-deepmind-unit/
  • ssu
    9.3k

    Well, there are so many ways this could be answered that it would be useful to know which issues and fields you are interested in. Are the theoretical problems, and how to overcome them, the issue? Or practical issues? Economic and social consequences? I'll answer first about the theoretical issues in the article and then about the real-world consequences.

    Whatever is said, it's still the limitations that computers using algorithms have. With algorithms and ultra-quick computation, computers/AI can dominate games (which have rules) and make use of massive amounts of data. Yet not everything is like that at all. For example, from the article, one difficulty:

    "In the era of human data, language-based AI has largely focused on short interaction episodes: e.g., a user asks a question and (perhaps after a few thinking steps or tool-use actions) the agent responds," the researchers write.

    "The agent aims exclusively for outcomes within the current episode, such as directly answering a user's question."

    There's no memory and no continuity between the snippets of interaction in prompting. "Typically, little or no information carries over from one episode to the next, precluding any adaptation over time," write Silver and Sutton.
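
    To make that concrete, here is a minimal sketch (my own illustration, not anything from the paper) of the stateless episode pattern versus an agent that carries a summary of past episodes forward. The complete() function is a placeholder standing in for any language-model call, not a real API:

        # Placeholder for a language-model completion call; not a real API.
        def complete(prompt: str) -> str:
            return "..."

        # Stateless episodes: every question starts from a blank slate,
        # so nothing learned in one episode survives into the next.
        def answer_stateless(question: str) -> str:
            return complete(question)

        # Carry-over: a running summary of past episodes is prepended,
        # so information persists from one interaction to the next.
        class PersistentAgent:
            def __init__(self):
                self.memory = ""  # accumulated summary of past episodes

            def answer(self, question: str) -> str:
                reply = complete(f"Context so far: {self.memory}\nQuestion: {question}")
                # Fold this episode back into memory for future episodes.
                self.memory = complete(
                    f"Summarise briefly: {self.memory}\nQ: {question}\nA: {reply}"
                )
                return reply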

    Perhaps it could be argued that the engineers have really tried to tackle the "Turing Test", but this test doesn't give us much information, only that we can be fooled in some interactions. Yet the above shows that AI still lacks subjectivity: an understanding of the role it itself has, and an awareness of what the discussion is about and of the role of the interaction itself. Let me try to explain this to you.

    You, @180 Proof, put the question to me and to others like @Carlo Roosen, @Wayfarer, @noAxioms, @punos and @Christoffer, fellow members here on the PF. You will likely participate, as we have seen in earlier threads, and hopefully others will too.

    Yet assume you had put this discussion thread to six 12-year-olds at your local school who are interested in AI. You would be the adult in the conversation, and understanding that you are talking to children, you would take a different role. You wouldn't be offended if some of the replies were a bit ignorant, as obviously not every 12-year-old knows basics like how computers work. Now think if you had this conversation with the DeepMind scholars David Silver and Richard Sutton themselves, along with four other top-notch AI scientists. Again this would change things: you might want to use the time to learn more about the issue specifically. For us, a discussion has a larger surrounding, a reason to have it, and an understanding of the other participants.

    In fact you see this problem in the forum itself, where we are all total strangers to everybody else. Especially in mathematics, someone can have an idea that is actually false (and provably false) and be told immediately by a few other members that there's a mistake. Yet many times the response isn't "OK, thanks", but the person getting angry and insisting he or she is correct. That others are like you is perhaps a natural starting assumption in an anonymous forum. In a school or university environment, if you are one of the pupils in the class and the math teacher says that you are incorrect, and you get the subtle message from the class that they indeed share the teacher's view, you won't continue to insist that you are right. Or few do.

    Hence perhaps something like this is what Silver and Sutton are trying to argue with the Age of Experience: "Agents will inhabit streams of experience, rather than short snippets of interaction", and they draw an analogy "between streams and humans learning over a lifetime of accumulated experience, and how they act based on long-range goals, not just the immediate task".

    And what is that? Subjectivity, having that consciousness, understanding one's role in the world. Lifetime learning based on experience. Again, really big questions that we haven't yet answered.
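
    As a toy illustration of that contrast (again my own sketch, not from the paper), here is an agent whose learned value estimates persist over its whole lifetime of episodes, rather than being reset after each one:

        import random

        class StreamAgent:
            # An epsilon-greedy bandit learner. Its estimates live as long as
            # the agent does, so experience accumulates over the whole stream.
            def __init__(self, n_actions: int, epsilon: float = 0.1):
                self.values = [0.0] * n_actions
                self.counts = [0] * n_actions
                self.epsilon = epsilon

            def act(self) -> int:
                if random.random() < self.epsilon:
                    return random.randrange(len(self.values))
                return max(range(len(self.values)), key=lambda a: self.values[a])

            def learn(self, action: int, reward: float) -> None:
                # Incremental average of the rewards observed for this action.
                self.counts[action] += 1
                self.values[action] += (reward - self.values[action]) / self.counts[action]

        # The same agent object spans many episodes, so what it learned in
        # episode 1 still shapes how it acts in episode 1000. The reward
        # means below are made up for the example.
        agent = StreamAgent(n_actions=3)
        for episode in range(1000):
            a = agent.act()
            reward = random.gauss([0.1, 0.5, 0.3][a], 1.0)
            agent.learn(a, reward)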

    Then about the real-world effects:

    However, they suggest there are also many, many risks. These risks are not just focused on AI agents making human labor obsolete, although they note that job loss is a risk. Agents that "can autonomously interact with the world over extended periods of time to achieve long-term goals," they write, raise the prospect of humans having fewer opportunities to "intervene and mediate the agent's actions."
    Of course there's always the cost-cutting capitalist, who tries in every way to make his or her expenses and costs far smaller in order to have a bigger profit. What better and cooler way to get rid of those expensive workers than relying on AI and lights-out factories? Well, that's basically the same song that has been played since the industrial revolution by everyone hoping to be the next Henry Ford.

    The other "risk" is a bit more confusing to me.Would it be like the admin here finally being fed up with the low quality postings of us and getting the best philosophical AI to post on this site, which then the AI would dominate? An end result of having 95% of the discussion threads being written by AI usually countering other AI? Well, how about the use of troll farms and AI to dominate political rhetoric? Something that is a problem already. Likes and the number of repost do influence the algrorithms controlling the social media already.

    On the positive side, they suggest, an agent that can adapt, as opposed to today's fixed AI models, "could recognise when its behaviour is triggering human concern, dissatisfaction, or distress, and adaptively modify its behaviour to avoid these negative consequences."
    Well, again, some of those things and interactions are obvious to us, but very difficult for a computer using an algorithm.
  • noAxioms
    1.6k
    Thanks for the link.
    Dangers aside, how exactly does one go about training something smarter than its trainers?

    They talk about the dangers of not being able to pull the plug on the thing after a while, but what if the thing is truly benevolent and we still don't like what it suggests, what it sees that we cannot?
    Will it even be a problem? I already see so many willingly enslaving themselves to the machines, since it offloads tasks that they would otherwise have to do themselves. I suspect people will be pretty happy with their robot zookeepers, at least until the machines decide the zoo serves them no purpose.

    Task an AI with the design of the new way of things. What should be the goals of such a design? Humans are utterly incapable of even making that step, let alone finding a way to implement it.