But isn't there something "behind" the stories that a person cannot wimp out on even if she tried? — Astrophel
But eventually it has to get to a position that it hasn't seen in its training data, and then what? — noAxioms
I also checked if it was playing unique games not found in its training dataset. There are often allegations that LLMs just memorize such a wide swath of the internet that they appear to generalize. Because I had access to the training dataset, I could easily examine this question. In a random sample of 100 games, every game was unique and not found in the training dataset by the 10th turn (20 total moves). This should be unsurprising considering that there are more possible games of chess than atoms in the universe.
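For the curious, here is a minimal sketch of the kind of uniqueness check described above, assuming both the training games and the model's games are stored one space-separated move string per line. The file names, the data format, and the 20-half-move cutoff are illustrative assumptions, not details taken from the post.

```python
import random

# Load the training corpus. Assumption: one game per line as a
# space-separated move string, e.g. "e4 e5 Nf3 Nc6 ...".
with open("train_games.txt") as f:           # hypothetical file name
    train_games = [line.split() for line in f]

PREFIX_LEN = 20  # 10 full turns = 20 half-moves, the cutoff from the quote
train_prefixes = {tuple(g[:PREFIX_LEN]) for g in train_games}

# Load the model's generated games (hypothetical file) and sample 100.
with open("generated_games.txt") as f:
    generated = [line.split() for line in f]

sample = random.sample(generated, k=min(100, len(generated)))
collisions = sum(tuple(g[:PREFIX_LEN]) in train_prefixes for g in sample)
print(f"{collisions}/{len(sample)} sampled games share a 20-move prefix with training data")
```

Storing the opening prefixes in a set makes each membership test O(1), so even a large corpus can be checked quickly.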
Also in passing I learned about linear probes, which I gather are simpler neural nets that can analyze the internals of other neural nets. So they are working on the "black box" problem, trying to understand the inner workings of neural nets. That's good to know. — fishfry
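As an aside for readers who want the concrete picture: a linear probe is usually just a single linear classifier trained on a frozen model's internal activations, rather than a full neural net in its own right. Below is a minimal sketch of the idea using scikit-learn; the activation arrays and the binary label are random placeholders standing in for activations actually extracted from a model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder data: in a real probe study these would be hidden-state
# vectors cached from the frozen model, one per board position, with a
# binary label for the feature being decoded (e.g. "white pawn on e4").
rng = np.random.default_rng(0)
activations = rng.normal(size=(5000, 512))   # 5000 positions x 512-dim states
labels = rng.integers(0, 2, size=5000)       # placeholder feature labels

X_train, X_test, y_train, y_test = train_test_split(
    activations, labels, test_size=0.2, random_state=0
)

# The probe itself is just a linear classifier over the activations.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probe accuracy:", probe.score(X_test, y_test))
```

On these random placeholders the probe will score near chance (about 50%); with real activations, accuracy well above chance is taken as evidence that the model linearly encodes the probed feature.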
And thanks so much for pointing me at that example. It's a revelation. — fishfry
By doing nothing more than auto-completing these games as text strings, — fishfry
To conclude, I have proven I can change the future indirectly by interrupting the flow of the present — Barkon
This one developed a higher level of understanding than it was programmed for, if you look at it that way. — fishfry
No internal model of any aspect of the actual game. — fishfry
more akin to a form of explicit reasoning that relies on an ability to attend to internal representations? — Pierre-Normand
I stand astonished. That's really amazing. — fishfry
Ownership laws are taking the place of the chieftain when it comes to people who stray from the ideals. — frank
Is it possible? Could it last if it happened? — frank
With the way you're answering I don't think you are capable of understanding what I'm talking about. It's like you don't even understand the basics of this. — Christoffer
No, the reason something can walk is because of evolutionary processes forming both the physical parts as well as the "operation" of those physical parts. — Christoffer
Yes, but my argument was that the only logical path available to us is to look at how our own consciousness formed through evolution, because that is a fact. — Christoffer
Therefore we can deduce either that all matter has some form of subjectivity and qualia, or that subjectivity emerges at some point of complexity in the evolution of life. — Christoffer
In order for a machine to have subjectivity, its consciousness would at minimum need to develop over time in the same manner as a brain does through evolution. — Christoffer
Nothing "emerges" from neural nets. You train the net on a corpus of data, you tune the weightings of the nodes, and it spits out a likely response. There's no intelligence, let alone self-awareness being demonstrated.
There's no emergence in chatbots and there's no emergence in LLMs. Neural nets in general can never get us to AGI because they only look backward at their training data. They can tell you what's happened, but they can never tell you what's happening. — fishfry
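For readers unfamiliar with the mechanism the quote refers to, here is a toy sketch of what "tune the weightings of the nodes" means in practice: gradient descent on a next-token objective. The model, corpus, and hyperparameters are placeholder assumptions, vastly smaller than any real LLM.

```python
import torch
import torch.nn as nn

# Toy next-token model: embed a token, map it to logits over the vocabulary.
# Everything here is a placeholder, not a real language model.
vocab_size, dim = 100, 32
model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Placeholder "corpus": a random token stream; each token predicts the next.
tokens = torch.randint(0, vocab_size, (1000,))
inputs, targets = tokens[:-1], tokens[1:]

for step in range(200):
    logits = model(inputs)        # model's guess at each next token
    loss = loss_fn(logits, targets)
    optimizer.zero_grad()
    loss.backward()               # gradient of the loss w.r.t. every weight
    optimizer.step()              # nudge the weights: "tuning the nodes"
```

Real LLMs run this same loop over billions of tokens with far larger models; the sketch only shows the mechanics.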
The end output is a bunch of symbols, which is inherently without value — NOS4A2
AI has one good effect, I think, in that it reveals how much we overvalue many services, economically speaking. There was a South Park episode about this. I can get quicker, cheaper, and better legal advice from an AI. I can get AI to design and code me an entire website. So in that sense it serves as a great reminder that many linguistic and symbolic pursuits are highly overrated, so much so that a piece of code can do them. — NOS4A2
I did; see above. — Lionino