Comments

  • Project Q*, OpenAI, the Chinese Room, and AGI
    wonderer1: "The acceleration in the development of AI, that I see as being likely, seems like something humanity is not well prepared for."

    Wearing my nerd hat, I am excited to see what's next, but I agree, there are reasons to be terrified. I should be careful what I wish for.

    It was nice that OpenAI was founded to get out in front of this and make sure that AI doesn't cause horrific disruptions. Of course, there are now questions about whether their priorities have changed. And they are just one actor in one nation. With nation-states still living in a state of nature, each is plowing ahead to be the first to gain (or avoid losing) the advantage. Ready or not, here it comes.
  • Project Q*, OpenAI, the Chinese Room, and AGI
    For some reason ChatGPT also wasn't being fed current information (I think everything was at least a year old). Recently they allowed it access to current events, but I think that's only for paid members. Not sure what the rationale behind the dated info was or is.
  • Project Q*, OpenAI, the Chinese Room, and AGI
    Sure, thanks for asking!

    Each word that ChatGPT spits out is really just a statistically plausible guess at what word might appear next in that context (context = the user's prompt and the other words it has already spit out). There are generally lots of words that might come next, and so long as it hits on one of them, it has done its job.
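
    To make that concrete, here is a minimal sketch of next-word sampling, with a toy vocabulary and made-up probabilities (nothing here is OpenAI's actual code; the words and numbers are invented for illustration):

        import random

        # Toy next-word sampler: a language model assigns a probability to
        # every candidate next word given the context; generation simply
        # draws from that distribution.
        def sample_next_word(distribution):
            """distribution: dict mapping candidate next words to probabilities."""
            words = list(distribution)
            weights = [distribution[w] for w in words]
            return random.choices(words, weights=weights, k=1)[0]

        # Hypothetical distribution for the context "The cat sat on the ..."
        next_word_probs = {"mat": 0.45, "sofa": 0.25, "floor": 0.20, "moon": 0.10}
        print(sample_next_word(next_word_probs))  # any of the four can appear

    For everyday speech, any of those continuations counts as a success.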

    You can probably see where this is going. Math doesn't quite work that way. Yes, there can be multiple next steps that are allowed, but the rules are far more rigid than with everyday speech. The simplest example gets this general point across well enough:

    2 + 3 =

    How did Q* solve this problem? Here's my wild guess....

    If the next word spit out has to be consistent not just with regularities culled from gobs of factual, faulty, and fanciful texts, but also with a model of [or, really, a model that coheres with] what the text describes, then its answers will be far more tightly constrained.

    If the next word has to be consistent with (a) regularities picked up from processing texts and (b) a mental model where, for instance, 3 matchsticks are added to 2 matchsticks, that greatly restricts the space of plausible next words.
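
    Here is a toy sketch of that wild guess (purely hypothetical; nobody outside OpenAI knows how Q* actually works). Candidate answers come from text statistics alone, and a crude "matchstick" world model then filters out any candidate it cannot reproduce:

        # A stand-in world model: add b matchsticks to a pile of a matchsticks
        # and count what's there. A real system would learn such models.
        def world_model_sum(a, b):
            pile = ["|"] * a + ["|"] * b
            return len(pile)

        # candidates: next-token guesses ranked by text statistics alone.
        def constrained_answer(a, b, candidates):
            truth = world_model_sum(a, b)
            return [c for c in candidates if c == truth]

        # The text model might rate several continuations of "2 + 3 =" as
        # plausible, but only one survives the consistency check.
        print(constrained_answer(2, 3, [5, 6, 23]))  # -> [5]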

    Given that there are spatial proofs of the Pythagorean theorem and lots more besides, this takes you a long way into grade-school math. But why get spooked about that (the way OpenAI got spooked by its little mathematician Q*)?

    A system that works as described would have lots of other capabilities. It could eventually understand why a mouse needn't worry about being trapped in a jack-o'-lantern. It could engage in forethought prior to taking action (that is, once it gets a body) so as to achieve its goals. It could generate and test hypotheses. And so on. It wouldn't just be reflecting human language back at us in ways that look smart. It would be smart.
  • Project Q*, OpenAI, the Chinese Room, and AGI
    Yes! Robots that learn from the world. A sensible stepping stone would be having them develop inner models of how things behave, models they can run offline prior to acting on the actual world. It starts with learning how to use their own bodies, and that work is succeeding wonderfully. With enough parallel processing hardware behind them (we must thank all those goofy gamers and crypto miners!), neural nets can almost always find a solution. OpenAI has been right in the thick of that too: https://openai.com/research/learning-dexterity
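
    The "run the inner model offline" idea can be sketched in a few lines (everything here is invented for illustration; it is not OpenAI's dexterity code): score candidate plans inside a learned model of the world, and only execute the best one.

        import random

        # Stand-in forward model: predicts the next state from the current
        # state and an action. A real robot would learn this from experience.
        def inner_model(state, action):
            return state + action

        def plan(state, goal, n_candidates=100, horizon=5):
            best_plan, best_error = None, float("inf")
            for _ in range(n_candidates):
                actions = [random.uniform(-1.0, 1.0) for _ in range(horizon)]
                s = state
                for a in actions:          # roll the plan forward *offline*
                    s = inner_model(s, a)
                error = abs(goal - s)      # how close does this plan get us?
                if error < best_error:
                    best_plan, best_error = actions, error
            return best_plan               # only now would the robot act

        print(plan(state=0.0, goal=2.5))

    This brute-force "imagine many plans, keep the best" loop is the crudest form of model-based planning, but it captures the offline-rehearsal idea.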

    I'm hoping they put the brains and brawn together within the next ten years.
  • Project Q*, OpenAI, the Chinese Room, and AGI

    Great question to get at the source of ChatGPT's ethical guardrails. Supposedly it takes great trickery to get it to betray its ethical 'programming,' which I imagine is just more training on texts. What strikes me as impressive is that OpenAI has somehow found a way to get a neural network to prioritize some forms of training over others, very much in the spirit of Asimov. In fact, though Asimov used the three laws to describe robot operating principles, he didn't think of them as being written out explicitly in some form of code. Rather, they were deeply embedded in the robots' positronic networks, much as we see with ChatGPT.

    I think he may have been right, as well, that someday artificial brains will organize the human economy (like the yogurt episode of Love, Death & Robots). It sounds (and could certainly become) dystopian, but it could also be the long-awaited alternative to the Marx vs. Smith box we feeble-minded humans have been trapped in for so long.
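
    One crude way to picture that "prioritizing some forms of training over others" idea (my guess at the spirit of it, not OpenAI's actual method) is a weighted training objective, where guardrail examples simply count for far more than ordinary text:

        # Toy weighted objective (hypothetical). Examples that teach the
        # guardrails get a much larger weight, so the network gives up a
        # little fluency in order to obey them.
        def training_loss(base_loss, guardrail_loss, guardrail_weight=10.0):
            return base_loss + guardrail_weight * guardrail_loss

        print(training_loss(base_loss=1.0, guardrail_loss=0.3))  # -> 4.0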

    Re: the Plato analysis...Indeed! Like you, I have been nothing short of flabbergasted at ChatGPT's ability to use information culled from oceans of text to generate lengthy, brilliant analyses such as this one about Forms. I know overcoming the vanishing gradient problem was a key part of this (so that it can keep track of what it said many paragraphs back). But I wouldn't have expected that alone to yield analyses that are so shockingly sound.

Jonathan Waskan
