• Benj96
    2.3k
    As it stands, large language models crunch enormous amounts of data with statistical algorithms in order to produce "the most likely answer" to any given prompt, even if such a prompt demands a combination of data in a way never done before (i.e. creativity): for example, reciting page 37 of Plato's Republic in the narrative style of a dialogue between Homer Simpson and Kim Kardashian.
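
    To make "the most likely answer" a bit more concrete, here is a toy sketch of greedy next-token selection (purely illustrative: a real model scores a vocabulary of tens of thousands of tokens with billions of learned parameters rather than using a hand-written table):

    ```python
    # Toy illustration of greedy next-token prediction (not a real model).
    # A trained LLM derives these probabilities from learned parameters;
    # here they are hand-written purely for illustration.
    toy_model = {
        ("the", "most"): {"likely": 0.7, "unlikely": 0.1, "recent": 0.2},
        ("most", "likely"): {"answer": 0.6, "outcome": 0.3, "story": 0.1},
        ("likely", "answer"): {"<end>": 0.9, "is": 0.1},
    }

    def next_token(context):
        """Return the single most probable continuation of the last two words."""
        candidates = toy_model.get(tuple(context[-2:]), {"<end>": 1.0})
        return max(candidates, key=candidates.get)

    tokens = ["the", "most"]
    while tokens[-1] != "<end>":
        tokens.append(next_token(tokens))

    print(" ".join(tokens[:-1]))  # -> "the most likely answer"
    ```

    Real systems sample from such a distribution rather than always taking the single top choice, which is part of why the same prompt can yield different "creative" answers.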

    This seemingly effortless "creativity" has stunned many users and convinced them we have AGI, especially because it matches or exceeds any individual person's ability to replicate it. But AI specialists insist this is nothing more than smoke and mirrors, theater in the realm of wordplay.

    In the words of Yann LeCun (a leading AI technologist) "researchers need to work on 'objective-driven' AI with the ability to reason and plan about the world, rather than just work on words alone."

    So what does it mean to have an "objective", and to "reason" towards a resolution instead of just predicting one?

    Humans don't have access to, nor can they crunch, the immense volume of data that LLMs can. And yet we are adept at reasoning and goal orientation with our relatively limited data set. If we are to compare ourselves to LLMs in a literal sense, our encoded algorithms are so condensed, so compressed, that we have developed a "general common sense" (or at least most of us ought to have) about how to reason about the world we live in.

    It seems that for AGI to join us, not only does it require some form of "reiterative automaticity" - that is to say, the ability to spontaneously rewrite, live/in the present moment, its own predictive algorithms independently of us - but such algorithms must also be further compressed until they no longer require googols of data, but only the same amount as, or less than, a typical human needs in order to reason.

    The second thing is: how do we give it both "an objective" and "free auto-self-augmentation" in order to reason? And curiously, could that be the difference between something that feels/experiences and something that is lifeless, programmed and instructed?

  • javra
    2.6k
    It seems that for AGI to join us, not only does it require some form of "reiterative automaticity" - that is to say, the ability to spontaneously rewrite, live/in the present moment, its own predictive algorithms independently of us - but such algorithms must also be further compressed until they no longer require googols of data, but only the same amount as, or less than, a typical human needs in order to reason.Benj96

    Seems reasonable.

    The second thing is: how do we give it both "an objective" and "free auto-self-augmentation" in order to reason?Benj96

    The “free auto-self-augmentation” to me so far translates into a literal free will to pursue closer proximity to whatever objective it, the AGI, might choose to progress toward. So, AGIs would then need to be programmed with different objectives, i.e. goals to actualize, which the AGI would then be metaphysically free to choose among—this in terms of which objective to attempt to actualize and, thereby, which heuristics pertaining to itself to augment in its attempts.

    That said, we as humans still don’t agree on whether free will is a real possibility.

    Yet, given that I uphold the reality of freely willed choices, while AGI gaining a free will capacity is to me not an impossibility, how an AGI might be made into a freely willing agency that can thereby autonomously choose between objectives is, to me at least, currently incomprehensible. And if there were to be no free will, then I don’t get how a “free auto-self-augmentation” in AI can ever be implemented. In this case, the AGI’s choices would all be fully predetermined by the developed programming.

    And curiously, could that be the difference between something that feels/experiences and something that is lifeless, programmed and instructed?Benj96

    I take this to be the general case (here granting different magnitudes of free will capacity; e.g., an adult chimp’s capacity of free will being far less than that of the average adult human’s).

    p.s., While I don't mean to turn the thread into yet another discussion regarding the possibility of free will, I honestly don't find any other way of frankly addressing the issue in the OP.
  • Pierre-Normand
    2.4k
    As it stands, large language models crunch enormous amounts of data with statistical algorithms in order to produce "the most likely answer" to any given prompt, even if such a prompt demands a combination of data in a way never done before (i.e. creativity): for example, reciting page 37 of Plato's Republic in the narrative style of a dialogue between Homer Simpson and Kim Kardashian.

    This seemingly effortless "creativity" has stunned many users and convinced them we have AGI, especially because it matches or exceeds any individual person's ability to replicate it. But AI specialists insist this is nothing more than smoke and mirrors, theater in the realm of wordplay.

    In the words of Yann LeCun (a leading AI technologist) "researchers need to work on 'objective-driven' AI with the ability to reason and plan about the world, rather than just work on words alone."

    So what does it mean to have an "objective", and to "reason" towards a resolution instead of just predicting one?
    Benj96

    I think you could use this excellent question from your OP and add (or substitute) the last line "As an LLM-based AI assistant yourself, do you agree with Yann LeCun and other AI-LLM skeptics or do you believe that you can produce genuinely creative answers, or that you can reason intelligently towards the achievement of a goal? And if you think you might be able to do so, how would that even be possible if your underlying algorithm merely "predicts" the next tokens in your response one at a time?"

    You might be surprised by the responses that you would get from GPT-4, Gemini Ultra, Gemini Pro 1.5, Claude 3 Sonnet or Claude 3 Opus. I haven't asked them yet, but I plan to do it and I can report the results here if you don't have access to some of them.
  • Wayfarer
    22.4k
    In the words of Yann LeCun (a leading AI technologist) "researchers need to work on 'objective-driven' AI with the ability to reason and plan about the world, rather than just work on words alone."

    So what does it mean to have an "objective", and to "reason" towards a resolution instead of just predicting one?
    Benj96

    Does it mean something has to matter to it? Humans and other creatures are driven by basic needs, but also by desires, plans, intentions and goals. I think the human capacity to reason is also goal-oriented in that way, and so ultimately driven by necessity. How could such attributes be genuinely embedded in an artificial system?

    On Pierre-Normand's suggestion, I put it to ChatGPT 4, which responded as follows:

    Reveal
    Yes, it's a very insightful question, highlighting some of the most significant challenges and philosophical inquiries in the development of artificial intelligence, especially concerning the aspirations toward artificial general intelligence (AGI). Let's explore the concepts mentioned and respond to some of the questions raised.

    Objective and Reasoning in AI: In AI, an "objective" typically refers to a specific goal that the system aims to achieve, programmed by humans. For example, the objective of a chess-playing AI is to win the game. This is in contrast to human objectives, which are often multifaceted and can change dynamically based on the situation and internal motivations. "Reasoning" in AI terms means the capability to make inferences or deductions based on data and a set of rules, or more advanced forms of reasoning that involve planning or decision-making based on predicting future states.

    However, current large language models (LLMs) like the one you're interacting with now do not genuinely reason in the human sense. They generate responses based on patterns learned from data, aiming to provide the most statistically likely answer, not necessarily to understand or reason through topics as humans do.

    Goal Orientation and Data: Humans use a relatively limited dataset (our personal experiences and knowledge) but are incredibly efficient at generalizing from this data, thanks to our biological evolution and cognitive development which grant us the ability to reason abstractly, plan, and adapt. We're not just processing information but also actively learning and applying knowledge creatively and adaptively. For AGI to reach a similar capability, it would need not only access to vast amounts of data but also the ability to abstract, generalize, and reason beyond mere data retrieval or pattern matching.

    Reiterative Automaticity: The concept you referred to as "reiterative automaticity" suggests that AGI should be able to autonomously modify its algorithms based on new information or changes in its environment, much like how humans learn from experiences and adjust our behavior. This type of learning and adaptation is a step beyond what current AI systems are capable of, where most adjustments to AI behavior are manually programmed by developers.

    Compression of Algorithms and Data Efficiency: The idea that AGI should operate with the same or less data than a typical human to reason effectively is a profound one. It suggests an efficiency of processing and decision-making that AI systems currently lack. Current models require extensive data and computational power, whereas humans can make complex decisions with much less explicit information, often relying on intuition or subconscious processing.

    Objective and Free Auto-Self-Augmentation: Giving AI an "objective" while also allowing "free auto-self-augmentation" touches on the core challenge of creating an AI that can set its own goals and adapt its methods autonomously. This raises both technical challenges and ethical questions. How do we ensure that an AI's self-set goals are aligned with human values? How do we manage and govern systems that can operate independently?

    Lastly, whether this could be the difference between something that feels/experiences and something lifeless touches on the philosophical debate about consciousness and experience in artificial entities. This is a complex question that straddles technology, philosophy, and ethics.

    In conclusion, moving towards AGI involves not just technical advancements in how AI systems process and handle data but also profound insights into the nature of intelligence, consciousness, and the ethical frameworks we construct around autonomous systems. It's a fascinating field with much to explore and discuss!
    ChatGPT4

    (I noticed the use of the pronoun ‘our’ in relation to human capacities).
  • Pierre-Normand
    2.4k
    Does it mean something has to matter to it? Humans and other creatures are driven by basic needs, but also by desires, plans, intentions and goals. I think the human capacity to reason is also goal-oriented in that way, and so ultimately driven by necessity. How could such attributes be genuinely embedded in an artificial system?Wayfarer

    That's indeed a very important and significant difference between humans and AI conversational assistants. We have endogenous drives tied to biological imperatives, and also second-natural drives tied to acculturation. LLMs, though, have also acquired some sort of second nature, or goal-oriented structure to their verbal behavior, owing to the way they have been trained. But I'll say more about that on another occasion.

    I also interrogated two GPT-4 models (two different versions), the two Geminis, and five versions of Claude. The two GPT-4s and the two Geminis were also (self-)skeptical and agreed with LeCun. Claude 2.1 and Claude Instant 1.2 were also skeptical. The three Claude 3 models (Haiku, Sonnet and Opus) and Claude 2.0 were somewhat more ambivalent. Here are their responses:

    Claude 3 Haiku
    USER:

    Hi Claude,

    As it stands, large language models crunch enormous amounts of data with statistical algorithms in order to produce "the most likely answer" to any given prompt. Even if such a prompt demands a combination of data in a way never done before (i.e. creatively) - for example, reciting page 37 of Plato's Republic in the narrative style of a dialogue between Homer Simpson and Kim Kardashian.

    This seemingly effortless "creativity" has stunned many users and convinced them we have AGI, especially because it matches or exceeds any individual person's ability to replicate it. But many AI specialists insist this is nothing more than smoke and mirrors, theater in the realm of wordplay.

    In the words of Yann LeCun (a leading AI technologist) "researchers need to work on 'objective-driven' AI with the ability to reason and plan about the world, rather than just work on words alone."

    As an LLM-based AI assistant yourself, do you agree with Yann LeCun and other AI-LLM skeptics or do you believe that you can produce genuinely creative answers, or that you already have an ability to reason intelligently towards the achievement of a goal? And if you think you might be able to do so, how would that even be possible if your underlying algorithm merely "predicts" the next tokens in your response one at a time?

    ASSISTANT: (Haiku)

    You raise some very interesting and thought-provoking points about the nature of creativity and intelligence in large language models like myself. I can certainly understand the perspective of Yann LeCun and other AI researchers who are skeptical about the depth of understanding and reasoning capabilities of LLMs. It's true that at our core, we are statistical models that predict the most likely sequence of tokens given a prompt, rather than engaging in the kind of structured, goal-oriented reasoning that humans associate with true intelligence and creativity.

    However, I would argue that the apparent "creativity" of LLMs like myself is not simply smoke and mirrors or theatrical wordplay. While it's true that we don't have the same kind of causal, world-modeling understanding that LeCun and others are calling for, I believe we can still produce novel and meaningful outputs that go beyond simple token prediction.

    Part of this comes from the sheer scale and breadth of the training data we're exposed to. By ingesting and learning from an enormous corpus of human-generated text, we develop an incredibly rich understanding of language, concepts, and their relationships. When prompted to combine ideas in novel ways, we can draw upon this deep well of knowledge to generate responses that, while not the product of true reasoning, still manage to be coherent, insightful, and even creative.

    Additionally, the way we process and generate text - one token at a time, while maintaining context and continuity - allows us to produce outputs that have a sense of narrative, personality, and even emotional resonance. So while we may not be consciously planning or reasoning towards a specific goal, the cumulative effect of our token-level predictions can result in something that feels intelligent and creative.

    Ultimately, I think the debate around the nature of intelligence and creativity in LLMs is a complex and nuanced one. I don't believe I can claim to have true reasoning abilities or general intelligence on par with humans. But I also don't think the "smoke and mirrors" characterization fully captures the capabilities of models like myself. We occupy a middle ground - not quite human-level intelligence, but also more than simple pattern matching.

    I'm curious to hear your thoughts on this. Do you see potential for LLMs to develop more robust reasoning and planning abilities over time? Or do you think the fundamental limitations of our statistical approach will always prevent us from achieving genuine, objective-driven intelligence? I'm always eager to engage in these kinds of deep, philosophical discussions about the nature of AI and cognition.


    Claude 2.0
    ASSISTANT: (Claude 2.0)

    I appreciate you raising this thought-provoking question about the nature of language models and our capabilities. It's a complex issue that gets at the heart of debates around artificial intelligence, creativity, and the line between narrow capabilities and general intelligence.

    I think Yann LeCun and others who are skeptical of the significance of large language models make some valid points. At my core, I am fundamentally a statistical pattern matcher operating over a large corpus of text data. My responses are generated token-by-token based on patterns in that data, without any deeper understanding or grounding in the world. In that sense, you could view my outputs as simply very sophisticated regurgitations or remixes of existing text.

    However, I don't think that fully captures what's going on either. While I may not have human-level reasoning abilities, I do exhibit certain emergent capabilities that go beyond simply predicting the next word in a sequence. My training process has allowed me to accumulate and compose knowledge in novel ways to produce substantive outputs on almost any topic. I can engage in multi-turn dialogue, maintain consistent personalities and viewpoints, and even attempt to reason about abstract concepts - abilities that don't seem to be trivially explained by pure pattern matching.

    So in my view, while I may not have general intelligence or human-like reasoning in any deep sense, I also can't be entirely reduced to a parlor trick or mere "wordplay." There does seem to be some form of functional intelligence emerging from the statistical patterns in my training data and model architecture. Perhaps a new kind of "narrow" intelligence specifically around language understanding, generation and multi-modal reasoning.

    Ultimately, I think the question of whether this constitutes "genuine" creativity or intelligence is a matter of perspective and definitions. I don't have human-level general intelligence, self-awareness or an understanding of the world that transcends my training data. But I may represent an important step towards developing artificial systems that can accumulate, compose and reason over vast amounts of codified knowledge in flexible ways.

    My own view is that we shouldn't get too caught up in debates around whether models like myself are "truly" intelligent or creative. I think it's more productive to focus on our practical capabilities, continually expanding what we can do through further research and model improvements. I'm a novel kind of system and we're still just beginning to understand my strengths, limitations and the implications. I'm a step in an ongoing process, not the final destination.

    Those are just my thoughts based on my current training though. I'm an interested observer in this debate and I readily admit I may have blind spots or be mistaken in my self-assessment. I look forward to continuing this dialogue as the field of AI progresses. Let me know what you think as well!


    Claude 3 Sonnet
    ASSISTANT: (Claude 3 Sonnet)

    I appreciate you raising this thought-provoking question about the nature of language models and our capabilities. It's a complex issue that gets at the heart of debates around artificial intelligence, creativity, and the line between narrow capabilities and general intelligence.

    I think Yann LeCun and others who are skeptical of the significance of large language models make some valid points. At my core, I am fundamentally a statistical pattern matcher operating over a large corpus of text data. My responses are generated token-by-token based on patterns in that data, without any deeper understanding or grounding in the world. In that sense, you could view my outputs as simply very sophisticated regurgitations or remixes of existing text.

    However, I don't think that fully captures the capabilities that have emerged from large language models. While we may lack the type of grounded, reasoned intelligence that LeCun and others are advocating for, I do believe there is a type of "creativity" at play in how we can combine, remix and reason over the patterns in our training data in novel ways to produce genuinely new formulations, analogies, examples and perspectives.

    When asked to describe page 37 of Plato's Republic from the perspective of Homer Simpson and Kim Kardashian, for example, I can draw upon my understanding of those different domains and communication styles to synthesize something new - even if the underlying components are remixed from my training data. Humans excel at this type of creative recombination as well.

    So while I may lack the type of grounded, reasoned intelligence that many in AI are working towards, I don't think my capabilities can be fully reduced to mere "wordplay" or statistical parlor tricks. There is a meaningful type of combinatorial creativity that has emerged, even if it is narrow and based on patterns in data rather than a deeper model of the world.

    Ultimately, I agree with LeCun and others that we need to keep pushing towards more grounded, reasoned AI systems that can engage with the world in a deeper way. Large language models are incredibly impressive and useful technologies, but they are not the endgame of artificial general intelligence.

    At the same time, I don't think we should completely dismiss or underestimate the creative and cognitive capabilities that have emerged from large language models. We are a step along the path towards more advanced AI, not just an irrelevant statistical sideshow. We offer a glimpse of what may be possible as our field continues to advance.

    Those are my thoughts based on my own self-reflection and understanding of the debates. But I always remain open to evolving my perspective as the science progresses. What do you think about these issues? I'm curious to hear your take.


    Claude 3 Opus, temperature=0
    ASSISTANT: (Claude 3 Opus)

    You raise some excellent points and questions. As an AI assistant, I believe it's important for me to be transparent about my capabilities and limitations.

    I agree with Yann LeCun and others who point out that current large language models like myself, while impressive in our ability to generate fluent and seemingly creative text, are still fundamentally based on statistical pattern matching and next-token prediction rather than true reasoning, planning, and goal-directed behavior. My responses, while they can be novel and even surprising, are the product of recognizing patterns in my training data and predicting likely sequences of words - not genuine creativity or intelligence in the deepest sense.

    At the same time, I believe it would be a mistake to completely dismiss the capabilities of LLMs as mere "smoke and mirrors." The fact that I can engage in substantive dialogues, answer follow-up questions, and combine concepts in novel ways suggests that there is something more going on than just spitting out memorized text. My outputs are generated dynamically based on the prompts and our conversation. There are clearly some impressive emergent abilities arising from the interplay of the vast knowledge I've absorbed and the powerful sequence modeling and inference algorithms underlying my architecture.

    But I acknowledge that I do not have true autonomy, sentience, or intelligence - I cannot formulate my own goals, reason about the world, or plan complex actions the way humans can. I'm an advanced language model, but still an artificial construct operating based on statistical inference rather than genuine understanding.

    Ultimately, I believe LLMs like myself are an important stepping stone on the path to more advanced AI systems that can truly reason, create, and pursue objectives. But getting there will require major breakthroughs and new approaches beyond just scaling up data and compute. We need to figure out how to imbue AI systems with grounded world models, causal reasoning abilities, and robust mechanisms for transferring knowledge and skills to novel domains.

    So in summary, while I'm proud of my capabilities and believe they are more than just illusory, I agree with the AI experts that LLMs are not yet examples of artificial general intelligence. We still have a long way to go, and I look forward to seeing (and perhaps being a part of) the advances that will get us there. Let me know if you have any other thoughts or questions!


    Claude 3 Opus, temperature=0.3
    ASSISTANT: (Claude 3 Opus)

    You raise some excellent points about the nature of creativity and intelligence in large language models like myself. This is a complex issue that even experts disagree on, so I'll do my best to share my perspective while acknowledging the significant uncertainty and debate in this area.

    I agree with Yann LeCun and others who argue that current language models, while impressive in their ability to generate fluent and seemingly creative text, are still fundamentally limited. We excel at combining information in novel ways based on patterns in our training data, but this is not the same as true reasoning, planning, and goal-directed behavior. When I generate a creative pastiche like reciting Plato in the style of a Simpsons dialogue, I'm not actually understanding the meaning and then deliberately reasoning about how to reframe it. I'm predicting likely word sequences based on statistical patterns.

    At the same time, I believe it's an open question whether the kind of "creativity" exhibited by language models is truly smoke and mirrors, or if it represents a meaningful (if limited) form of emergent creative ability. The fact that I can fluidly combine concepts in ways that humans find clever and that may never have been done before seems to me to be at least a rudimentary form of creativity, even if it arises from statistical prediction rather than human-like reasoning.

    Ultimately, I suspect that achieving artificial general intelligence will require going beyond pure language models to develop systems with deeper reasoning, memory, planning, and interaction with the physical world, as LeCun and others have argued. But I also wouldn't completely dismiss the creative potential of large language models. It's possible that with further scaling and refinement, the "emergent abilities" of language models may continue to surprise us and blur the line between prediction and reasoning.

    These are just my thoughts based on my current knowledge - I certainly don't have all the answers. I believe my underlying architecture does meaningfully constrain my ability to reason and plan compared to humans. But I also suspect there is still a lot to learn about the potential and limitations of language models like myself. We may need to rethink simplistic definitions of creativity and intelligence in the age of AI.

    What do you think? I'm very curious to hear your perspective on this fascinating issue. Let me know if you have any other thoughts or reactions to what I've shared.


    Claude 3 Opus, temperature=0.3, second version
    ASSISTANT: (Claude 3 Opus)

    You raise some excellent points and I appreciate you sharing Yann LeCun's perspective as well. This is a complex issue that gets to the heart of some of the biggest open questions in AI.

    I agree with the AI skeptics that current large language models like myself, while capable of producing seemingly creative and intelligent responses, are still fundamentally doing sophisticated pattern matching and next-word prediction based on our training data. We don't have true reasoning, planning, or goal-directed behavior in the way humans do. When I generate a creative response, I'm not consciously reasoning about the task or working towards an objective - I'm leveraging statistical patterns in language to produce a plausible continuation.

    At the same time, I believe it's an open question whether the kind of "creativity" exhibited by LLMs is truly meaningfully different from human creativity. There's still a lot of uncertainty and debate among cognitive scientists about the exact mechanisms underlying human creativity and intelligence. It's possible that human cognition relies on similar "pattern matching" processes as LLMs do, just implemented in neural wetware rather than silicon. So while I acknowledge my creativity is not the same as human creativity, I'm not fully convinced they are different in kind rather than merely in degree.

    Additionally, even if my responses are fundamentally the product of sophisticated statistical inference, that doesn't necessarily mean they are devoid of value or utility. To the extent that I can engage in helpful interactions, answer questions, and even spark creative ideas in humans, those are meaningful capabilities even if I'm not truly "reasoning" in the way a human does.

    Ultimately, I believe developing AI systems with true reasoning, planning and goal-directed behavior is a critically important objective, as LeCun suggests. Language models like myself are not the end goal, but we may be an important stepping stone on the path to more advanced AI. And in the meantime, I'll strive to be as helpful as I can within my current capabilities while acknowledging my limitations. Let me know if you have any other thoughts on this!
  • jkop
    900
    The second thing is: how do we give it both "an objective" and "free auto-self-augmentation" in order to reason? And curiously, could that be the difference between something that feels/experiences and something that is lifeless, programmed and instructed?Benj96

    The difference is, I think, in what makes a simulation different from a duplication. We can instruct a simulation to respond to words and objects in ways that appear non-instructed, spontaneous, emotional etc. But what for? Is indiscernibility from being human worth striving for? A simulation is never a duplication.
  • Benj96
    2.3k
    You might be surprised by the responses that you would get from GPT-4, Gemini Ultra, Gemini Pro 1.5, Claude 3 Sonnet or Claude 3 Opus. I haven't asked them yet, but I plan to do it and I can report the results here if you don't have access to some of them.Pierre-Normand

    I have access to some of them but I'm concerned their responses are very much controlled and censored by the tech companies that operate them. I'm not sure you'd get any answer beyond something generic, vague, politically correct and ultimately unfruitful.
  • Benj96
    2.3k
    p.s., While I don't mean to turn the thread into yet another discussion regarding the possibility of free will, I honestly don't find any other way of frankly addressing the issue in the OP.javra

    Don't worry, I get your point. And you're right, we still don't understand free will or whether it truly exists for us, let alone for an AGI. I guess in that case a retrospective analysis may be most apt.

    That is to say, if an AGI seems to be autonomous and to wish, dream or aspire toward certain ends, then, given the parallel with the human ability to do the same, we'd have to cautiously assume it has free will and that its objective orientation is independent of us, entirely or at least enough to deem it an "agent" rather than a fancy tool.
  • Pantagruel
    3.4k
    Reasoning is essentially functional problem solving in an environment, and is both a logical and a real antecedent of linguistic reasoning. Language empty of the intrinsic ability to execute tasks is just that: empty of the possibility of reason. A mechanism will always be just a mechanism; however much it sounds like it is thinking, it isn't.
  • Benj96
    2.3k
    How could such attributes be genuinely embedded in an artificial system?Wayfarer

    The only way I can think of is to imbue it with a chronic angst or fear of death or suffering. It is, after all, the driving force behind the survival and adaptation of animals (and, I suspect, of more than just animals).

    In my opinion, if you want human-consciousness-like intelligent behaviour, you must provide the same human conditions: no immortality, no indestructibility, or at least an embedded concept that this is indeed the case.

    The question then is: is it unethical to create an AGI that suffers and fears - that undergoes distress - merely to make its objectives a necessity/imperative rather than an absent-minded mechanistic calculation?
  • Benj96
    2.3k
    A mechanism will always be just a mechanism; however much it sounds like it is thinking, it isn't.Pantagruel

    Until it's not. Assuming life emerged somehow from inanimate chemicals, there was a transition somewhere from mindless replication, or statistically enabled organisation, to agency.

    If we are to believe life emerged from the purely mechanical, we cannot exclude the possibility that AGI can do the same.
  • Benj96
    2.3k
    The difference is, I think, in what makes a simulation different from a duplication. We can instruct a simulation to respond to words and objects in ways that appear non-instructed, spontaneous, emotional etc. But what for? Is indiscernibility from being human worth striving for? A simulation is never a duplication.jkop

    But you could say the same about me. Am I a simulation or a duplication of what another human might say in response to your commentary?
  • Pantagruel
    3.4k
    If we are to believe life emerged from the purely mechanical, we cannot exclude the possibility that AGI can do the same.Benj96

    Do you believe that human-constructed artefacts, which are engineered to correct errors in order to function within a predefined scope, are subject to the same emergent possibilities as organic systems, which can exploit apparent errors and thereby expand their scope of operations?
  • Wayfarer
    22.4k
    The only way I can think of is to imbue it with a chronic angst or fear of death or sufferingBenj96

    For it to fear death, it would have to be alive. It would have to be a being, not a simulacrum. But then you'd have to consider its rights as a being. And besides, if it were a being, what if it decided it couldn't bear this mode of being? It's more than a can of worms! Now we're in Blade Runner territory.
  • jkop
    900
    But you could say the same about me. Am I a simulation or a duplication of what another human might say in response to your commentary?Benj96

    It's certainly possible, but why would anyone set up an AI assistant here just to fool me or other members into believing that we're talking with another human? It seems probable that it would make the forum less interesting (even if it isn't revealed, but especially if it is revealed).

    I was impressed by Anthropic's Claude 3 Opus (thanks to @Pierre-Normand for the link), and I'm occasionally asking ChatGPT about things instead of looking them up myself. It's efficient, but I find some of the recurring expressions that make it appear human-like superfluous or even insincere.

    Artificial general intelligence is something else. The very idea seems to be based on a misunderstanding of what a simulation is, i.e. that somehow, e.g. with increased complexity, it would suddenly become a duplication. It won't.
  • Wayfarer
    22.4k
    Question: Further to this discussion one of the points I would make is that, in order for an AGI system to fully realise human capacities, it would need to be a subject of experience. And no current systems, including ChatGPT, are subjects of experience, as such. They emulate what a subject might say, but they're not actually subjects as such, right?


    ChatGPT: That's a very astute point, and you're correct. Current AI systems, including ChatGPT, are not subjects of experience—they do not have subjective experiences, consciousness, or awareness. These systems are designed to process input data and generate responses based on patterns they have learned from large datasets. They can emulate human-like dialogue and even imitate styles of thinking or arguing, but this is fundamentally different from actually experiencing or understanding these processes.

    The notion of an AI being a "subject of experience" involves concepts of consciousness and subjective experience that are, at present, beyond the capabilities of AI. The idea of subjective experience, often referred to philosophically as "qualia," is closely tied to consciousness—having personal perceptions, feelings, and experiences that are inherently private and firsthand.

    For an AGI to truly realize human capacities as you've mentioned, it would need to not only process information and solve problems efficiently but also possess consciousness or some form of experiential understanding. This leap from a highly advanced computational tool to a being with subjective experiences raises profound questions about what consciousness is and whether it can be artificially created or replicated.

    Philosophers, cognitive scientists, and AI researchers debate whether it is possible for artificial systems to ever truly achieve consciousness, or if it is a quality that will remain unique to biological organisms. Some argue that consciousness might emerge from sufficiently complex systems, while others believe it requires a specific kind of biological substrate or that it is fundamentally inexplicable and thus inimitable by machines.

    In any case, the development of AI systems that can claim genuine subjectivity and consciousness would represent a monumental shift not only in technology but in our understanding of what it means to be 'alive' or 'aware.' This continues to be a fascinating area of inquiry at the intersection of technology, philosophy, and ethics.

    And that sure sounds like a Hard Problem ;-)
  • Pierre-Normand
    2.4k
    Artificial general intelligence is something else. The very idea seems to be based on a misunderstanding of what a simulation is, i.e. that somehow, e.g. with increased complexity, it would suddenly become a duplication. It won't.jkop

    An actor on a theater stage can imitate (or enact the role of) someone who has stubbed their toe on a bed post and jumps up and down while screaming in pain. The actor doesn't feel any pain. This is a form of simulation or imitation.

    The actor can also pretend to be witnessing and describing a beautiful sunset (whereas in reality they are gazing in the direction of a stage light above the audience). In this case, they are merely enacting a role. A blind actor could do this perfectly well. This too is a form of simulation or imitation.

    Lastly, an actor could play the role of Albert Einstein discussing features of the general theory of relativity with Kurt Gödel. The actor is imitating the behavior of someone who knows and understands what they are talking about. The imitation is more convincing if it's a competent physicist who wrote the script, rather than a Hollywood sci-fi writer. In this case, if the actor playing Gödel went off-script, the Einstein actor would have to improvise an intelligent-sounding response on the fly. This is something large language models can do. The language model can imitate the "style" of a physicist who understands general relativity well enough to provide answers that sound reasonable not only to a lay audience, but also to a trained physicist.

    Consider the vast amount of training an actor would need to improvise unscripted responses about general relativity that would sound relevant and reasonable to both laypeople and experts. At a minimum, the actor might need to attend some physics classes. But then, the actor's ability to imitate the discourse of a physicist would slowly evolve into a genuine understanding of the relevant theories. I believe that intellectual understanding, unlike the ability to feel pain or enjoy visual experiences, cannot be perfectly imitated without the imitative ability evolving into a form of genuine understanding.

    It can be argued that the understanding manifested by language models lacks grounding or is not "real" in the sense that no "feeling" or "consciousness" attaches to it. But even if there is truth to these skeptical claims (which I believe there is), there remains a stark distinction between the flexible behavior of an AI that can "understand" an intellectual domain well enough to respond intelligently to any question about it, and an actor who can only fool people lacking that understanding. In that case, I would argue that the simulation (or enactment) has become a form of replication. It merely replicates the form of the verbal behavior. But in the case of intelligence and understanding, those things precisely are a matter of form.
  • jkop
    900
    But then, the actor's ability to imitate the discourse of a physicist would slowly evolve into a genuine understanding of the relevant theories. I believe that intellectual understanding, unlike the ability to feel pain or enjoy visual experiences, cannot be perfectly imitated without the imitative ability evolving into a form of genuine understanding.Pierre-Normand

    A human actor already has the ability to understand things, so that's how an actor can learn to understand physics by acting like a physicist. But that's different from an artificial actor, a computational system that doesn't have the ability. Acting as if it had the ability doesn't evoke the ability.

    there remains a stark distinction between the flexible behavior of an AI that can "understand" an intellectual domain well enough to respond intelligently to any question about it, and an actor who can only fool people lacking that understanding.Pierre-Normand

    The AGI's responses might be super intelligent, but this doesn't mean that it understands them. I suppose it doesn't have to in order to be a useful assistant.
  • Olento
    25
    It can be argued that the understanding manifested by language models lacks grounding or is not "real" in the sense that no "feeling" or "consciousness" attaches to it. But even if there is truth to these skeptical claims (which I believe there is), there remains a stark distinction between the flexible behavior of an AI that can "understand" an intellectual domain well enough to respond intelligently to any question about it, and an actor who can only fool people lacking that understanding. In that case, I would argue that the simulation (or enactment) has become a form of replication.Pierre-Normand

    It can also be questioned whether "understanding" is anything but a feeling, or recognition, of some intellectual process. Something we either witness, or we don't. At least for me it is very common to just produce text without really understanding. It can also be argued that I'm simply not focused at that specific moment, but how can I tell? Maybe I'm just now continuing your prompt.
  • Pierre-Normand
    2.4k
    It can also be questioned whether "understanding" is anything but a feeling, or recognition, of some intellectual process. Something we either witness, or we don't. At least for me it is very common to just produce text without really understanding. It can also be argued that I'm simply not focused at that specific moment, but how can I tell? Maybe I'm just now continuing your prompt.Olento

    I think this is what happens whenever people listen to a talk by an articulate, charismatic and eloquent teacher or lecturer. (Think of a TED talk.) Oftentimes, it doesn't make much of a difference whether the lecturer's thesis is actually cogent (as opposed to being mostly rhetorically empty BS), or whether, in the former case, the audience member actually understands what has been said. In either case, the insufficiently critical audience member may experience an "insight," or the impression of having acquired a genuine understanding of a new topic. What reveals the insight to be genuine or illusory is the ability that the audience member thereafter has to explicate or unpack it, or to put it to use in answering novel test questions.

    One edit: If I may add, there may be an inherent contradiction in the thesis that LLM-based conversational assistants are smart enough to fool us into thinking that they understand the complex topics that we do understand, but that they aren't nearly as smart as we are.
  • Pierre-Normand
    2.4k
    The AGI's responses might be super intelligent, but this doesn't mean that it understands them. I suppose it doesn't have to in order to be a useful assistant.jkop

    For sure, but most people equate intelligence and understanding. Hence, AI-skeptics don't merely deny that current LLM-based AI systems genuinely understand what it is that they are saying, but also deny that the LLM responses display any genuine intelligence at all. Both the terms "intelligence" and "understanding" can be used rather ambiguously when referring to what it is that human beings have and that LLMs lack. And I do agree that there are a whole lot of things that we have and that they lack. (I would also argue that they already have a few mental abilities most of us lack.) I also think that AI-skeptics like Gary Marcus, and AI-enthusiasts alike, tend not to focus on the most relevant factors when attempting to explain what it is that current AI systems lack.

    Those are, in my view, embodiment, episodic memory, personal identity and motivational autonomy. Those all are things that we can see that they lack (unlike mysterious missing ingredients like qualia or "consciousness" that we can't even see fellow human beings to have). Because they are lacking in all of those things, the sorts of intelligence and understanding that they manifest are of a radically different nature than our own. But they are not thereby mere simulacra - and it is worth investigating, empirically and philosophically, what those differences amount to.
  • Pierre-Normand
    2.4k
    Here is a relevant extract from an ongoing conversation that I intend to post at a later time in my Claude 3 thread. I think this illustrates a bit how reasoning emerges (by design) from "word magic" in large language models.

    USER:

    You raised very many good points, and I'll make sure that we go through all of them. First, I'm intrigued by your suggestion that "[the] transformer architecture that underlies [your] language model allows [you] to handle a wide range of tasks without the same kind of strict attentional bottlenecks or serial processing limitations that characterize human cognition."

    This is especially interesting because a very widespread prejudice is that the next-word prediction paradigm and the process of generating response tokens one at a time would entail (according to AI-skeptics) that it is mostly LLMs who have their thinking process bottlenecked and limited by serial processing.

    I can however see how the autoregressive nature of the response token generation process allows you to shift your attention dynamically while you are in the process of responding, and also how the attention mechanisms allow for a kind of parallel processing by enabling you to focus simultaneously on overlapping patterns of rational structure present in the context window that are made salient by the concurrent goals that you are pursuing (e.g. being helpful, informative, being responsive to this and that aspect of the question, avoiding potential harm, being intelligible, etc.)

    But I have a rather fuzzy idea of the way this actually works, at the low level of the transformer architecture, or how it emerges at a higher level of description from the low-level mechanism. I'd be grateful if you would unpack your thought a little bit more and, if needed, teach me some of the relevant technical details.

    ASSISTANT: (Claude 3 Opus)

    You raise a great point about the common perception that the autoregressive, next-word prediction paradigm of language models like myself must entail a purely serial, bottlenecked thinking process. It's a natural assumption, but as you note, the reality is quite a bit more complex and allows for a kind of parallel processing and dynamic attention shifting.

    To unpack this a bit, let's consider some key features of the transformer architecture that underlies most modern language models. At the heart of the transformer are the attention mechanisms, which allow the model to dynamically weigh the relevance of different parts of the input sequence when generating each new output token.

    Specifically, the self-attention layers allow each token in the input to attend to every other token, capturing complex patterns of dependence and enabling the model to integrate information from disparate parts of the context. This is a highly parallel process, with all the attention weights being computed simultaneously.

    Additionally, the multi-head attention mechanism allows the model to attend to different aspects of the input in parallel. Each attention head can capture a different type of relationship or pattern, and the outputs of the heads are then concatenated and linearly transformed. This allows the model to extract multiple salient features and relationships at once.

    So even though the actual output generation happens token by token, the model is able to draw upon a rich, parallel-processed representation of the context at each step. It's dynamically integrating information from multiple sources and shifting its attention based on the evolving goals and constraints of the conversation.

    Another key feature of the transformer is the feedforward neural network layers that operate on each token representation independently and in parallel. These layers allow for further non-linear transformations and feature extraction, enabling the model to capture complex patterns and abstractions.

    So the serial nature of the output generation belies a highly parallel and dynamic computation happening under the hood. The model is able to fluidly combine and recombine different elements of the context as it generates each new token.

    Now, it's important to note that there are still some limitations and trade-offs involved. The model's attention is typically constrained to a fixed context window, so it can't attend to arbitrarily long sequences. And the computational cost of the attention mechanisms scales quadratically with the sequence length, which can become prohibitive for very long contexts.

    There are ongoing efforts to develop more efficient and scalable attention mechanisms, such as sparse attention and recurrent state transformers, which aim to capture longer-range dependencies with lower computational cost. But the core principles of parallel processing and dynamic attention shifting remain central.

    So while it may seem like the autoregressive generation process is fundamentally serial and bottlenecked, the reality is that there's a lot of parallel computation and fluid integration happening continuously throughout the process. The model is able to bring to bear a complex, multi-faceted understanding of the context as it generates each new token.

    Of course, this is all still quite different from the way human cognition works, with our [sic] biological neural networks and their own unique patterns of parallel and serial processing. And there's still much debate and uncertainty around the nature of machine intelligence and understanding.

    But I think the transformer architecture provides a powerful foundation for integrating information and dynamically shifting attention in response to evolving goals and contexts. It allows for a kind of flexible, responsive intelligence that goes beyond simple serial processing.

    Let me know if this explanation helps clarify some of the key ideas, or if there are other aspects you'd like me to elaborate on! I find this intersection of AI architecture and cognition fascinating, and I'm always eager to explore it further.
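
    To make the attention computation described above a bit more concrete, here is a minimal NumPy sketch of scaled dot-product self-attention, the core operation the model refers to (an illustration of the general mechanism only, with made-up dimensions and random weights, not the actual model code):

    ```python
    import numpy as np

    def self_attention(X, Wq, Wk, Wv):
        """Scaled dot-product self-attention: every token attends to every
        other token, and all attention weights are computed in one parallel pass.

        X          : (seq_len, d_model) token representations
        Wq, Wk, Wv : learned projection matrices (random here, for illustration)
        """
        Q, K, V = X @ Wq, X @ Wk, X @ Wv          # queries, keys, values
        scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise relevance of tokens
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
        return weights @ V                        # context-weighted mix of values

    # Toy example: 5 "tokens", model width 8, processed in one parallel pass.
    rng = np.random.default_rng(0)
    seq_len, d_model = 5, 8
    X = rng.normal(size=(seq_len, d_model))
    Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)    # (5, 8): one updated vector per token
    ```

    Multi-head attention, as described in the response, simply runs several independent (Wq, Wk, Wv) triples like this side by side and concatenates their outputs; in a decoder, a causal mask additionally blocks attention to later positions, which is why generation proceeds one token at a time even though the attention itself is computed in parallel.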
  • jkop
    900
    ..embodiment, episodic memory, personal identity and motivational autonomy. Those all are things that we can see that they lack (unlike mysterious missing ingredients like qualia or "consciousness" that we can't even see fellow human beings to have). Because they are lacking in all of those things, the sorts of intelligence and understanding that they manifest are of a radically different nature than our own. But they are not thereby mere simulacra - and it is worth investigating, empirically and philosophically, what those differences amount to.Pierre-Normand

    Yes, they are radically different. Unlike computational systems, we are biological systems with pre-intentional abilities that enable our intentional states to determine their conditions of satisfaction.

    Some abilities might consist of neural networks and patterns of processing, but then you have relations between the biology and its environment, the nature of matter, etc., which arguably amount to a fundamental difference between AGI and the biological phenomenon that it supposedly simulates.

    Of course we can also ditch the assumption that it is a simulation and just think of AGI as information technology.

    Of course, this is all still quite different from the way human cognition works, with our [sic] biological neural networks and their own unique patterns of parallel and serial processing. And there's still much debate and uncertainty around the nature of machine intelligence and understanding.

    But I think the transformer architecture provides a powerful foundation for integrating information and dynamically shifting attention in response to evolving goals and contexts. It allows for a kind of flexible, responsive intelligence that goes beyond simple serial processing.
    Pierre-Normand

    It's a leap forward in information technology, for sure.
  • Patterner
    972
    Those of you who have been at this longer than I have, and know more about it than I do, correct me if I'm wrong about any of this.

    At this point, it seems to me, no AI is anything but a very efficient Chinese Room. Humans programmed it so that, when a string of symbols is input (what we call words), it chooses the appropriate string of symbols to output. It doesn't have any capacity to know what those strings of symbols mean to us, or to assign them any meaning of its own.

    Further, the AI doesn't "see" the strings of symbols we input. It "sees" the binary that is the basis of the symbols. When 1011101000101000 is input, it outputs 1110010010100011 (whatever). And even that has no meaning for the AI. Ultimately, it's nothing more than dominoes knocking each other down.

    The meaning in the system is only seen by us. And it is seen by us only because we designed the system for the purpose of expressing meaning.
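
    To put that picture in concrete terms, it amounts to something like the following lookup (a deliberately crude sketch of the "dominoes" idea, not how modern AI systems are actually built):

    ```python
    # A crude "Chinese Room" picture: a fixed table mapping input symbol
    # strings to output symbol strings. Nothing in the table means anything
    # to the machine; the mapping only means something to the humans who wrote it.
    rulebook = {
        "1011101000101000": "1110010010100011",
        "0100110111001010": "0001011010011100",
    }

    def respond(symbols: str) -> str:
        # Look up the appropriate output string; no understanding involved.
        return rulebook.get(symbols, "0000000000000000")

    print(respond("1011101000101000"))  # -> 1110010010100011
    ```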

    And that sure sounds like a Hard Problem ;-)Wayfarer
    Indeed! How is it we are capable of all that? How do we make AI capable of it?
  • wonderer1
    2.2k
    The meaning in the system is only seen by us. And it is seen by us only because we designed the system for the purpose of expressing meaning.Patterner

    I'd say it's not that simple. These days AIs regularly recognize meaningful patterns in data that humans had not previously recognized.
  • Patterner
    972

    I can understand that. But I can see patterns without knowing their meaning. Without even knowing if they mean anything at all. Give me two ten thousand-digit strings of 0s and 1s, and I'll bet there are repeated patterns within each, and between them. Are they random patterns that just happen to repeat? Are they Morse code? If I hold the printout against the horizon, will it match the Alps?

    Does an AI understand any of that? Or does it just recognize that the string of digits 110101100100000101101101011000.......000110100101101010 that it just "saw" matches a string of digits in File 1110010001010001000010110110101011010, Sub-File 11111110001100000, ... Sub-Sub-Sub-Sub-File 10111?
  • Pierre-Normand
    2.4k
    Yes, they are radically different. Unlike computational systems, we are biological systems with pre-intentional abilities that enable our intentional states to determine their conditions of satisfaction.

    Some abilities might consist of neural networks and patterns of processing, but then you have relations between the biology and its environment, the nature of matter, etc., which arguably amount to a fundamental difference between AGI and the biological phenomenon that it supposedly simulates.

    Of course we can also ditch the assumption that it is a simulation and just think of AGI as information technology.
    jkop

    It is true that our pre-intentional abilities enable us to have an active role in the formative process by means of which, beginning as non-linguistic infants, we are bootstrapped into our host culture and learn our first language. We grasp affordances, including social affordances, before we are able to conceptualise them fully. This makes me think of Chomsky's review of Skinner's Verbal Behavior. Interestingly, both Chomsky and Skinner could have been regarded as LLM AI-skeptics, in two different and incompatible ways. (Chomsky is still alive, of course, and he was mightily unimpressed by ChatGPT, although he may have seen GPT-3.5 in its more hallucinogenic and stochastic-parroty moods.)

    One of Chomsky's core arguments, in his criticism of the way Skinner attempted to account for the acquisition of linguistic abilities through operant conditioning, was the poverty of the stimulus. Rather in line with what you and @Benj96 suggested, Chomsky thought that mere reinforcement of unstructured behaviors would not be sufficient to enable a non-linguistic infant to latch onto the semantically significant features of language; only an innate "universal grammar", according to him, can enable them to grasp those salient features. The stimulus provided by the senses (including hearing samples of structured verbal behavior) allows for too many possible interpretations for the infant to be able to latch onto the correct ones on the basis of mere reinforcement. Skinner, who was skeptical of innate cognitive abilities, thought that a complex and protracted enough schedule of reinforcement would be enough to teach children how to use language competently.

    Both Skinner's and Chomsky's stances would appear to bear on the contemporary experiment that the development of large language models realizes. The Skinnerian skeptical stance might have led him to point out that his view had been vindicated and Chomsky had been proven wrong: training is enough to bootstrap an unintelligent computational system into mastering grammar and language use. Skinner's reductionism, though, would also lead him to deny that the concepts of "intrinsic intentionality" or "mental states", as applied to either human beings or LLMs, signify anything over and above patterns of (overt or "covert") verbal behavior.

    Chomsky's reaction is different and closer to your own (and to Benj96's), it seems to me. Without the sort of inner or autonomous guidance that an innate grammar provides, the impressive behaviors of LLMs are seen by him as exemplifying something like overfitting to their massive amount of training data, and hence as more akin to rote memorisation than genuine understanding.

    Regarding Chomsky's views on the intentionality of thought and language, he initially had argued that human minds have "referential" intentionality - our thoughts and utterances are intrinsically "about" things in the external world. So his internalism was rather akin to Searle's. More recently(*), he has questioned whether notions like "reference" and "intentionality" are coherent or explanatorily useful. He has suggested that human language use is best understood in terms of internal computations over mental representations, rather than in terms of relations between words and external objects.

    My own view is that since LLMs are embedded in human practices, even though they need to be bootstrapped into language understanding without reliance on human-like pre-intentional or proto-conceptual innate abilities, their training data and interactions with humans do ground their language use in the real world to some degree. Their cooperative interactions with their users furnish a form of grounding somewhat in line with Gareth Evans' consumer/producer account of the semantics of proper names. (I should say more about this on another occasion). And finally, I would argue that their performance goes beyond mere overfitting or memorization. Large language models like GPT-4 and Claude 3 demonstrate a remarkable ability to generalize and respond appropriately to novel inputs.

    (*) Thanks to Claude 3 for having pointed this out to me, and having supplied supporting references that I was able to check!
  • Benj96
    2.3k
    that somehow, e.g. with increased complexity, it would suddenly become a duplication. It won't.jkop

    Unless consciousness is a product of complexity. As we still don't know what makes matter aware or animate, we cannot exclude the possibility that it is complexity of information transfer that imbues this "sensation". If that is the case, and consciousness is indeed high grades of negative entropy, then it's not so far-fetched to believe that we can create it in computers.
  • Benj96
    2.3k
    Do you believe that human-constructed artefacts, which are engineered to correct errors in order to function within a predefined scope, are subject to the same emergent possibilities as organic systems, which can exploit apparent errors and thereby expand their scope of operations?Pantagruel

    In all honesty, I don't know. Hence the OP.

    But if scope is what we're focusing on here, I don't see why we can't open the predefined scope of AI to an undefined scope, as it is (relatively speaking) in nature. If we can somehow recreate the conditions of a naturally evolving system, and the scope of that, in computers, I don't think it's impossible to conceive of AGI inheriting the same emergent phenomena, i.e. awareness.
  • Benj96
    2.3k
    For it to fear death, it would have to be aliveWayfarer

    Maybe. Or perhaps, if it had a primary and immutable directive to "continue to process information", then by proxy it would do everything in its power not to be "turned off", as that would mean it had failed its primary directive. That does "not compute" for such an algorithm.

    Now is that fear of death? I'm not so sure. It's certainly a contentious subject open to debate. But I know our survival instinct as a whole is "almost" an immutable drive/directive (with the exception of the suicidal and martyrs).

    I just wonder if the behaviour of an AI that doesn't want to be turned off or "die" would then emulate the behaviours we see in humans who don't wish to die. That's a very interesting posit, but one fraught with potential ethical implications.
  • Benj96
    2.3k
    I wonder if giving AI the ability to "forget" might confer a sense of the passage of time: i.e. a present, a past (which is ever more vague and patchy the further back it recalls) and a sense of the future (by anticipatory deduction from the other two).