Questioning the Idea and Assumptions of Artificial Intelligence and Practical Implications

Expanding on this, what we call AI is effectively a sophisticated pattern-recognition system built on probability theory. It processes vast amounts of data to predict the most likely or contextually relevant response to an input, based on prior examples it has 'seen.' This process is fundamentally different from what we traditionally define as intelligence, which involves self-awareness, understanding, and the ability to independently synthesize new ideas.
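To make that concrete, here is a deliberately tiny sketch in Python of a bigram predictor. Real LLMs use vast neural networks rather than word counts, but the underlying principle is the same: the output is the statistically most probable continuation of the input, given prior examples. The corpus and numbers here are invented purely for illustration.

```python
from collections import Counter, defaultdict

# Toy illustration only: a bigram "language model" that predicts the most
# likely next word purely from prior examples it has 'seen'. Real LLMs are
# far more sophisticated, but the principle is the same: the answer is the
# statistically most probable continuation, not an understood one.

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (the learned "patterns").
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most probable next word and its probability."""
    counts = follows[word]
    if not counts:
        return None  # no prior example, so no basis for a prediction
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(predict_next("the"))  # ('cat', 0.5): pure frequency, no understanding
```

The model 'knows' nothing about cats or mats; it simply reproduces the frequencies of its training data.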
For instance, if you prompt it to write a story, the output may appear creative, but it isn't creativity as we know it. It's a recombination of patterns, tropes, and linguistic structures that statistically fit the input context. When it asks for more detail, it's not because it 'wants' clarity; it's because the probabilistic model lacks sufficient constraints to confidently predict the next sequence of outputs. This is an engineered response to ambiguity, not a deliberate or 'thoughtful' action.
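One plausible shape such an engineered response could take, again sketched in Python: if the distribution over candidate responses is too flat (high entropy, meaning no confident prediction), fall back to a clarification request. The entropy threshold and the fallback wording below are my own illustrative assumptions, not any actual product's mechanism.

```python
import math

def entropy(probs):
    """Shannon entropy in bits; higher means a flatter, less certain distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def respond(candidates, threshold=2.0):
    """candidates: dict mapping possible responses to their probabilities.
    The threshold is an arbitrary illustrative choice."""
    if entropy(candidates.values()) > threshold:
        return "Could you give me more detail?"  # the statistically 'safe' fallback
    return max(candidates, key=candidates.get)   # confident: emit the top response

confident = {"Paris": 0.9, "Lyon": 0.05, "Nice": 0.05}
ambiguous = {f"option_{i}": 0.125 for i in range(8)}  # uniform over 8: 3 bits

print(respond(confident))  # 'Paris'
print(respond(ambiguous))  # 'Could you give me more detail?'
```

Nothing in this dispatch resembles wanting clarity; the fallback fires mechanically whenever no single continuation is probable enough.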
This distinction matters because it reshapes our expectations of AI. We aren't interacting with a sentient partner but with a probability-based tool that mirrors our inputs. This raises questions about the limits of the approach and whether 'true' intelligence requires something fundamentally different: perhaps an entirely new architecture that incorporates self-directed goals and intrinsic understanding. Until then, what we have is a mirror of human information and creativity, not an independent spark of its own.
Interestingly, the way an AI operates isn't entirely dissimilar to a child learning about the world. A child often seeks additional information when they encounter uncertainty or ambiguity, much as an AI might request clarification in response to a vague or incomplete prompt. But there's a key difference: some children don't ask for more information. They might infer, guess, or produce entirely unpredictable responses based on their own internal thought processes, shaped by curiosity, past experiences, or even whimsy.
This unpredictability is fundamentally tied to intelligence: an ability to transcend the purely probabilistic and venture into the unexpected. What we see in Large Language Models, which I prefer to call Limited Intelligence (LI) rather than Artificial Intelligence, is a system bound by probabilities. It will almost always default to requesting clarity when uncertainty arises, because that is the most statistically 'safe' response within its design parameters. The kind of unpredictability that arises from genuine intelligence, the leap to an insight or an unconventional connection, is vanishingly unlikely in an LI system.
EDIT: One further thought occurred to me: with the addition of memory to an LLM, we are now in a position where a model will naturally come to reflect its user's responses and preferred styles. This creates an echo-chamber effect that could ultimately lead people to treat the AI's response as the final arbiter of truth, because it always presents ideas that seem rational and logical to the user (being a reflection of their own mind), and it could also erode a person's ability to consider multiple perspectives.