• Wayfarer
    25.2k
    All of a sudden, we’re living in the age of simulated intelligence. Large language models such as ChatGPT and Google Gemini compose essays, summarize arguments, and generate intelligent responses across a vast range of topics. However, a deep question lurks beneath the surface of these astounding developments. Surely, artificial intelligence mimics reasoning — but does it actually reason? For that matter, what does it mean to reason? Is reason something that can be described in terms of algorithms, inputs and outputs? Or is there something deeper at its core?

    Here I would like to explore the limits of simulated intelligence and the deeper nature of reason as seen through the lens of philosophy.

    Vincent J. Carchidi, “Rescuing Mind from the Machines”(link)

    This essay, published in Philosophy Now, offers a timely and philosophically grounded argument for the irreducibility of mind in an era of rapid AI advancement. Carchidi begins by recalling the original aspiration of artificial intelligence: not merely to build machines that perform tasks, but to create systems capable of making sense of the human mind itself. Over time, however — and especially with the striking progress of today’s large language models — that aspiration has shifted from metaphor to ambition: from simulating the mind to replicating it. This shift, he warns, risks devaluing the uniquely human character of mind and meaning.

    To illuminate what is at stake, Carchidi revisits René Descartes’ classic ‘problem of other minds’ and, in particular, his famous language test. In Descartes’ time, the growing fascination with mechanical automata had already sparked speculation that humans might themselves be sophisticated machines. Descartes allowed that bodily functions could be explained mechanistically, but insisted that machines — no matter how well engineered — could never engage in genuinely meaningful speech. They might emit words (or vocables) in response to stimuli, but they could not participate in open-ended, context-sensitive dialogue. This, for Descartes, revealed the crucial difference: only beings with minds could speak meaningfully. He wrote with amazing prescience:

    For one could easily conceive of a machine that is made in such a way that it utters words, and even that it would utter some words in response to physical actions that cause a change in its organs—for example, if someone touched it in a particular place, it would ask what one wishes to say to it, or if it were touched somewhere else, it would cry out that it was being hurt, and so on. But it could not arrange words in different ways to reply to the meaning of everything that is said in its presence, as even the most unintelligent human beings can do. …[and] even if they did many things as well as or, possibly, better than any one of us, they would infallibly fail in others. Thus one would discover that they did not act on the basis of knowledge, but merely as a result of the disposition of their organs. For whereas reason is a universal instrument that can be used in all kinds of situations, these organs need a specific disposition for every particular action. — René Descartes

    For Descartes (and this was in 1637!) the universal instrument of reason — a faculty with which the rational soul of humans was alone endowed — was the key differentiator of human from mechanical intelligence.

    Carchidi then explores how the problem re-emerged in the 20th century through computability theory. Alan Turing (1912–1954) showed that a single machine — now called a Turing machine — could, in principle, perform any computation. This meant that infinite outputs could be generated from a finite set of rules. While this breakthrough founded computability theory, it didn’t solve the original, Cartesian problem: what makes language meaningful?

    In the mid-20th century, linguists like Noam Chomsky applied computability theory to human language, introducing a distinction between competence (the abstract capacity for an infinite variety of expressions) and performance (how language is used in context). Yet recognising this formal distinction doesn’t account for meaningful use — how we understand, interpret, and creatively generate language in real life. Computability tells us what’s possible, but not necessarily what’s meaningful. That gap marks the limit of machine models — and the return of Descartes’ old question, now posed in modern terms: how can mechanical systems ever account for the creative, meaningful use of language? As Chomsky noted:

    It is quite possible — overwhelmingly probable, one might guess — that we will always learn more about human life and human personality from novels than from scientific psychology.
— Language and Mind (1968)

    This underscores that meaning, not just the logical structure of a text, is at the heart of human language — and that on this basis it can be expected that the depth of reason goes well beyond what can be generated by algorithms, no matter how seemingly clever.

    From this, Carchidi identifies three distinctive attributes of human language use:

    • Spontaneity: Human language is not bound to specific environmental stimuli. As Carchidi observes, “Generally, stimuli in a human’s local environment appear to elicit utterances, but not cause them.” This distinction is crucial in separating intelligent expression from mere reflex.
    • Unboundedness: There is no fixed repertoire of utterances. Human language is infinitely generative, allowing for “unlimited combination and recombination of finite elements into new forms that convey new, independent meanings.”
    • Contextual Appropriateness: Human utterances are responsive to context in meaningful and often unpredictable ways — even when no immediate stimulus justifies the connection (e.g., “That reminds me of…”). Such responsiveness points to interpretive depth beyond algorithmic pattern-matching.

    Carchidi contends that current AI systems fall short of these human capacities in key ways:

    • Circumscribed: their outputs are fully dependent on training data and determined by algorithmic processes. They do not respond in the human sense; they merely react.
    • Weakly Unbounded: while they generate novel strings, they do not express thoughts or form true meaning-pairs. They recombine patterns, but do not initiate or express intentions.
    • Functionally Appropriate Only: appropriateness is mechanical, not interpretive; their outputs are not chosen but triggered.

    In contrast, human speech is neither fully determined (as in a reflex) nor random (as in mere noise or meaningless words). These distinctions show us that LLMs are not, and do not possess, minds. They lack the agency, intentionality, and freedom that characterise rational sentient beings.

    The Upshot: Meaningful Speech and Reasoning Are Not Algorithmic

    Carchidi emphasizes that language use is not the product of causal determinism but an expression of freedom, situated within the space of reasons — a normative structure where meaning, not mere function, governs.

    This power of judgment, in the Kantian sense, cannot be reduced to pattern recognition or data processing. Large language models, though sophisticated, lack intentionality: they do not mean what they say, nor are they aware of their own outputs. That, precisely, is what Descartes meant when he claimed that machines cannot act “on the basis of knowledge.” His famous “language test” remains a challenge not only to mechanistic theories of mind, but to any account of cognition that reduces meaning to physical process. What Descartes intuited — and what Chomsky later formalized — is that language reveals a kind of universality and spontaneity that transcends stimulus-response mechanisms. Speech testifies to a formative power — the capacity of mind to shape, initiate, and express meaning. In short: the power of reason.
  • Astorre
    119
    This is a very interesting and fascinating topic for me. In addition to your conclusions, I would like to present some of my views, which I will express briefly.

    1. AI does not have its own bodily way of perceiving the world. Everything that AI "knows" about us, about our affairs, it takes from human texts. Returning to the ideas of Heidegger, Husserl, Merleau-Ponty, AI is deprived of temporality and finitude, it is deprived of living experience. Today it is a complex algorithmic calculator.

    2. I am convinced that the origins of being, which make a person who he is, cannot be known rationally. But if such knowledge were ever attained, the meaning of being itself would immediately disappear, and being along with it. If we describe being programmatically and algorithmically, it turns out that the very purpose of being is to execute the algorithm. Finding meaning leads to the loss of meaning.
  • sime
    1.1k
    I think a common traditional mistake of both proponents and critics of the idea of AGI, is the Cartesian presumption that humans are closed systems of meaning with concrete boundaries; they have both tended to presume that the concept of "meaningful" human behaviour is reducible to the idea of a killer algorithm passing some sort of a priori definable universal test, such as a Turing test, where their disagreement is centered around whether any algorithm can pass such a test rather than whether or not this conception of intelligence is valid. In other words, both proponent and critic have traditionally thought of intelligent behaviour, both natural and artificial, as being describable in terms of a winning strategy for beating a preestablished game that is taken to test for agentic personhood; critics of AGI often sense that this idea contains a fallacy, but without being able to put their finger on where the fallacy lies.
    Instead of questioning whether intelligence is a meaningful concept, namely the idea that intelligence is a closed system of meaning that is inter-subjective and definable a priori, critics instead reject the idea that human behaviour is describable in terms of algorithms and appeal to what they think of as a uniquely human secret sauce that is internal to the human mind for explaining the apparent non-computable novelty of human decision making. Proponents know that the secret sauce idea is inadmissible, even if they share the critic's reservation that something is fundamentally wrong in their shared closed conception of intelligence.

    We see a similar mistake in the Tarskian traditions of mathematics and physics, where meaning is considered to amount to a syntactically closed system of symbolic expressions that constitutes a mirror of nature, where human decision-making gets to decide what the symbols mean, with nature relegated to a secondary role of only getting to decide whether or not a symbolic expression is true. And so we end up with the nonsensical idea of a theory of everything, which is the idea that the universe is infinitely compressible into finite syntax, which parallels the nonsensical idea of intelligence as a strategy of everything, which ought to have died with the discovery of Gödel's incompleteness theorems.

    The key to understanding AI is to understand that the definition of intelligence in any specific context consists in satisfied communication between interacting parties, where none of the interacting parties gets to self-identify as intelligent; that is a consensual decision dependent upon whether communication worked. The traditional misconception of the Turing test is to treat it as a test of inherent qualities of the agent sitting the test; rather, the test represents another agent that interacts with the tested agent, in which the subjective criteria of successful communication define intelligent interaction, meaning that intelligence is a subjective concept relative to a cognitive standpoint during the course of a dialogue.
  • Leontiskos
    5k
    - Great post. :up:
  • Leontiskos
    5k
    - Excellent OP. :up:
  • Philosophim
    3k
    1. AI does not have its own bodily way of perceiving the world. Everything that AI "knows" about us, about our affairs, it takes from human texts. Returning to the ideas of Heidegger, Husserl, Merleau-Ponty, AI is deprived of temporality and finitude, it is deprived of living experience. Today it is a complex algorithmic calculator.Astorre

    Fantastic. AI knows the world through language models. How it 'understands' it is a subjective notion that we have no more understanding of than we do of a person. But what we can know is its inputs and what it processes. If we could allow a language-processing model to have access to the five senses of people, then we could begin to compare it to people. As it is, it is likely an intelligence, just one constrained in what thinking is for it, as well as what it can think about.
  • MoK
    1.8k
    Surely, artificial intelligence mimics reasoning — but does it actually reason?Wayfarer
    If by reasoning you mean the ability to think, then I have to say that we still don't know how humans think; therefore, we cannot build something with the ability to think until we understand how we think.
  • ToothyMaw
    1.4k
    I think a common traditional mistake of both proponents and critics of the idea of AGI, is the Cartesian presumption that humans are closed systems of meaning with concrete boundaries; they have both tended to presume that the concept of "meaningful" human behaviour is reducible to the idea of a killer algorithm passing some sort of a priori definable universal test, such as a Turing test, where their disagreement is centered around whether any algorithm can pass such a test rather than whether or not this conception of intelligence is valid.sime

    Given that we know the Turing Test, for example, only measures a subset of both human and intelligent behavior, I don't think anyone (here) is saying that there is some sort of a priori "universal" test that requires the complete distillation of the breadth of human behavior and the ways we create meaning in the form of an algorithm for said algorithm to pass such a test. As such, we wouldn't be testing for "meaningful" human behavior - what you say is equated to a killer algorithm - but rather behavior that humans are likely to engage in that is considered intelligent, which could be organized according to a factual criterion. Passing just shows that the machine or algorithm can exhibit intelligent behavior equivalent to that of a human, not that it is equivalent to a human in all of the cognitive capacities that might inform behavior. That's it. We can have a robust idea of intelligence and what constitutes meaningful behavior and still find a use for something like the Turing Test.
  • 180 Proof
    16k
    :up:

    Returning to the ideas of Heidegger, Husserl, Merleau-Ponty, AI is deprived of temporality and finitude, it is deprived of living experience. Today it is a complex algorithmic calculator.Astorre
    :up: :up:

    the nonsensical idea of a theory of everything, which is the idea that the universe is infinitely compressible into finite syntaxsime
    :fire:

    Vincent J. Carchidi, “Rescuing Mind from the Machines”(link)

    This essay, published in Philosophy Now ...
    Wayfarer
    Thanks for this. :up:

    https://philosophynow.org/issues/168/Rescuing_Mind_from_the_Machines
  • Wayfarer
    25.2k
    Instead of questioning whether intelligence is a meaningful concept, namely the idea that intelligence is a closed system of meaning that is inter-subjective and definable a priori, critics instead reject the idea that human behaviour is describable in terms of algorithms and appeal to what they think of as a uniquely human secret sauce that is internal to the human mind for explaining the apparent non-computable novelty of human decision making. Proponents know that the secret sauce idea is inadmissible, even if they share the critic's reservation that something is fundamentally wrong in their shared closed conception of intelligence.sime

    I think that’s a rather deflationary way of putting it. The 'non-computable' aspect of decision-making isn’t some hidden magic, but the fact that our decisions take place in a world of values, commitments, and consequences. That’s not a closed system — it's an open horizon that makes responsibility possible. Human beings, as @Astorre points out above, are bound by limitations or constraints that could never even occur to an AI system. It has no 'skin in the game', so to speak. Nothing matters to it.

    I agree that phenomenology has some important things to say about what intelligence means. I'm also intrigued by your second point:

    I am convinced that the origins of being, which make a person who he is, cannot be known rationally. But if such knowledge were ever attained, the meaning of being itself would immediately disappear, and being along with it.Astorre

    Perhaps you might elaborate on why you think this must be so? (not that I don't agree with you!)

    Given that we know the Turing Test, for example, only measures a subset of both human and intelligent behavior, I don't think anyone (here) is saying that there is some sort of a priori "universal" test that requires the complete distillation of the breadth of human behavior and the ways we create meaning in the form of an algorithm for said algorithm to pass such a test.ToothyMaw

    With the possibility of AGI being debated – the 'G' in AGI signifying a degree of autonomous intelligence – and the related discussion of whether AI systems are truly conscious, the questions of meaning really ought to be central. I mean, there are many people now who are convinced AI systems are persons. There was a CNN story recently about a married couple, where the husband is convinced that his AI friend has a 'spiritual message for mankind', and the wife thinks him delusional (which he probably is). But that's just one example; there are going to be many, many others.
  • Astorre
    119
    Perhaps you might elaborate on why you think this must be so? (not that I don't agree with you!)Wayfarer

    I’m happy to answer your question, as it’s part of my broader work. I’ll try to describe this idea concisely, avoiding unnecessary elaboration.

    My work is grounded in a process-oriented approach to ontology. Instead of seeking a final substance of everything, I’ve aimed to identify the key ontological characteristics of being—that is, the traits that define the existence of something. One such characteristic is limitation (not in the Kantian sense, but ontologically). Something is always distinct from something else; it always exists within certain boundaries. The uniqueness of human being, compared to other entities, lies in the ability to consciously alter some of its boundaries or limits, for example, through knowledge. However, boundaries must always be drawn, even if temporarily. Without boundaries, there is nothing to contain. Something without boundaries becomes nothing (just as a river that loses its banks ceases to be a river, or a planet that loses its limits ceases to be a planet). The same applies to human knowledge. Knowledge inherently requires boundaries. These boundaries must lie somewhere between knowing and not-knowing. Complete knowledge of everything would mean the absence of any boundary to knowledge, and the absence of a boundary implies the absence of being itself.

    This is a rather complex explanation of my idea at the ontological level, but I hope you find it interesting.

    People continue to live despite the lack of a definitive answer to the question of meaning, finding an irrational impulse that keeps them going. Let's leave it at that.
  • Wayfarer
    25.2k
    I hope you find it interesting.Astorre

    I do. I make a similar point in On Purpose, with respect to organisms generally - they are all engaged, even very primitive organisms, with maintaining themselves distinct from their environment. If they’re subsumed by their environment, they are dead. It’s what ‘dead’ means.

    I wonder if this is at all relatable to Gilles Deleuze’s idea of the fundamental nature of difference? I only know about it from comments made here on this forum, but it strikes me that there’s a similarity.
  • Astorre
    119
    I wonder if this is at all relatable to Gilles Deleuze’s idea of the fundamental nature of difference? I only know about it from comments made here on this forum, but it strikes me that there’s a similarity.Wayfarer

    My work generally resonates with the ideas of Whitehead and Deleuze, but I add some additional layers to them. This slightly veers off from the main topic.

    For instance, the next characteristic in my ontology is participation. Existence is possible not only through the difference of one from another but also through participation (like a tree with the soil or a beetle with dung)—in other words, difference alone is not enough. I consistently argue that something cannot exist on its own, and participation is precisely an ontological characteristic. Returning to your example—organisms do not merely maintain boundaries but exist through active participation in their environment, without which neither the environment nor the organisms themselves would be possible.

    In Russian, I use the term "причастность," which doesn’t fully translate into English as "participation." Unfortunately, I cannot find a perfect English equivalent that captures its complete meaning.

    I’ve probably strayed too far from the main topic.
  • Wayfarer
    25.2k
    Perhaps, but it is a vital insight nonetheless. (Interestingly, if I select that Russian term and choose Translate, the choice offered is ‘involvement’, which is not too far from the mark!)
  • Astorre
    119


    Yes, perhaps this is the right word. AI suggests something like "communion." I will simply try to explain it: for a native Russian speaker, "communion" evokes a sense of deep involvement, participation, and almost mystical unity with something. In an Old Slavic context, the word may be associated with a religious or spiritual act (for example, "communion" in Orthodoxy as a connection to the divine).

    In general, I identify such features of existence as limitations and Communion. In addition, there are two more features: Embodiment and Tension. I would like to discuss the main work in a consistent manner over time, breaking it down into separate essays on this forum. I hope for your "participation" in the future!
  • Wayfarer
    25.2k
    For sure, looking forward to it!
  • sime
    1.1k
    I think that’s a rather deflationary way of putting it. The 'non-computable' aspect of decision-making isn’t some hidden magic, but the fact that our decisions take place in a world of values, commitments, and consequences.Wayfarer

    I actually find it tempting to define computability in terms of what humans do, following Wittgenstein's remark on the Church-Turing thesis, in which he identified the operations of the Turing machine with the concept of a human manually following instructions. Taken literally, that remark inverts the ontological relationship between computer science and psychology often assumed in the field of AI, which tends to think of the former as grounding the latter rather than the converse.
    An advantage of identifying computability in terms of how humans understand rule following (as opposed to say, thinking of computability platonically in terms of a hypothesized realm of ideal and mind-independent mathematical objects), is that the term "non-computability" can then be reserved to refer to the uncontrolled and unpredictable actions of the environment in response to human-computable decision making.

    As for the secret-sauce remark, I was thinking in particular of the common belief that self-similar recursion is necessary to implement human-level reasoning, a view held by Douglas Hofstadter, which he has come to question in recent years given the lack of self-similar recursion in apparently successful LLM architectures — something Hofstadter acknowledges came as a big surprise to him.

    Passing just shows that the machine or algorithm can exhibit intelligent behavior equivalent to that of a human, not that it is equivalent to a human in all of the cognitive capacities that might inform behavior. That's it. We can have a robust idea of intelligence and what constitutes meaningful behavior and still find a use for something like the Turing Test.ToothyMaw

    Sure, the Turing test is valid in particular contexts. The question is whether it is a test of an objective, test-independent property: is "passing a Turing test" a proof of intelligence, or is it a context-specific definition of intelligence from the standpoint of the tester?
  • Gnomon
    4.2k
    Surely, artificial intelligence mimics reasoning — but does it actually reason? For that matter, what does it mean to reason? Is reason something that can be described in terms of algorithms, inputs and outputs? Or is there something deeper at its core?Wayfarer
    Carchidi touches on that "deeper" question. He notes that "although some scholars argue that language is not necessary for thought, and is best conceived as a tool for communication". For example, animals communicate their feelings via grunts & body language, but their vocabulary is very limited. Human "reasoning", however, goes beyond crude feelings into differences and complex interrelationships between this & that. How do you understand human thought: is it analogous to computer language, processing 1s & 0s, or more like amorphous analog Smells?

    One feature of human Reasoning is the ability to trace the chain of causes back to its origin or originator, whether a mechanical cause or a creative agent. This is a necessary talent for social creatures. Reasoning is logic-based; logic is relationship-based; and relationships, in a social context, are meaning-based. But algorithms are rule-based, not meaning-based. However, as computer algorithms get more complex and sophisticated, they may become better able to simulate human reasoning (like a higher-resolution image). Yet, without a human body for internal reference, the simulation may be lacking substance. A metal frame robot may come closer to emulating humanity, but it's the frailties of flesh that human social groups have in common. :smile:
  • L'éléphant
    1.6k
    I like the OP a lot. I responded to another thread a while ago regarding language before coming to this thread.
    As a proponent of human agency and intentionality, I find the overwhelming, even aggressive, defense in favor of the AI having the human mind, or on equal footing with the human mind, a bit treacherous. WE are the default mind. Our mind is the model to which they look up for approval. Imitation is the best compliment.

    As Chomsky noted

    It is quite possible — overwhelmingly probable, one might guess — that we will always learn more about human life and human personality from novels than from scientific psychology.
— Language and Mind (1968)
    Wayfarer
    I enjoy the sarcasms of the philosophers. They are always nuggets of truth.
  • Wayfarer
    25.2k
    Thanks! I found the essay itself very insightful; I think he makes a really important point about the distinction between human abilities and artificial simulation.

    without a human body for internal reference, the simulation may be lacking substance. A metal frame robot may come closer to emulating humanity, but it's the frailties of flesh that human social groups have in commonGnomon

    :100: I elaborate on some of these themes in Part II

    Part Two

    This brings us to the deeper and more elusive question: what does it mean to reason? The word is often invoked — as if self-evident — in contrast with feeling, instinct, or mere calculation. Yet its real meaning resists easy definition. Reason is not simply deduction or inference. As the discussion so far suggests, it involves a generative capacity: the ability to discern, initiate, and understand meaning. The following references are an attempt to explore the question of the grounding of reason in something other than formal logic or scientific rationalism. The basic premise here is that reason must be animated by some real concern - it needs to grapple with the 'why' of existence itself, not simply operational methods for solving specific problems.

    A Phenomenological Perspective

    This is the deeper territory Edmund Husserl, founder of phenomenology, explored in The Crisis of the European Sciences, his last work, published after his death in 1938. He sees the ideal of reason not merely as a formal tool of logic or pragmatic utility, but as the defining spiritual project of European humanity. He calls this project an entelechy — a striving toward the full realization of humanity’s rational essence. In this view, reason is transcendental — because it seeks the foundations of knowledge, meaning, and value as such. It is this inner vocation — the dream of a life grounded in truth and guided by insight — that Husserl sees as both the promise and the crisis of Western civilization: promise, because the rational ideal still lives as a guiding horizon; crisis, because modern science, in reducing reason to an instrumental or objectivist pursuit, has severed it from its original philosophical and ethical grounding.

    The Instrumentalisation of Reason

    The idea of the instrumentalisation of reason was further developed by the mid-twentieth-century Frankfurt School. Theodor Adorno and Max Horkheimer described instrumental reason as the use of reason as a tool to achieve specific goals or ends, focusing on efficiency and effectiveness in means-ends relationships. It is a form of rationality that prioritizes the selection of optimal means to reach a pre-defined objective, without necessarily questioning the value or morality of that objective itself, morality being left to individual judgement or social consensus.

    
    Instrumental reason focuses on optimizing the relationship between actions and outcomes, seeking to maximize the achievement of a specific goal, generally disregarding the subjective values and moral considerations that might normally be associated with the ends being pursued. According to Adorno and Horkheimer, instrumental reason has become the dominant mode of thinking in modern societies, particularly within technocratic and capitalist economies.

    In The Eclipse of Reason, Horkheimer contrasts two conceptions of reason: objective reason, as found in the Ancient Greek texts, grounded in universal values and aiming toward truth and ethical order; and today’s instrumental reason, which reduces reason to a tool for efficiency, calculation, and control. Horkheimer argues that modernity has seen the eclipse of reason, as rationality becomes increasingly subordinate to technical utility and subjective interest, severed from questions of meaning, purpose, or justice. This shift, he warns, impoverishes both philosophy and society, leading to a form of reason that can no longer critically assess ends—only optimize means.

    Tillich’s Ultimate Concern

    For humans, even ordinary language use takes place within a larger horizon. As Paul Tillich observed, we are defined not simply by our ability to speak or act, but by the awareness of an ultimate concern — something that gives weight and direction to all our expressions, whether we are conscious of it or not. This concern is not merely psychological; it is existential. It forms the background against which reasoning, judgment, and meaning become possible at all.

    “Man, like every living being, is concerned about many things... But man, in contrast to other living beings, has spiritual concerns — cognitive, aesthetic, social, political. They are expressed in every human endeavor, from language and tools to philosophy and religion. Among these concerns is one which transcends all others: it is the concern about the ultimate.”
— Paul Tillich, The Dynamics of Faith (1957)

    Without this grounding, reason risks becoming a kind of shell — formally coherent, apparently persuasive, but conveying nothing meaningful. Rationality divorced from meaning can produce propositions that are syntactically correct yet semantically empty — the form of reason without its content.

    Heidegger: Reason is Grounded by Care

    If Tillich’s notion of ultimate concern frames reason in theological terms — as a responsiveness to what is of final or transcendent significance — Heidegger grounds the discussion in the facts of human existence. His account of Dasein (the being for whom Being is a question) begins not with faith or transcendence, but with facticity — the condition of being thrown into a world already structured by meanings, relationships, and obligations.

    Even if Heidegger is not speaking in a theological register, he too sees reason not merely as abstract inference but as embodied in concerned involvement with the world. For Heidegger, we do not stand apart from existence as detached spectators. We are always already in the world — in a situated, embodied, and temporally finite way. This “thrownness” (Geworfenheit) is not a flaw but essential to existence. And we seek to understand because something matters to us. Even logic, for Heidegger, is not neutral. It emerges from care — our directedness toward what matters. This is the dimension of reasoning that is absent from AI systems.

    What AI Systems Cannot Do

    The reason AI systems do not really reason, despite appearances, is, then, not a technical matter so much as a philosophical one. It is because, for those systems, nothing really matters. They generate outputs that simulate understanding, but these outputs are not bound by an inner sense of value or purpose. Their processes are indifferent to meaning in the human sense — to what it means to say something because it is true, or because it matters. They do not live in a world; they are not situated within a horizon of intelligibility or care. They do not seek understanding, nor are they transformed by what they express. In short, they lack intentionality — not merely in the technical sense, but in the fuller phenomenological sense: a directedness toward meaning.

    This is why machines cannot truly reason, and why their use of language — however fluent — remains confined to imitation or simulation. Reason is not just a pattern of inference; it is an act of mind, shaped by actual concerns. The difference between human and machine intelligence is not merely one of scale or architecture — it is a difference in kind.

    Finally, and importantly, this is not a criticism, but a clarification. AI systems are enormously useful and may well reshape culture and civilisation. But it's essential to understand what they are — and what they are not — if we are to avoid confusion, delusion, and self-deception in using them.
  • Metaphysician Undercover
    14.1k
    The following references are an attempt to explore the question of the grounding of reason, in something other than formal logic or scientific rationalism. — Wayfarer

    The grounding of reason is necessarily something outside the bounds of reason. This makes it unreasonable or even irrational. The grounding feature, the irrational, will always be a part of any instance of human reasoning, and this is what makes human reasoning impossible to be imitated by AI. In a broad sense, this feature is known to philosophers as intuition.