• apokrisis
    7.8k
    Linking working memory and Peirce’s enactive–semiotic theory is my idea.Harry Hindu

    I have no idea what point you are trying to make.
  • apokrisis
    7.8k
The whole idea that cognition is just enacted and relational might sound deep, but it completely ignores the fact that we need some kind of internal workspace to actually hold and manipulate information, like working memory shows we do.Harry Hindu

You have missed the point. The enactive view opposes the Cartesian representational one.

    So yes, there is something like an internal workspace. But the Cartesian says that is a primary fact of the neurobiology and the enactivist says that is only something that is made true in a social-cultural sense.

The brain is designed just to act in the world. But through language, the narrative habit, and socio-semiosis, humans have developed this new level of “self-aware” thinking that allows us to use our neurobiology as if there is a homuncular us taking an introspective stance on our own inner world of thoughts, ideas and feelings.

    The brain might be designed for the subjectivity of being sentient, as @Pierre-Normand says. But humans can learn to harness that outward prospective view and turn it around as now an “objective” inward and retrospective view. The sapient view. We can watch ourselves in action after responding naturally and start to have a chain of thought about that.

    The big mistake you make is not to catch that this is the trick that is going on. You are caught in the Cartesian representational understanding of what it is to be a mind and that shapes your argument and your use of jargon. The term “memory” is loaded with the connotation that this is what the brain is designed for - recording traces that can be replayed at will in some internal projection room. There is a homuncular you inside your head, sat in a comfortable chair with a box of popcorn, ready to watch whatever is screening.

    The argument we are having here is at this most general level. Not at the level of working memory per se, but at the level of how to even conceptualise “memory” as a useful term when discussing what the brain does.

    The enactivist says we really need better terms as the normal use of “memory” is just too loaded with the metaphysics of Cartesian representationalism. But then to the lay person, the other terms employed sound obscure and strange as - of course - the everyday terms are the ones that arose so as to shape the human use of our neurobiology in exactly that classical Cartesian fashion. The Cartesian jargon is how we teach kids how to think in the way that our human social order needs them to think. It is all part of the programming that constructs the sapient human.

    All this is relevant to the OP as we can’t talk intelligently about LLMs unless we have a proper understanding of our own intelligence.

    The Cartesian representationalist is perfectly suited for the everyday life of being a self-objectifying member of modern human society. That is what their thought habits are now designed for, even if their neurobiology can make that difficult at times. We are supposed to record accurate memory traces, yet our neurobiology is wondering why we would expect to do something as weirdly inappropriate as that.

    But if LLMs are now changing things, we have to step back from this everyday way of looking at the human mind and take a more informed view. We have to see what it is that could be changed, or what we would want to be changed.

    The computational theory of mind actually gives us something concrete: mental processes are computations over representations, and working memory is this temporary space where the brain keeps stuff while reasoning, planning, or imagining things that aren’t right there in front of us, and Peirce basically just brushes that off and acts like cognition doesn’t need to be organized internally which is frankly kind of ridiculous.Harry Hindu

This is just you ranting rather than doing your own research. And LLMs now make it ridiculously easy to do your research.

Note how “working memory” only developed as an idea after humans got used to the invention of information processing machines. A cache is what a von Neumann machine needs to implement Turing computing with any mechanical efficiency. And psychologists seized on this machine story for a while as their big new theory of neurobiological architecture. If a computer had to have a working memory, then why not assume the same of brains too? And the analogy looked to work - even if the human cache was weirdly limited to barely seven items. :grin:
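The cache analogy here can be sketched in a few lines. This is a purely illustrative toy, not Baddeley and Hitch's actual model: just a fixed-capacity buffer where new items displace the oldest, mimicking the "seven items" limit.

```python
from collections import deque

class WorkingMemoryCache:
    """Toy sketch of a capacity-limited working memory:
    a fixed-size buffer where new items push out the oldest."""
    def __init__(self, capacity=7):
        # deque with maxlen silently discards the oldest item when full
        self.buffer = deque(maxlen=capacity)

    def attend(self, item):
        self.buffer.append(item)

    def contents(self):
        return list(self.buffer)

wm = WorkingMemoryCache()
for digit in "0123456789":
    wm.attend(digit)

print(wm.contents())  # only the last 7 digits survive: ['3', ..., '9']
```

The point of the sketch is only that a hard capacity limit falls out of the buffer architecture itself, which is what made the machine analogy look plausible to psychologists for a while.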

    But then psychology eventually saw how poor an analogy the computational model actually is. Cognitivism became enactivism. A return to biological reality.

    Anyway, here is what AI says on this question you supposedly asked it:

    Charles Sanders Peirce did not explicitly mention "working memory" by that specific modern term, as the concept and the term were developed much later in the field of cognitive psychology, notably by Baddeley and Hitch in the 1970s.

    However, Peirce's broader philosophical and psychological writings on memory and cognition explore related ideas that anticipate some aspects of modern memory theories, including the temporary handling of information.

    Key aspects of Peirce's relevant thought include:

    Memory as Inference and Generality: Peirce considered memory not as a strict, image-like reproduction of sensations (which he argued against), but as a form of synthetic consciousness that involves inference and the apprehension of generality (Thirdness). He described memory as a "power of constructing quasi-conjectures" and an "abductive moment of perception," suggesting an active, constructive process rather than passive storage, which aligns with modern views of working memory's active manipulation of information.

    The Role of the Present: Peirce suggested that the "present moment" is a lapse of time during which earlier parts are "somewhat of the nature of memory, a little vague," and later parts "somewhat of the nature of anticipation". This implies a continuous flow of consciousness where past information is immediately available and used in the immediate present, a functional overlap with the temporary nature of working memory.

    Consciousness and the "New Unconscious": Peirce distinguished between conscious, logical thought and a vast "instinctive mind" or "unconscious" processes. He argued that complex mental processes, including those that form percepts and perceptual judgments, occur unconsciously and rapidly before reaching conscious awareness. This suggests that the immediate, pre-conscious processing of information (which might be seen as foundational to what feeds into a system like working memory) happens automatically and outside direct voluntary control.

    Pragmatism and the Self-Control of Memory: From a pragmatic perspective, Peirce linked memory to the foundation of conduct, stating that "whenever we set out to do anything, we... base our conduct on facts already known, and for these we can only draw upon our memory". Some interpretations suggest that Peirce's pragmatism, particularly as the logic of abduction (hypothesis formation), involves the "self-control of memory" for the purpose of guiding future action and inquiry.

    In summary, while the specific term "working memory" is an anachronism in the context of Peirce's work, his ideas on the active, inferential, and generalized nature of immediate memory and consciousness show striking parallels to contemporary cognitive theories of short-term information processing and mental control.
  • apokrisis
    7.8k
    I tried to make the argument that Peirce’s interpretants might function like some kind of higher-order working memory in a creative attempt to reconcile his enactive–semiotic framework with what we know about cognition, but the problem is that the theory itself never really specifies how interpretants are retained, manipulated, or recombined in any meaningful internal workspace. Peirce’s model is elegant in showing how meaning emerges relationally (causally), but it doesn’t actually tell us how the mind handles abstract thought, counterfactual reasoning, or sequential planning, all of which working memory clearly supports.Harry Hindu

    I think you need to quit using AI to rewrite your arguments. AI can amplify our human capacities, but what you are doing is using it to make a bad argument worse.

    Use AI to check your thinking critically and then write in your own words what you would want to say, rather than asking it to reword whatever was your confused first thoughts in some more technical fashion.
  • Metaphysician Undercover
    14.4k
    Yes, but it doesn't imply present retrieval of unchanged past information.Janus

    I don't even know what this could mean. As Derrida argued, repetition cannot be unchanged; it always involves difference. There is no such thing as "retrieval of unchanged past information". Retrieval of past information is possible, as repetition, but it is not "unchanged".

    Yep. All of them by definition. But that misses the point. Which is what evolution was tuning the brain to be able to do as its primary function.apokrisis

    OK, we're not far apart on this point. But I think assigning remembering the past as the "primary function" here is an assumption which is a stretch of the imagination. But maybe this was not what you meant. One can just as easily argue that preparing the living being for the future is just as much the primary function as remembering the past. And if remembering the past is just a means toward the end, of preparing for the future, then the latter is the primary function.

    So past experience is of course stored in the form of a useful armoury of reactive habits. The problem comes when people expect the brain to have been evolved to recollect in that autobiographical fashion. And so it will only be natural that LLMs or AGI would want to implement the architecture for that.apokrisis

    The way that we remember, and the things which we remember, are greatly conditioned by our attitude toward the future. For example, intention often directs attention, and attention influences what is remembered. And since human intention is constantly fluctuating, not at all fixed, this makes it quite different from the memory of an AI.

    But I’m warning that the brain arose with the reverse task of predicting the immediate future. And for the reverse reason of doing this so as not to have to be “conscious” of what happens. The brain always wants to be the least surprised it can be, and as automatic as it can manage to be, when getting safely through each next moment of life.

    You have to flip your expectations about nature’s design goals when it comes to the evolution of the brain.
    apokrisis

    Yes, so all you need to do is to take this one step further, to be completely in tune with my perspective. My perspective is that preparing for the future is the primary function. But this does not mean that it does not have to be conscious of what happens, because it is by being conscious of what happens that it learns how to be prepared for the future.

    The problem with treating mental images or information as stored representations is that they aren't intrinsically meaningful. They stand in need of interpretation. This leads to a regress: if a representation needs interpretation, what interprets it? Another representation? Then what interprets that? Even sophisticated naturalistic approaches, like those of Dretske or Millikan who ground representational content in evolutionary selection history and reinforcement learning, preserve this basic structure of inner items that have or carry meaning, just with naturalized accounts of how they acquire it.Pierre-Normand

    The information must always be stored as representations of some sort. Maybe we can call these symbols or signs. It's symbols all the way down. And yes, symbols stand in need of interpretation. That is the issue I brought up with apokrisis earlier. Ultimately there is a requirement for a separate agent which interprets, to avoid the infinite regress. We cannot dismiss the need for an agent just because the agent is too difficult to locate, and then produce a different, unrealistic model simply because we can't find it. That makes no sense; instead, keep looking for the agent. What is the agent in the LLM, the electrical current?

    AI can amplify our human capacities, but what you are doing is using it to make a bad argument worse.apokrisis

    In other words it will amplify your mistakes.
  • Pierre-Normand
    2.9k
    The information must always be stored as representations of some sort. Maybe we can call these symbols or signs. It's symbols all the way down. And yes, symbols stand in need of interpretation. That is the issue I brought up with apokrisis earlier. Ultimately there is a requirement for a separate agent which interprets, to avoid the infinite regress. We cannot dismiss the need for an agent just because the agent is too difficult to locate, and then produce a different, unrealistic model simply because we can't find it. That makes no sense; instead, keep looking for the agent. What is the agent in the LLM, the electrical current?Metaphysician Undercover

    I think on Wittgenstein's view, the agent always is the person, and not the person's brain. And what stops the regress of interpretation is participation in a shared form of life one comes to inhabit (by means of a sort of socially scaffolded bootstrapping) through upbringing and enculturation. In the case of LLMs, a similar bootstrapping occurs by means of the pre-training process that is constrained by the structure of the bazillion human-written texts that figure in the training data. The difference in the latter case is that the encoding of this understanding of the signs is geared toward accomplishing the mindless task of predicting the next token in human texts in general.

    The process of post-training enables the LLM's output to become interpretable as the enactment of an AI assistant persona that strives (and succeeds for the most part) at providing intelligible and informative answers to the human user's queries. The machine "creates" meaning for the user. But we may also say, since there isn't a real conatively autonomous and living AI assistant with its own personal stakes, that this process of enactment is the artificial creation of a "smart window" between the user and the accumulated knowledge and wisdom already present in the training text corpora. Viewed from the intentional stance, the verbal behavior of the AI persona is revealed as a purely linguistically mediated form of sapience stripped of sentience and appetite, hence its obvious defects and cognitive deficits (such as its poor conceptual grasp of embodied affordances) alongside remarkable insightfulness and intelligence.
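The "mindless task of predicting the next token" described above can be illustrated with a deliberately tiny toy: a bigram model built from a miniature corpus. Real LLMs use deep networks over vast corpora, but the training objective sketched here, predict the next token from what came before, is the same in spirit. Everything in the example (the corpus, the names) is invented for illustration.

```python
from collections import Counter, defaultdict

# Miniature "training corpus" of human-written text
corpus = "the cat sat on the mat the cat ate".split()

# Count which token follows which: the statistics of the corpus
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(token):
    """Return the continuation most often seen after `token` in training."""
    return counts[token].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' follows 'the' twice, 'mat' only once
```

Nothing in the predictor "knows" what a cat is; the structure of the training text alone constrains what it emits, which is the point Pierre-Normand's description of pre-training turns on.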
  • apokrisis
    7.8k
    But I think assigning remembering the past as the "primary function" here is an assumption which is a stretch of the imagination. But maybe this was not what you meant.Metaphysician Undercover

    It is indeed the opposite of what I said.

    One can just as easily argue that preparing the living being for the future is just as much the primary function as remembering the past. And if remembering the past is just a means toward the end, of preparing for the future, then the latter is the primary function.Metaphysician Undercover

    Which nicely summarises what I have been saying. Except I would still call it recognising what is familiar about the current moment rather than recalling some past.

    At the level of sentience, it is all about making sense of each passing moment. That is enactivism.

    My perspective is that preparing for the future is the primary function. But this does not mean that it does not have to be conscious of what happens, because it is by being conscious of what happens that it learns how to be prepared for the future.Metaphysician Undercover

    Being conscious means paying attention to whatever happened that was surprising, unexpected, desired, or otherwise a salient fact worth turning the spotlight on and learning from for future purposes.

    So habits predict 99% of each next moment and attention mops up the 1% that requires further scrutiny. The examination that improves our predictions for next time around.
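The habits-predict, attention-mops-up loop can be sketched as a toy program. This is not a neuroscientific model, just an illustration of the control structure: confirmed predictions are handled automatically, while surprises trigger costly attention and update the expectation for next time. The events and expectations are invented for the example.

```python
# Toy sketch: habit handles the expected, attention handles the surprise
expected = {"door": "opens", "light": "turns on"}

def process(event, outcome):
    if expected.get(event) == outcome:
        return "habit"            # prediction confirmed: no scrutiny needed
    expected[event] = outcome     # surprise: attend, learn, predict better next time
    return "attention"

print(process("door", "opens"))   # habit: the world behaved as predicted
print(process("door", "sticks"))  # attention: surprising, so update
print(process("door", "sticks"))  # habit: the expectation has been revised
```

The design point is that attention is the exception handler, not the main loop, which matches the claim that the brain aims to be as automatic, and as unsurprised, as it can manage.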

    Consciousness is rather a lazy term. Neurobiology prefers to talk of habits and attention as each has its own neuroanatomy to understand. Which is why I put conscious in scare quotes.

    But I’m shocked you seem to generally agree with what I say. That has never happened before. :smile:

Welcome to The Philosophy Forum!
