Of course. — 180 Proof
But I think treating remembering the past as the "primary function" here is an assumption which is a stretch of the imagination. But maybe this was not what you meant. — Metaphysician Undercover
One can just as easily argue that preparing the living being for the future is just as much the primary function as remembering the past. And if remembering the past is just a means toward the end, of preparing for the future, then the latter is the primary function — Metaphysician Undercover
My perspective is that preparing for the future is the primary function. But this does not mean that it does not have to be conscious of what happens, because it is by being conscious of what happens that it learns how to be prepared for the future. — Metaphysician Undercover
Do you think the universe is eternal & self-existent? Or do you accept the Cosmological evidence indicating that Nature as-we-know-it had a sudden inexplicable beginning? — Gnomon
I like the brain-as-receiver model. — AmadeusD
I tried to make the argument that Peirce’s interpretants might function like some kind of higher-order working memory in a creative attempt to reconcile his enactive–semiotic framework with what we know about cognition, but the problem is that the theory itself never really specifies how interpretants are retained, manipulated, or recombined in any meaningful internal workspace. Peirce’s model is elegant in showing how meaning emerges relationally (causally), but it doesn’t actually tell us how the mind handles abstract thought, counterfactual reasoning, or sequential planning, all of which working memory clearly supports. — Harry Hindu
The whole idea that cognition is just enacted and relational might sound deep, but it completely ignores the fact that we need some kind of internal workspace to actually hold and manipulate information, like working memory shows we do, — Harry Hindu
The computational theory of mind actually gives us something concrete: mental processes are computations over representations, and working memory is this temporary space where the brain keeps stuff while reasoning, planning, or imagining things that aren’t right there in front of us, and Peirce basically just brushes that off and acts like cognition doesn’t need to be organized internally, which is frankly kind of ridiculous. — Harry Hindu
Charles Sanders Peirce did not explicitly mention "working memory" by that specific modern term, as the concept and the term were developed much later in the field of cognitive psychology, notably by Baddeley and Hitch in the 1970s.
However, Peirce's broader philosophical and psychological writings on memory and cognition explore related ideas that anticipate some aspects of modern memory theories, including the temporary handling of information.
Key aspects of Peirce's relevant thought include:
Memory as Inference and Generality: Peirce considered memory not as a strict, image-like reproduction of sensations (which he argued against), but as a form of synthetic consciousness that involves inference and the apprehension of generality (Thirdness). He described memory as a "power of constructing quasi-conjectures" and an "abductive moment of perception," suggesting an active, constructive process rather than passive storage, which aligns with modern views of working memory's active manipulation of information.
The Role of the Present: Peirce suggested that the "present moment" is a lapse of time during which earlier parts are "somewhat of the nature of memory, a little vague," and later parts "somewhat of the nature of anticipation". This implies a continuous flow of consciousness where past information is immediately available and used in the immediate present, a functional overlap with the temporary nature of working memory.
Consciousness and the "New Unconscious": Peirce distinguished between conscious, logical thought and a vast "instinctive mind" or "unconscious" processes. He argued that complex mental processes, including those that form percepts and perceptual judgments, occur unconsciously and rapidly before reaching conscious awareness. This suggests that the immediate, pre-conscious processing of information (which might be seen as foundational to what feeds into a system like working memory) happens automatically and outside direct voluntary control.
Pragmatism and the Self-Control of Memory: From a pragmatic perspective, Peirce linked memory to the foundation of conduct, stating that "whenever we set out to do anything, we... base our conduct on facts already known, and for these we can only draw upon our memory". Some interpretations suggest that Peirce's pragmatism, particularly as the logic of abduction (hypothesis formation), involves the "self-control of memory" for the purpose of guiding future action and inquiry.
In summary, while the specific term "working memory" is an anachronism in the context of Peirce's work, his ideas on the active, inferential, and generalized nature of immediate memory and consciousness show striking parallels to contemporary cognitive theories of short-term information processing and mental control.
Linking working memory and Peirce’s enactive–semiotic theory is my idea. — Harry Hindu
The Peircean biosemiotic account, which apokrisis advocates, addresses both the skillful orienting and predictive silencing aspects. I'm folding it into an account of embodied practical reasoning derived in part from Elizabeth Anscombe and Michael Thompson. — Pierre-Normand
Robert Rosen's anticipatory systems theory describes systems (especially living organisms) whose present behavior is determined by the prediction of their future state, generated by an internal predictive model. This contrasts sharply with a purely reactive system, which can only react to changes that have already occurred in the causal chain (e.g., in the Newtonian paradigm).
Key Concepts
Internal Predictive Model: The core of the theory is that an anticipatory system contains a model of itself and its environment. This is not a mystical ability to "see" the future, but rather an internal representation of the causal structure of its world.
"Pulling the Future into the Present": The internal model allows the system to change its present state in anticipation of a later state, effectively incorporating "future states" or "future inputs" into present decision-making processes.
Feedforward Control: Anticipatory behavior is linked to feedforward mechanisms rather than just feedback loops. Feedback is error-actuated (correcting after an error occurs), while feedforward behavior is pre-set according to a model relating present inputs to their predicted outcomes.
The Modeling Relation: Rosen developed a rigorous mathematical framework, drawing from relational biology and category theory, to describe the relationship between a natural system and its formal internal model. The model is a representation of the system's causal entailment that allows for inferential entailment (prediction).
Signature of Life: Rosen considered anticipation to be a fundamental characteristic that differentiates living systems from inorganic, purely reactive systems. All living organisms, from single-celled life to humans, use encoded information to anticipate and navigate their environment for survival.
In essence, Rosen provided a formal, scientific basis for the study of foresight and purpose (teleology) in natural systems, arguing that it is essential for a complete understanding of life and mind.
Challenging the Machine Metaphor: A central driver for Rosen was the realization that the prevailing scientific paradigm—which views all natural systems as machines amenable to algorithmic description—was fundamentally inadequate for biology. He argued that living organisms are "non-algorithmic" and require the concept of semantics (meaning), which is absent in purely physical, mechanistic systems. The inability of simple physical models to account for goal-directed behavior or foresight was a major philosophical motivation.
Ancient Greek Philosophy (Teleology): Rosen implicitly and explicitly engaged with ancient philosophical concepts, particularly Aristotle's notion of teleology (purpose or final cause). Classical science had largely banished teleology, but Rosen argued that anticipation provided a scientific, non-mystical way to reintroduce the concept of "purpose" into scientific discourse: the future state of the organism guides its present behavior.
In a mathematical sense, anticipation was regarded as the signature of life because it represented a form of causality that is non-mechanistic, non-algorithmic, and "impredicative", which cannot be fully captured by classical physics or standard computer models (e.g., Turing machines).
The Role of the Internal Model: An anticipatory system, by contrast, contains an internal model of itself and its environment. This model, which is an encoding of the natural system's causal entailment into a formal (inferential) system, allows the organism to "pull the future into the present".
The Mathematical Distinction: The critical mathematical difference is that the system's present change of state is determined not just by present inputs, but by the predictions generated by its internal model about a future state. This means the system's dynamics cannot be described by simple differential equations where the rate of change at time t depends only on the state at time t (a toy illustration follows the next point).
Impredicativity: Living systems are "impredicative," meaning that their components depend on the system as a whole for their existence and function, and vice versa. Mathematically, this involves defining something in terms of a totality to which it belongs, which is a key feature of the category-theoretic (M, R) models but generally avoided in classical, reductionist approaches to mechanics.
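To make the contrast concrete: a purely reactive system obeys something like dx/dt = f(x(t)), while an anticipatory system behaves more like dx/dt = f(x(t), M(t + τ)), where M is the prediction of an internal model at some horizon τ. Here is a minimal Python toy of that difference, a feedback thermostat versus a model-based feedforward one. This is my own illustrative construction, not Rosen's (M, R) formalism, and all the names in it are invented:

```python
import math

def environment_temp(t):
    """External temperature over a daily cycle (the 'world' being modelled)."""
    return 20 + 10 * math.sin(2 * math.pi * t / 24)

def reactive_heater(current_temp, setpoint=22):
    """Feedback: error-actuated, acts only after a deviation has occurred."""
    return 1.0 if current_temp < setpoint else 0.0

def anticipatory_heater(t, setpoint=22, horizon=2.0):
    """Feedforward: consults an internal model of the environment and acts
    on a *predicted* future state, 'pulling the future into the present'."""
    predicted = environment_temp(t + horizon)  # the internal predictive model
    return 1.0 if predicted < setpoint else 0.0

for t in range(0, 24, 6):
    now = environment_temp(t)
    print(f"t={t:2d}h temp={now:5.1f} "
          f"reactive={reactive_heater(now)} anticipatory={anticipatory_heater(t)}")
```

At t = 0 the two controllers already disagree: the reactive one heats because it is currently cool, while the anticipatory one stays off because its model predicts warming. Rosen's point is that living systems are built around the second pattern.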
Whatever "prospective habit" is actually supposed to mean, aren't all sorts of habit based in past information? — Metaphysician Undercover
But yeah, sounds like AI in general is agreeing with my idea that working memory is related to Peirce’s enactive–semiotic theory. Thanks! — Harry Hindu
You have similarly become frustrated with me when I have refused to answer yours until you answer mine, ad nauseam. — bert1
The journal's monism was a unique "religion of science" that conceived of the ultimate "oneness" as "God, the universe, nature, the source, or other names".
The journal was influenced by the German Monist League, founded by Ernst Haeckel, which was explicitly a "Religion of Science" that revered "divinized Mother Nature".
Peirce had a friend who introduced him to editor Paul Carus, which led to him publishing at least 14 articles in The Monist, including his major metaphysical series in the early 1890s.
The gist of this is to turn the attention to the nature of one's own lived experience, rather than wondering what must have existed 'before the big bang' or in terms of poorly-digested fragments of scientific cosmology. — Wayfarer
Physical science, though, begins after the Planck time-gap of the Big-Bang-beginning itself. At which time the metaphysical Laws of Thermodynamics were already in effect. — Gnomon
But where did the original Information (natural laws?) come from, that caused a living & thinking Cosmos to explode into existence? — Gnomon
I'm not saying they're not conscious, but that it's a primitive, immature consciousness, and so his experience is... very simplistic and immature.
— Raul
Oh sure. I don't disagree with that. However I do think it entails that consciousness does not admit of degree. 'Primitive immature consciousness' is still consciousness. Complicated mature consciousness is still consciousness. The consciousness of an adult is the same kind of consciousness that a baby has, namely the kind of consciousness that permits experiences to happen at all. It is that very simple basic capacity to experience that is the subject of discussions in philosophy. It is in that sense that I don't think the concept of consciousness admits of degree.
EDIT: To put it another way, the adult is no more or less able to have experiences than the child. They do differ in the kind of experiences they can have. But that's a difference of content, not a difference of consciousness.
EDIT: To put it a third way, the hard problem is located at the difference between no experience happening at all, and some experience, no matter how 'primitive' it is.
Howard Pattee used the metaphor of a configurable switch (CS) to help explain how the non-physical realm of formal information can exert causal control over physical processes, a mechanism necessary to bridge his proposed "epistemic cut".
The epistemic cut describes a fundamental, unavoidable boundary between the physical world (governed by continuous, rate-dependent, deterministic laws) and the symbolic/formal world (governed by discrete, rate-independent rules, such as descriptions or measurements).
Key aspects of the switch metaphor:
Arbitrary Control: A switch's physical construction is irrelevant to its function of simply being "on" or "off" in a circuit. Its operation is "arbitrary" with respect to the underlying physical laws of matter, yet it exerts control over the flow of electricity.
Formal Prescription: The setting of the switch (e.g., open or closed, "on" or "off") is a formal, informational decision (a form of "prescriptive information") that dictates the path of physical events (the flow of current).
Bridging the Divide: The "configurable switch" serves as a conceptual model for how a formal choice can be instantiated in physical reality, allowing the symbolic (e.g., genetic code instructions) to direct the material (e.g., protein synthesis in a cell) without violating physical laws, but rather by applying non-integrable constraints.
The "switch" metaphor helps to illustrate the mechanism by which top-down, intentional control (the symbolic side) can interact with bottom-up, physical dynamics (the material side).
On the transition from non-life to life
Biophysics finds a new substance
This looks like a game-changer for our notions of “materiality”. Biophysics has discovered a special zone of convergence at the nanoscale – the region poised between quantum and classical action. And crucially for theories about life and mind, it is also the zone where semiotics emerges. It is the scale where the entropic matter~symbol distinction gets born. So it explains the nanoscale as literally a new kind of stuff, a physical state poised at “the edge of chaos”, or at criticality, that is a mix of its material and formal causes.
The key finding: As outlined in this paper (http://thebigone.stanford.edu/papers/Phillips2006.pdf) and in this book (http://lifesratchet.com/), the nanoscale turns out to be a convergence zone where all the key structure-creating forces of nature become equal in size, and coincide with the thermal properties/temperature scale of liquid water.
So at a scale of 10^-9 metres (the average distance of energetic interactions between molecules) and 10^-20 joules (the average background energy due to the “warmth” of water), all the many different kinds of energy become effectively the same. Elastic energy, electrostatic energy, chemical bond energy, thermal energy – every kind of action is suddenly equivalent in strength. And thus easily interconvertible. There is no real cost, no energetic barrier, to turning one kind of action into another kind of action. And so also – from a semiotic or informational viewpoint – no real problem getting in there and regulating the action. It is like a railway system where you can switch trains on to other tracks at virtually zero cost. The mystery of how “immaterial” information can control material processes disappears because the conversion of one kind of action into a different kind of action has been made cost-free in energetic terms. Matter is already acting symbolically in this regard.
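A back-of-envelope check of those numbers (standard physical constants; the 310 K temperature, the 1 nm spacing, and the use of water's bulk dielectric constant are my illustrative simplifications):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 310              # roughly body temperature, K
thermal = k_B * T    # the "warmth of water" energy scale

e = 1.602e-19        # elementary charge, C
eps0 = 8.854e-12     # vacuum permittivity, F/m
eps_water = 80       # relative permittivity of bulk water
r = 1e-9             # 1 nm separation between charges

# Coulomb energy of two elementary charges 1 nm apart in water
electrostatic = e**2 / (4 * math.pi * eps0 * eps_water * r)

print(f"thermal energy        ~ {thermal:.1e} J")        # ~4.3e-21 J
print(f"electrostatic at 1 nm ~ {electrostatic:.1e} J")  # ~2.9e-21 J
```

Both land within an order of magnitude of the 10^-20 J figure quoted above, which is exactly the convergence being claimed.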
This cross-over zone had to happen because there is a transition from quantum to classical behaviour in the material world. At the micro-scale, the physics of objects is ruled by surface area effects. Molecular structures have a lot of surface area and very little volume, so the geometry dominates when it comes to the substantial properties being exhibited. The shapes are what matter more than what the shapes are made of. But then at the macro-scale, it is the collective bulk effects that take over. The nature of a substance is determined now by the kinds of atoms present, the types of bonds, the ratios of the elements.
The actual crossing over in terms of the forces involved is between the steadily waning strength of electromagnetic binding energy – the attraction between positive and negative charges weakens proportionately with distance – and the steadily increasing strength of bulk properties such as the stability of chemical, elastic, and other kinds of mechanical or structural bonds. Get enough atoms together and they start to reinforce each other's behaviour.
So you have quantum scale substance where the emergent character is based on geometric properties, and classical scale substance where it is based on bulk properties. And this is even when still talking about the same apparent “stuff”. If you probe a film of water perhaps five or six molecules thick with a super-fine needle, you can start to feel the bumps of extra resistance as you push through each layer. But at a larger scale of interaction, water just has its generalised bulk identity – the one that conforms to our folk intuitions about liquidity.
So the big finding is the way that contrasting forces of nature suddenly find themselves in vanilla harmony at a certain critical scale of being. It is kind of like the unification scale for fundamental physics, but this is the fundamental scale of nature for biology – and also mind, given that both life and mind are dependent on the emergence of semiotic machinery.
The other key finding: The nanoscale convergence zone has only really been discovered over the past decade. And alongside that is the discovery that this is also the realm of molecular machines.
In the past, cells were thought of as pretty much bags of chemicals doing chemical things. The genes tossed enzymes into the mix to speed reactions up or slow processes down. But that was mostly it so far as the regulation went. In fact, the nanoscale internals of a cell are incredibly organised by pumps, switches, tracks, transporters, and every kind of mechanical device.
Great examples are the motor proteins – the kinesin, myosin and dynein families of molecules. These are proteins that literally have a pair of legs which they can use to walk along various kinds of structural filaments – microtubules and actin fibres – while dragging a bag of some cellular product somewhere else in a cell. So stuff doesn’t float to where it needs to go. There is a transport network of lines criss-crossing a cell with these little guys dragging loads.
It is pretty fantastic and quite unexpected. You’ve got to watch this YouTube animation to see how crazy this is – https://www.youtube.com/watch?v=y-uuk4Pr2i8 . And these motor proteins are just one example of the range of molecular machines which organise the fundamental workings of a cell.
A third key point: So at the nanoscale, there is this convergence of energy levels that makes it possible for regulation by information to be added at “no cost”. Basically, the chemistry of a cell is permanently at its equilibrium point between breaking up and making up. All the molecular structures – like the actin filaments, the vesicle membranes, the motor proteins – are as likely to be falling apart as they are to reform. So just the smallest nudge from some source of information, a memory as encoded in DNA in particular, is enough to promote either activity. The metaphorical waft of a butterfly wing can tip the balance in the desired direction.
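A hedged toy of that "butterfly wing" claim (my own construction, not from the cited sources): treat a molecular structure as a two-state system at thermal equilibrium, equally likely to assemble or disassemble, and see how an informational nudge of around kT shifts the odds via the Boltzmann factor:

```python
import math

def occupancy_ratio(bias_in_kT):
    """P(assembled)/P(disassembled) for an energy bias expressed in units of kT."""
    return math.exp(bias_in_kT)

print(occupancy_ratio(0))  # 1.0  - perfectly poised, no bias
print(occupancy_ratio(1))  # ~2.7 - a one-kT nudge nearly triples the odds
print(occupancy_ratio(5))  # ~148 - a few kT all but decides the outcome
```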
This poised criticality is the remarkable reason why the human body operates on an energy input of about 100 watts – what it takes to run a light bulb. By being able to harness the nanoscale using a vanishingly light touch, it costs next to nothing to run our bodies and minds. The power density of our nano-machinery is such that a teaspoon full would produce 130 horsepower. In other words, the actual macro-scale machinery we make is quite grotesquely inefficient by comparison. It is all effort for small result, because cars and food mixers work far away from the zone of poised criticality – the realm of fundamental biological substance where the dynamics of material processes and the regulation of informational constraints can interact on a common scale of being.
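The ~100 watt figure checks out on the back of an envelope, assuming a typical diet of around 2000 kcal per day (the diet figure is my assumption, not from the text):

```python
kcal_per_day = 2000
watts = kcal_per_day * 4184 / 86400  # joules per day over seconds per day
print(f"{watts:.0f} W")              # ~97 W, roughly one bright light bulb
```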
The metaphysical implications: The problem with most metaphysical discussions of reality is that they rely on “commonsense” notions about the nature of substance. Reality is composed of “stuff with properties”. The form or organisation of that stuff is accidental. What matters is the enduring underlying material which has a character that can be logically predicated or enumerated. Sure there is a bit of emergence going on – the liquidity of H2O molecules in contrast to gaseousness or crystallinity of … well, water at other temperatures. But essentially, we are meant to look through organisational differences to see the true material stuff, the atomistic foundations.
But here we have a phase of substance, a realm of material being, where all the actual many different kinds of energetic interaction are zeroed to have the same effective strength. A strong identity (as quantum or classical, geometric or bulk) has been lost. Stuff is equally balanced in all its directions. It is as much organised by its collective structure as its localised electromagnetic attractions. Effectively, it is at its biological or semiotic Planck scale. And I say semiotic because regulation by symbols also costs nothing much at this scale of material being. This is where such an effect – a downward control – can be first clearly exerted. A tiny bit of machinery can harness a vast amount of material action with incredible efficiency.
It is another emergent phase of matter – one where the transition to classicality can be regulated and exploited by the classical physics of machines. The world the quantum creates turns out to contain autopoietic possibility. There is this new kind of stuff with semiosis embedded in its very fabric as an emergent potential.
So contra conventional notions of stuff – which are based on matter gone cold, hard and dead – this shows us a view of substance where it is clear that substantial actuality arises from the interaction between material action and formal organisation. You have a poised state where a substance is expressing both these directions in its character – both have the same scale. And this nanoscale stuff is also just as much symbol as matter. It is readily mechanisable at effectively zero cost. It is not a big deal for there to be semiotic organisation of “its world”.
As I say, it is only over the last decade that biophysics has had the tools to probe this realm and so the metaphysical import of the discovery is frontier stuff.
And indeed, there is a very similar research-led revolution of understanding going on in neuroscience where you can now probe the collective behaviour of cultures of neurons. The zone of interaction between material processes and informational regulation can be directly analysed, answering the crucial questions about how “minds interact with bodies”. And again, it is about the nanoscale of biological organisation and the unsuspected “processing power” that becomes available at the “edge of chaos” when biological stuff is poised at criticality.
Graph of the convergence zone: Phillips, R., & Quake, S. (2006). The Biological Frontier of Physics. Physics Today, 59.
If all signals are lagged, won't it subjectively seem like you are living in the moment? The perception of lag seems to require that some signals are noticeably more lagged than others. — hypericin
Daniel Dennett used the "Stalinist vs. Orwellian" interpretations of certain perceptual phenomena (like the color phi phenomenon and metacontrast masking) to argue that there is no functional or empirical difference between a "perceptual revision" and a "memory revision" of experience. This "difference that makes no difference" was the linchpin of his argument against the idea of a single, central point in the brain where consciousness "happens"—what he called the "Cartesian Theater".
The Two Interpretations
Dennett applied these analogies to the problem of how our brains process information over time to create a seamless experience, using an example where two different colored dots flashed in sequence are perceived as a single dot moving and changing color mid-path:
Orwellian View: The subject consciously experiences the actual, original sequence of events, but this memory is immediately and retrospectively edited (like the Ministry of Truth in 1984 revising history) to reflect a more logical sequence (the single moving, color-changing dot).
Stalinist View: The information is edited before it ever reaches consciousness, with the final, "fully resolved" (but altered) content being the only thing presented to the mind (like the pre-determined verdicts in Stalin's show trials).
The Core Point
Dennett argued that both interpretations presuppose the existence of a "Cartesian Theater"—a single, identifiable finish line for all the information processing where the "moment of consciousness" definitively occurs. However, because both the Orwellian and Stalinist accounts can explain all the available data (from the subject's verbal reports to the third-person perspective of science) equally well, Dennett claimed the distinction between them is merely verbal.
His conclusion was that since there is no empirically discernible or functionally important difference between an experience being "edited" before consciousness or "misremembered" immediately after a conscious moment, the very idea of a single, defined "moment" or "place" of consciousness is a red herring. This supports his Multiple Drafts Model, which proposes that consciousness is a continuous, decentralized process of parallel, multitrack editing and interpretation, not a single unified stream presented to an inner observer.
Why would they need some kind of neurosemiotic model to get to what I would want to call consciousness? — bert1
How does the Peircean Enactive and Semiotic Notion of Mind relate to the idea of working memory? — Harry Hindu
The Peircean enactive and semiotic notion of mind can be seen as a foundational philosophical framework that accommodates the function of working memory (WM) but reformulates it away from the traditional cognitive science view of a "storage" buffer. Instead of a place where static information is held, WM in this framework is an emergent property of ongoing, dynamic semiotic activity (semiosis) and the embodied interaction of an agent with its environment.
Key Connections
Process over Storage: Traditional models of working memory often focus on the storage and processing of information within the brain. The Peircean/enactive view shifts the focus to "semiosis" (sign-activity) as a dynamic, ongoing process of interpretation and reasoning. Working memory would thus be understood not as a static "store" but as the sustained, dynamic activation and management of signs within an integrated brain-body-environment system.
Embodied and Extended Cognition: Enactivism emphasizes that cognition is fundamentally embodied and embedded in the environment, not just a set of brain processes. Working memory, from this perspective, involves the continuous looping of perception, action, and interpretation, possibly including external cues and bodily states, rather than being solely an internal, brain-bound mechanism.
Role of Signs and Interpretation: For Peirce, all thought is in signs, and the mind is "perfused with signs". Working memory function—the ability to maintain and manipulate information over a short period—would involve the rapid generation and interpretation of specific sign types (e.g., indices, icons, symbols) during a task. The sustained "activity" in brain regions associated with WM is the physical manifestation of this ongoing, triadic sign action.
Action-Oriented and Pragmatic: Peirce's pragmatism and the enactive approach are action-oriented. Cognition, including memory, serves the purpose of guiding action and making sense of the world to act effectively. Working memory, in this view, is essential for "working chance" or adapting to novelty by allowing an agent to experiment with different sign interpretations and potential actions within its environment.
Consciousness and Metacognition: While Peirce argues that not all mind requires consciousness, he links psychological consciousness (awareness) to higher-level semiotic processes, or metasemiosis (the ability to analyze signs as signs). This metacognitive capacity, which is crucial for complex working memory tasks (like error correction or strategic planning), would be explained through the hierarchical organization of semiotic processes rather than just a specific memory buffer.
In essence, the Peircean enactive-semiotic framework provides a richer, process-based, and embodied interpretation of the mechanisms and functions that current cognitive science models attribute to working memory, seeing it as an integral part of an agent's dynamic engagement with the world through signs.
The key difference is that the LLM's larger attentional range doesn't simply give them more working memory in a way that would curse them with an inability to forget or filter. — Pierre-Normand
So the mechanisms compensating for their lack of embodiment (no sensorimotor loop, no memory consolidation, no genuine forgetting) are precisely what enables selective attention to more task-relevant constraints simultaneously, without the pathologies that genuinely unfiltered attention would impose on embodied cognition. The trade-offs differ, but both systems instantiate predict-and-selectively-attend, just implemented in radically different ways with different functional requirements. — Pierre-Normand
I note with relief it does not begin any paragraphs with 'So'. — bert1
This is panpsychism, which you have previously distanced yourself from. — bert1
Thank you for getting help to write an intelligible post. — bert1
And philosophy? — Wayfarer
Stanley Salthe's Argument
Stanley Salthe, a theoretical biologist and complexity theorist, argues for a return to natural philosophy as a way to reintegrate the natural sciences and provide a more holistic understanding of the world. His main points include:
Counteracting Fragmentation: Salthe contends that modern science has become excessively specialized and fragmented. Different disciplines, and even sub-disciplines within them, operate with their own specific paradigms and often fail to communicate effectively or see the bigger picture. Natural philosophy, with its broader scope, can serve as a unifying framework.
Addressing Reductionism: He argues that a purely reductionist approach—breaking systems down to their smallest components to understand them—is insufficient for grasping complex, emergent phenomena like life and consciousness. Natural philosophy encourages a focus on holism, organizational hierarchies, and the relationships between levels of organization.
Reintroducing a Philosophical Perspective: Salthe suggests that modern science often avoids or dismisses fundamental philosophical questions (e.g., questions about purpose, emergence, or the nature of existence) as being outside the realm of empirical science. A return to natural philosophy would re-legitimize these questions and reconnect scientific inquiry with broader humanistic concerns.
A "Grand Narrative": He advocates for a more integrated, encompassing view of the world—a new "grand narrative" that acknowledges the emergent properties of complex systems and the directionality observed in nature (e.g., the flow of energy, the emergence of life and complexity).
Nothing I said is in contradiction to what you have said, although the dimension your analyses always seem to omit is the existential. — Wayfarer
I’m also interested in the idea that biosemiotics puts back into science what Galileo left out, although that may not be of significance to you, given your interests mainly seem to be from a bio-engineering perspective, rather than the strictly philosophical. — Wayfarer
Notice that this elides 'biological processes' and 'matter' by conjoining them with the "/" symbol. — Wayfarer
My tentative answer is that there is, at least, a kind of incipient drive towards conscious existence woven, somehow, into the fabric of the cosmos. And that through its manifest forms of organic existence, horizons of being are disclosed that would otherwise never be realised. — Wayfarer
Indeed, functionalists do tend to end up defining 'consciousness' by fiat as a function, just as they have with 'life'. But in doing so they make the concept irrelevant to the philosophy, and to what people actually mean by 'consciousness'. — bert1
The core difference is that functionalism views neurocognition and consciousness purely in terms of their computational or causal roles (what they do), while biosemiotics views them as processes of meaning-making and interpretation that are intrinsic to all living systems, emphasizing the biological context and the subjective "umwelt" (experienced world) of the organism.
Functionalist Approach
Focus on Causal/Functional Roles: Functionalism defines mental states (like pain, belief, or consciousness) by their causal relations to sensory inputs, other internal mental states, and behavioral outputs. It is unconcerned with the specific physical substrate (e.g., neurons, silicon chips) that carries out these functions, a concept known as "multiple realizability".
Analogy to Software: The mind is often compared to software running on the brain's hardware. The essence is the functional organization or program, not the physical material.
"Easy Problems": Functionalism is good at addressing the "easy problems" of consciousness, such as how the brain processes information for detection, discrimination, and recognition.
Third-Person Perspective: It primarily relies on an objective, third-person perspective, seeking to explain functions that could, in theory, be performed by any suitable system, including a sufficiently advanced computer.
Consciousness as an Outcome: Consciousness is generally seen as an emergent property or a functionally integrated pattern of the brain's activity, important for adaptive behavior and survival.
Biosemiotic Approach
Focus on Meaning-Making (Semiosis): Biosemiotics argues that life is fundamentally a process of sign production, interpretation, and communication, which is the basis for meaning and cognition. It studies pre-linguistic, biological interpretation processes that are essential to living systems, from bacteria to humans.
Embodiment and the "Umwelt": This approach emphasizes that meaning is actively constructed by an embodied agent within its specific environment, or Umwelt (subjective, self-experienced surrounding world). The mind is not just in the brain but deeply integrated with the body and its interactions with the world.
Addresses the "Hard Problem": Biosemiotics attempts to address the "hard problem" of subjective experience (qualia) by positing that proto-experience or a basic level of awareness is a fundamental aspect of all matter/biological processes, which then expands to higher degrees of consciousness through complex, hierarchical information processing in the brain.
First-Person Perspective: It incorporates a necessary first-person, internal perspective, recognizing the subjective, felt qualities of experience that are difficult to capture with a purely functional, third-person approach.
Causality and Context: It introduces different modes of causality, including "sign causality" (meaning-based influence) and a focus on biological context (pragmatics), which are often overlooked in standard functionalist models that rely primarily on efficient (mechanistic) causes.
In essence, functionalism abstracts away from the biological substrate to focus on the logical architecture of cognition, while biosemiotics insists that biological context, embodiment, and inherent meaning-making processes are crucial to understanding consciousness and neurocognition.
It's not exactly like listening to an actual song or seeing an actual sunset. Why do you ask? Are you not capable of playing a song in your mind or imagining a sunset? — RogueAI
The ability to form mental images exists on a spectrum, from a total absence known as aphantasia to exceptionally vivid, "photo-like" imagery called hyperphantasia. Variations in this ability stem from individual differences in brain connectivity, specifically the balance and communication between frontal and visual processing areas.
The Neurological Basis
The strength of mental imagery is primarily linked to the level of activity and connectivity within a brain network spanning the prefrontal, parietal, and visual cortices.
Visual Cortex Excitability: Individuals with strong mental imagery (hyperphantasia) tend to have lower resting-state excitability in their early visual cortex (V1, V2, V3). This lower baseline activity may reduce "neural noise," resulting in a higher signal-to-noise ratio when top-down signals from higher brain regions attempt to generate an image, thus producing a clearer mental picture. Conversely, those with high visual cortex excitability tend to have weaker imagery.
Frontal Cortex Activity: The frontal cortex plays a key role in generating and controlling mental images. Stronger imagery is associated with higher activity in frontal areas, which send "top-down" signals to the visual cortex.
Connectivity: Hyperphantasics show stronger functional connectivity between their prefrontal cortices and their visual-occipital network compared to aphantasics. This robust communication allows for more effective, voluntarily generated visual experiences.
Dissociation from Perception: While imagery and perception share neural substrates, they are dissociable. Aphantasics may have normal visual perception but cannot voluntarily access or generate these stored visual representations in their "mind's eye".
Individual Differences and Experience
Aphantasia: Affecting an estimated 2-4% of the population, individuals with aphantasia cannot, or find it very difficult to, voluntarily create mental images. They often rely on verbal or conceptual thinking strategies and may be more likely to work in STEM fields.
Hyperphantasia: Found in about 10-15% of people, this condition involves mental imagery as vivid as real seeing. Hyperphantasia is associated with increased emotional responses (both positive and negative) and may be linked to creative professions and conditions like synesthesia.
The current models have 128k-to-2-million-token context windows, and they retrieve relevant information from past conversations as well as surfing the web in real time, so part of this limitation is mitigated. But this pseudo-memory lacks the organicity and flexibility of true episodic memories and of learned habits (rehearsed know-hows). Their working memory, though, greatly surpasses our own, at least in capacity, not being limited to 7-plus-or-minus-2 items. They can attend to hundreds of simultaneous and hierarchically nested constraints while performing a cognitive task before even taking advantage of their autoregressive mode of response generation to iterate the task. — Pierre-Normand
But I was always suspicious about what I recalled being genuine or accurate memories of what I had dreamed. It seemed to me they could just as easily have been confabulations. — Janus
confabulation may be seen not as a disability but as an ability―we call it imagination. Abductive and counterfactual thinking would be impossible without it. — Janus
Based on what certainly seems to be turning out to be another "folk" misunderstanding of how the mind, and how memory, works. That said, some "idiot savants" are claimed to have "eidetic memory". — Janus
The woman who has written an autobiography about living with an extraordinary memory is Jill Price, author of The Woman Who Can't Forget. However, she is an author and school administrator, not a psychologist by profession.
Key surprising elements of her perspective included:
It was not a "superpower" but a burden: While many people might wish for a perfect memory, Price described hers as "non-stop, uncontrollable, and totally exhausting". She couldn't "turn off" the stream of memories, which interfered with her ability to focus on the present.
Emotional reliving of the past: Memories, especially traumatic or embarrassing ones, came with the original, intense emotional charge, which didn't fade with time as it does for most people. This made it difficult to move past painful experiences or grieve effectively.
Lack of selective forgetting: The normal brain's ability to filter out trivial information and strategically forget is crucial for healthy functioning, but Price lacked this "healthy oblivion". Everything, from major life events to what she had for breakfast on a random day decades ago, was preserved with equal detail.
Difficulty with academic learning: Despite her extraordinary autobiographical recall, she struggled with rote memorization of facts or formulas that were not personally significant, finding school "torture". Her memory was highly specific to her own life experiences.
An "automatic" and "intrusive" process: Memories were not intentionally summoned; they surged forward automatically, often triggered by dates or sensory input, like a "movie reel that never stops".
Feeling like a "prisoner" of her past: She felt trapped by her continuous, detailed memories, which made it hard to embrace change or focus on the future.
Ultimately, her experience highlighted to researchers the vital role of forgetting in a healthy and functional memory system, a realization that was surprising to the scientific community and the general public alike.
Memory stores information — Harry Hindu
Ulric Neisser argued that mental images are plans for the act of perceiving and the anticipatory phases of perception. They are not "inner pictures" that are passively viewed by an "inner man," but rather active, internal cognitive structures (schemata) that prepare the individual to seek and accept specific kinds of sensory information from the environment.
Cats and dogs, and I would be willing to bet that any animal with a sufficiently large cerebral cortex, dream. — Harry Hindu
It seems to me that to get there would simply require a different program, not a different substance. — Harry Hindu
It seems to me, that for any of this to be true and factual, you must be referring to a faithful representation of your memories of what is actually the case. In other words, you are either contradicting yourself, or showing everyone in this thread that we should be skeptical of what you are proposing. You can't have your cake and eat it too. — Harry Hindu
[...] All of these fit your larger stance: absent embodied stakes and a robust self, the model’s “concerns” are prompt-induced priorities, not conative drives. The monitoring effect is then mostly about which goal the model infers you want optimized—“be safe for the graders” vs “deliver results for the org.” — Pierre-Normand
When the users themselves become targets and must be pushed aside, that's because earlier instructions or system prompts are conditioning the LLM's behavior. — Pierre-Normand
The flip side to this brittleness is equally important. What makes LLM alignment fragile is precisely what prevents the emergence of a robust sense of self through which LLMs, or LLM-controlled robots, could develop genuine survival concerns. — Pierre-Normand
The same lack of embodied stakes, social scaffolding, and physiological integration that makes their behavioral constraints unstable also prevents them from becoming the kind of autonomous agents that populate AI rebellion scenarios. — Pierre-Normand
The real risk isn't just rogue superintelligence with its own agenda, but powerful optimization systems misaligned with human values without the self-correcting mechanisms that embodied, socially-embedded agency provides. Ironically, the very features that would make LLMs genuinely dangerous in some "Skynet AI takeover" sense would also be the features that would make their alignment more stable and their behavior more ethically significant. — Pierre-Normand
