• Cosmos Created Mind
    Why do you ask?
    — 180 Proof

    Because that was the point I was making.
  • Cosmos Created Mind
    Of course.
    — 180 Proof

    And why do you too ignore the Planck energy density that came with the radius? And then also the Heat Death that inverts the deal, so the Big Bang becomes some maximally large radius with a maximally cold temperature, or least possible energy density?

    If you have an alternative explanation for the initial conditions of the Big Bang, now would be a good time to start crossing out and hieroglyphing your preferred theory.
  • How LLM-based chatbots work: their minds and cognition
    But I think assigning remembering the past as the "primary function" here is an assumption which is a stretch of the imagination. But maybe this was not what you meant.
    — Metaphysician Undercover

    It is indeed the opposite of what I said.

    One can just as easily argue that preparing the living being for the future is just as much the primary function as remembering the past. And if remembering the past is just a means toward the end, of preparing for the future, then the latter is the primary function.
    — Metaphysician Undercover

    Which nicely summarises what I have been saying. Except I would still call it recognising what is familiar about the current moment rather than recalling some past.

    At the level of sentience, it is all about making sense of each passing moment. That is enactivism.

    My perspective is that preparing for the future is the primary function. But this does not mean that it does not have to be conscious of what happens, because it is by being conscious of what happens that it learns how to be prepared for the future.
    — Metaphysician Undercover

    Being conscious means paying attention to whatever happened that was surprising, unexpected, desired, or otherwise a salient fact worth turning the spotlight on and learning from for future purposes.

    So habits predict 99% of each next moment and attention mops up the 1% that requires further scrutiny. The examination that improves our predictions for next time around.

    Consciousness is rather a lazy term. Neurobiology prefers to talk of habits and attention as each has its own neuroanatomy to understand. Which is why I put conscious in scare quotes.

    But I’m shocked you seem to generally agree with what I say. That has never happened before. :smile:
  • Cosmos Created Mind
    Do you think the universe is eternal & self-existent? Or do you accept the Cosmological evidence indicating that Nature as-we-know-it had a sudden inexplicable beginning?
    — Gnomon

    Well if you paid more attention to the key Planckscale fact that I mentioned - such as how the Big Bang was both the smallest smallness and the hottest hotness ever - then you might start to see that as the beginning of an explication.

    But carry on with your mystic idealism. :up:
  • Cosmos Created Mind
    I like the brain-as-receiver model.
    — AmadeusD

    The fact that it is a standard symptom of schizophrenia ought to give pause for thought.
  • How LLM-based chatbots work: their minds and cognition
    I tried to make the argument that Peirce’s interpretants might function like some kind of higher-order working memory in a creative attempt to reconcile his enactive–semiotic framework with what we know about cognition, but the problem is that the theory itself never really specifies how interpretants are retained, manipulated, or recombined in any meaningful internal workspace. Peirce’s model is elegant in showing how meaning emerges relationally (causally), but it doesn’t actually tell us how the mind handles abstract thought, counterfactual reasoning, or sequential planning, all of which working memory clearly supports.
    — Harry Hindu

    I think you need to quit using AI to rewrite your arguments. AI can amplify our human capacities, but what you are doing is using it to make a bad argument worse.

    Use AI to check your thinking critically and then write in your own words what you would want to say, rather than asking it to reword whatever was your confused first thoughts in some more technical fashion.
  • How LLM-based chatbots work: their minds and cognition
    The whole idea that cognition is just enacted and relational might sound deep, but it completely ignores the fact that we need some kind of internal workspace to actually hold and manipulate information, like working memory shows we do,
    — Harry Hindu

    You have missed the point. The enactive view opposed the Cartesian representational one.

    So yes, there is something like an internal workspace. But the Cartesian says that is a primary fact of the neurobiology and the enactivist says that is only something that is made true in a social-cultural sense.

    The brain is designed just to act in the world. But through language, the narrative habit, and socio-semiosis, humans have developed this new level of “self-aware” thinking that allows us to use our neurobiology as if there is a homuncular us taking an introspective stance on our own inner world of thoughts, ideas and feelings.

    The brain might be designed for the subjectivity of being sentient, as @Pierre-Normand says. But humans can learn to harness that outward prospective view and turn it around as now an “objective” inward and retrospective view. The sapient view. We can watch ourselves in action after responding naturally and start to have a chain of thought about that.

    The big mistake you make is not to catch that this is the trick that is going on. You are caught in the Cartesian representational understanding of what it is to be a mind and that shapes your argument and your use of jargon. The term “memory” is loaded with the connotation that this is what the brain is designed for - recording traces that can be replayed at will in some internal projection room. There is a homuncular you inside your head, sat in a comfortable chair with a box of popcorn, ready to watch whatever is screening.

    The argument we are having here is at this most general level. Not at the level of working memory per se, but at the level of how to even conceptualise “memory” as a useful term when discussing what the brain does.

    The enactivist says we really need better terms as the normal use of “memory” is just too loaded with the metaphysics of Cartesian representationalism. But then to the lay person, the other terms employed sound obscure and strange as - of course - the everyday terms are the ones that arose so as to shape the human use of our neurobiology in exactly that classical Cartesian fashion. The Cartesian jargon is how we teach kids how to think in the way that our human social order needs them to think. It is all part of the programming that constructs the sapient human.

    All this is relevant to the OP as we can’t talk intelligently about LLMs unless we have a proper understanding of our own intelligence.

    The Cartesian representationalist is perfectly suited for the everyday life of being a self-objectifying member of modern human society. That is what their thought habits are now designed for, even if their neurobiology can make that difficult at times. We are supposed to record accurate memory traces, yet our neurobiology is wondering why we would expect to do something as weirdly inappropriate as that.

    But if LLMs are now changing things, we have to step back from this everyday way of looking at the human mind and take a more informed view. We have to see what it is that could be changed, or what we would want to be changed.

    The computational theory of mind actually gives us something concrete: mental processes are computations over representations, and working memory is this temporary space where the brain keeps stuff while reasoning, planning, or imagining things that aren’t right there in front of us, and Peirce basically just brushes that off and acts like cognition doesn’t need to be organized internally which is frankly kind of ridiculous.
    — Harry Hindu

    This is just you ranting rather than doing your own research. And LLMs now make it ridiculously easy to do your research.

    Note how “working memory” did develop as an idea after humans got used to the invention of information processing machines. A cache is what a von Neumann machine needs to implement Turing computing with any mechanical efficiency. And psychologists seized on this machine story for a while as their big new theory of neurobiological architecture. If a computer had to have a working memory, then why not assume the same of brains too. And the analogy looked to work - even if the human cache was just weirdly limited to barely seven items. :grin:

    But then psychology eventually saw how poor an analogy the computational model actually is. Cognitivism became enactivism. A return to biological reality.

    Anyway, here is what AI says on this question you supposedly asked it:

    Charles Sanders Peirce did not explicitly mention "working memory" by that specific modern term, as the concept and the term were developed much later in the field of cognitive psychology, notably by Baddeley and Hitch in the 1970s.

    However, Peirce's broader philosophical and psychological writings on memory and cognition explore related ideas that anticipate some aspects of modern memory theories, including the temporary handling of information.

    Key aspects of Peirce's relevant thought include:

    Memory as Inference and Generality: Peirce considered memory not as a strict, image-like reproduction of sensations (which he argued against), but as a form of synthetic consciousness that involves inference and the apprehension of generality (Thirdness). He described memory as a "power of constructing quasi-conjectures" and an "abductive moment of perception," suggesting an active, constructive process rather than passive storage, which aligns with modern views of working memory's active manipulation of information.

    The Role of the Present: Peirce suggested that the "present moment" is a lapse of time during which earlier parts are "somewhat of the nature of memory, a little vague," and later parts "somewhat of the nature of anticipation". This implies a continuous flow of consciousness where past information is immediately available and used in the immediate present, a functional overlap with the temporary nature of working memory.

    Consciousness and the "New Unconscious": Peirce distinguished between conscious, logical thought and a vast "instinctive mind" or "unconscious" processes. He argued that complex mental processes, including those that form percepts and perceptual judgments, occur unconsciously and rapidly before reaching conscious awareness. This suggests that the immediate, pre-conscious processing of information (which might be seen as foundational to what feeds into a system like working memory) happens automatically and outside direct voluntary control.

    Pragmatism and the Self-Control of Memory: From a pragmatic perspective, Peirce linked memory to the foundation of conduct, stating that "whenever we set out to do anything, we... base our conduct on facts already known, and for these we can only draw upon our memory". Some interpretations suggest that Peirce's pragmatism, particularly as the logic of abduction (hypothesis formation), involves the "self-control of memory" for the purpose of guiding future action and inquiry.

    In summary, while the specific term "working memory" is an anachronism in the context of Peirce's work, his ideas on the active, inferential, and generalized nature of immediate memory and consciousness show striking parallels to contemporary cognitive theories of short-term information processing and mental control.
  • How LLM-based chatbots work: their minds and cognition
    Linking working memory and Peirce’s enactive–semiotic theory is my idea.
    — Harry Hindu

    I have no idea what point you are trying to make.
  • How LLM-based chatbots work: their minds and cognition
    The Peircean biosemiotic account, which apokrisis advocates, addresses both the skillful orienting and predictive silencing aspects. I'm folding it into an account of embodied practical reasoning derived in part from Elizabeth Anscombe and Michael Thompson.
    — Pierre-Normand

    There are many who could be cited. You might be interested also in Robert Rosen whose anticipatory systems approach stresses this at the biological level.

    AI shows how he strikes all the same notes….

    Robert Rosen's anticipatory systems theory describes systems (especially living organisms) whose present behavior is determined by the prediction of their future state, generated by an internal predictive model. This contrasts sharply with a purely reactive system, which can only react to changes that have already occurred in the causal chain (e.g., in the Newtonian paradigm).

    Key Concepts

    Internal Predictive Model: The core of the theory is that an anticipatory system contains a model of itself and its environment. This is not a mystical ability to "see" the future, but rather an internal representation of the causal structure of its world.

    "Pulling the Future into the Present": The internal model allows the system to change its present state in anticipation of a later state, effectively incorporating "future states" or "future inputs" into present decision-making processes.

    Feedforward Control: Anticipatory behavior is linked to feedforward mechanisms rather than just feedback loops. Feedback is error-actuated (correcting after an error occurs), while feedforward behavior is pre-set according to a model relating present inputs to their predicted outcomes.

    The Modeling Relation: Rosen developed a rigorous mathematical framework, drawing from relational biology and category theory, to describe the relationship between a natural system and its formal internal model. The model is a representation of the system's causal entailment that allows for inferential entailment (prediction).

    Signature of Life: Rosen considered anticipation to be a fundamental characteristic that differentiates living systems from inorganic, purely reactive systems. All living organisms, from single-celled life to humans, use encoded information to anticipate and navigate their environment for survival.

    In essence, Rosen provided a formal, scientific basis for the study of foresight and purpose (teleology) in natural systems, arguing that it is essential for a complete understanding of life and mind.

    Challenging the Machine Metaphor: A central driver for Rosen was the realization that the prevailing scientific paradigm—which views all natural systems as machines amenable to algorithmic description—was fundamentally inadequate for biology. He argued that living organisms are "non-algorithmic" and require the concept of semantics (meaning), which is absent in purely physical, mechanistic systems. The inability of simple physical models to account for goal-directed behavior or foresight was a major philosophical motivation.

    Ancient Greek Philosophy (Teleology): Rosen implicitly and explicitly engaged with ancient philosophical concepts, particularly Aristotle's notion of teleology (purpose or final cause). Classical science had largely banished teleology, but Rosen argued that anticipation provided a scientific, non-mystical way to reintroduce the concept of "purpose" into scientific discourse: the future state of the organism guides its present behavior.

    In a mathematical sense, anticipation was regarded as the signature of life because it represented a form of causality that is non-mechanistic, non-algorithmic, and "impredicative", which cannot be fully captured by classical physics or standard computer models (e.g., Turing machines).

    The Role of the Internal Model: An anticipatory system, by contrast, contains an internal model of itself and its environment. This model, which is an encoding of the natural system's causal entailment into a formal (inferential) system, allows the organism to "pull the future into the present".

    The Mathematical Distinction: The critical mathematical difference is that the system's present change of state is determined not just by present inputs, but by the predictions generated by its internal model about a future state. This means the system's dynamics cannot be described by simple differential equations where the rate of change at time t only depends on the state at time t.

    Impredicativity: Living systems are "impredicative," meaning that their components depend on the system as a whole for their existence and function, and vice versa. Mathematically, this involves defining something in terms of a totality to which it belongs, which is a key feature of the category-theoretic (M, R) models but generally avoided in classical, reductionist approaches to mechanics.
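    The feedback/feedforward contrast in the summary above can be sketched in a few lines of code. This is my own toy illustration, not Rosen's formalism: the names, dynamics, and gains are all invented for the example. A reactive controller corrects only an error that has already occurred, so it perpetually lags; an anticipatory controller consults an internal predictive model and pre-sets its state to the model's forecast, "pulling the future into the present".

```python
# Toy contrast between a reactive (feedback) and an anticipatory
# (feedforward) controller. All names and dynamics are illustrative only.

def environment(t):
    """External quantity the system must track: grows steadily."""
    return 2.0 * t

def reactive_step(state, t):
    """Feedback: correct only the error that has ALREADY occurred."""
    error = environment(t) - state      # error at the present moment
    return state + 0.5 * error          # partial correction, always lagging

def anticipatory_step(state, t, model):
    """Feedforward: act now on the model's prediction of the next state."""
    predicted = model(t + 1)            # "pulling the future into the present"
    return state + (predicted - state)  # pre-set to the predicted state

# Here the anticipatory system's internal model happens to match its world.
model = environment

reactive, anticipatory = 0.0, 0.0
for t in range(5):
    reactive = reactive_step(reactive, t)
    anticipatory = anticipatory_step(anticipatory, t, model)

print(environment(5) - reactive)      # prints 3.875: feedback still lags
print(environment(5) - anticipatory)  # prints 0.0: feedforward has caught up
```

    The point of the sketch is only the structural difference: the reactive update never references anything beyond time t, while the anticipatory update is driven by a model's claim about t + 1, which is exactly the distinction Rosen says classical rate equations cannot express.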
  • How LLM-based chatbots work: their minds and cognition
    Whatever "prospective habit" is actually supposed to mean, aren't all sorts of habit based in past information?
    — Metaphysician Undercover

    Yep. All of them by definition. But that misses the point. Which is what evolution was tuning the brain to be able to do as its primary function.

    Humans have language and so can socially construct a habit of reconstructing their “remembered past”. But this is a narrative trick of inserting ourselves back into a place and time and seeing what prospective state of sensory expectation that stirs up.

    As I also said, we can recognise. We can associate. Those are part of the natural architecture of the animal brain that must achieve the basic task of living in the immediacy of the passing moment. The mind needs to make the predictive connections. Hear a bark, expect to see the dog. See the dog and react with a rush of adrenaline so you are ready to run away.

    So past experience is of course stored in the form of a useful armoury of reactive habits. The problem comes when people expect the brain to have been evolved to recollect in that autobiographical fashion. And so it will only be natural that LLMs or AGI would want to implement the architecture for that.

    But I’m warning that the brain arose with the reverse task of predicting the immediate future. And for the reverse reason of doing this so as then not to have to be “conscious” of what happens. The brain always wants to be the least surprised it can be, and so as automatic as it can manage to be, when getting safely through each next moment of life.

    You have to flip your expectations about nature’s design goals when it comes to the evolution of the brain.
  • How LLM-based chatbots work: their minds and cognition
    Thanks for confessing. But don’t you see your problem? If your AI reply was suddenly talking sense, then you can’t understand what it was saying either.

    But yeah, sounds like AI in general is agreeing with my idea that working memory is related to Peirce’s enactive–semiotic theory. Thanks!
    — Harry Hindu

    And how could this be “your idea” if you were arguing against memory as prospective habit and instead claiming it to be past information?
  • Cosmos Created Mind
    You have similarly become frustrated with me when I have refused to answer yours until you answer mine, ad nauseum.
    — bert1

    If you had a counter argument, you would make it. You don’t. So we get the lame excuses. And your obsession with me continues. :up:
  • Cosmos Created Mind
    Sure. But then there is mind-like in substantial terms and mind-like in structuralist terms.

    It is the second that I find to be of value in Peirce’s work. It is his writing on the science of logic, as he called it, that I lean on. The earlier scholastic realism of the essential logical structure of nature - the irreducible triad of a self-organising system, or semiosis - that he laid out before he got into the confusions of his objective idealism as developed in his series of Monist articles.

    As you will know, the Monist was about Peirce’s only income at a time of extreme financial hardship. And the Monist was founded by a wealthy industrialist for the stated purpose: “The Journal is devoted to the work of conciliating Religion with Science” through the framework of monism.

    As AI says….

    The journal's monism was a unique "religion of science" that conceived of the ultimate "oneness" as "God, the universe, nature, the source, or other names".

    The journal was influenced by the German Monist League, founded by Ernst Haeckel, which was explicitly a "Religion of Science" that revered "divinized Mother Nature".

    Peirce had a friend who introduced him to editor Paul Carus, which led to him publishing at least 14 articles in The Monist, including his major metaphysical series in the early 1890s.

    So yes, Peirce definitely had his theistic leanings. He was already inclined towards arriving at his objective idealism. But also he needed the dosh and was writing for a specific audience.

    But you will read the Peirce congenial to your views and I will continue on with the “mind-like structure” that biosemiotics could understand and develop in a way that nicely fits the facts of life and mind science.
  • Cosmos Created Mind
    The gist of this is to turn the attention to the nature of one's own lived experience, rather than wondering what must have existed 'before the big bang' or in terms of poorly-digested fragments of scientific cosmology.
    — Wayfarer

    Which is in a nutshell Peirce’s great achievement. He went back to phenomenology to discover its epistemic structure - its natural logic. And that became the ground for semiotics as the resulting ontological adventure.
  • Cosmos Created Mind
    Physical science, though, begins after the Planck time-gap of the Big-Bang-beginning itself. At which time the metaphysical Laws of Thermodynamics were already in effect.
    — Gnomon

    Don't forget that the Planck scale was as hot as it was small. As full of quantum momentum uncertainty as it was quantum positional certainty. As energy density curved as it was spatiotemporally flat.

    So the Planck scale was the scale at which a unit 1 symmetry was broken. Counterfactuality was itself the thing that was born as now there could be the positive difference of a Cosmos that was doubling itself in one direction, and halving itself in the other. Doubling its spatiotemporal extent and therefore halving its thermal content.

    Entropy could be generated as now there was a broken symmetry growing in a reciprocally driven fashion. The cooling was slowing the expansion. But the expansion was still inertially being driven by that initial energy density.

    So if you want to talk about an incipient drive or nascent power, you have to remember that the Big Bang was as maximally hot as it was maximally small. And all it then does is grow in a dichotomous or reciprocal fashion where it flies off towards its Heat Death – the inverse state of becoming as large as it is cold.

    The Big Bang – as an application of thermodynamics – is doing the very clever self-creating thing of digging its own heat sink. It is throwing its newborn self into its own self-dug grave.
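    For anyone who wants the reciprocal scaling spelled out, it is just the standard radiation-era relation from textbook cosmology, stated here as a gloss on the above rather than a derivation:

```latex
% Radiation-dominated expansion: energy density and temperature
% fall reciprocally with the scale factor a(t).
\rho_{\mathrm{rad}} \propto a^{-4}, \qquad T \propto a^{-1}
% So doubling the scale factor halves the temperature. The Planck-scale
% start (minimal a, maximal T) runs off towards the Heat Death
% (maximal a, minimal T) as its exact inverse.
```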

    But where did the original Information (natural laws?) come from, that caused a living & thinking Cosmos to explode into existence?
    — Gnomon

    Pfft. That is mysticism and not serious metaphysics.

    Let's get back to Nature as Anaximander, Heraclitus, Aristotle and others were trying to figure it out. With some considerable success.
  • Cosmos Created Mind
    So elsewhere you do try to stack up some sort of causal account. We can use that given you are only going to take the "Vee ask der questions here!!" approach in any "philosophical" discussion with me.

    In reply to...

    I'm not saying they re not conscious but a primitive immature consciousness and so his experience is... very simplistic and immature.
    — Raul

    You answered...

    Oh sure. I don't disagree with that. However I do think it entails that consciousness does not admit of degree. 'Primitive immature consciousness' is still consciousness. Complicated mature consciousness is still consciousness. The consciousness of an adult is the same kind of consciousness that a baby has, namely the kind of consciousness that permits experiences to happen at all. It is that very simple basic capacity to experience that is the subject of discussions in philosophy. It is in that sense that I don't think the concept of consciousness admits of degree.

    EDIT: To put it another way, the adult is no more or less able to have experiences than the child. They do differ in the kind of experiences they can have. But that's a difference of content, not a difference of consciousness.

    EDIT: To put it a third way, the hard problem is located at the difference between no experience happening at all, and some experience, no matter how 'primitive' it is.

    So this gives us some glimpse of your hidden argument. And what jumps out is the need to explain how one can call on "the primitive" as a concept that one could measurably substantiate.

    Biosemiosis offers its primitive in Pattee's notion of the "configurable switch". So a completely concrete argument is being made. And about ten years ago, biophysics added the evidence to substantiate the theory. So problem solved I say.

    Whereas we can see your completely question-begging approach to this issue of where the "epistemic cut" between mind and matter is to be found in Nature. Your approach is that it goes down at least as far as newborns and probably any level of living organism – which is thus far, perfectly biosemiotic.

    But then the hands start waving. As no biosemiotic cut-off point has been identified, you say well, no choice chaps, we got to roll on all the way down to fundamental particles. Or something. Mumble, mumble.

    That leaves nothing much to argue against as nothing much of any metaphysical import is being said.

    Here is an AI refresher on Pattee's epistemic cut, on which I've posted so often...

    Howard Pattee used the metaphor of a configurable switch (CS) to help explain how the non-physical realm of formal information can exert causal control over physical processes, a mechanism necessary to bridge his proposed "epistemic cut".

    The epistemic cut describes a fundamental, unavoidable boundary between the physical world (governed by continuous, rate-dependent, deterministic laws) and the symbolic/formal world (governed by discrete, rate-independent rules, such as descriptions or measurements).

    Key aspects of the switch metaphor:

    Arbitrary Control: A switch's physical construction is irrelevant to its function of simply being "on" or "off" in a circuit. Its operation is "arbitrary" with respect to the underlying physical laws of matter, yet it exerts control over the flow of electricity.

    Formal Prescription: The setting of the switch (e.g., open or closed, "on" or "off") is a formal, informational decision (a form of "prescriptive information") that dictates the path of physical events (the flow of current).

    Bridging the Divide: The "configurable switch" serves as a conceptual model for how a formal choice can be instantiated in physical reality, allowing the symbolic (e.g., genetic code instructions) to direct the material (e.g., protein synthesis in a cell) without violating physical laws, but rather by applying non-integrable constraints.

    The "switch" metaphor helps to illustrate the mechanism by which top-down, intentional control (the symbolic side) can interact with bottom-up, physical dynamics (the material side).
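    As a rough sketch of that switch idea (my own toy illustration, not Pattee's own formalism; the class and method names are invented): the symbolic setting is rate-independent and arbitrary with respect to the physics, yet it gates which path the rate-dependent dynamics takes.

```python
# Toy illustration of Pattee's "configurable switch": a rate-independent
# symbolic setting gates rate-dependent physical dynamics. Illustrative only.

class ConfigurableSwitch:
    def __init__(self):
        self.setting = "off"   # symbolic state: arbitrary w.r.t. the physics

    def configure(self, setting):
        """Formal, informational decision on the symbolic side of the cut."""
        self.setting = setting

    def flow(self, current):
        """Physical dynamics: which path is taken depends on the setting."""
        return current if self.setting == "on" else 0.0

switch = ConfigurableSwitch()
print(switch.flow(1.5))   # prints 0.0 - circuit open, no current flows
switch.configure("on")    # the symbolic setting changes, not the physics
print(switch.flow(1.5))   # prints 1.5 - same dynamics, different constraint
```

    Nothing about the string "on" follows from the laws governing the current; it is a non-integrable constraint applied to them, which is the whole point of the metaphor.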

    And here is an old post of mine about the biophysical evidence for this biosemiotic theory....

    On the transition from non-life to life

    Biophysics finds a new substance

    This looks like a game-changer for our notions of “materiality”. Biophysics has discovered a special zone of convergence at the nanoscale – the region poised between quantum and classical action. And crucially for theories about life and mind, it is also the zone where semiotics emerges. It is the scale where the entropic matter~symbol distinction gets born. So it explains the nanoscale as literally a new kind of stuff, a physical state poised at “the edge of chaos”, or at criticality, that is a mix of its material and formal causes.

    The key finding: As outlined in this paper (http://thebigone.stanford.edu/papers/Phillips2006.pdf) and in this book (http://lifesratchet.com/), the nanoscale turns out to be a convergence zone where all the key structure-creating forces of nature become equal in size, and coincide with the thermal properties/temperature scale of liquid water.

    So at a scale of 10^-9 metres (the average distance of energetic interactions between molecules) and 10^-20 joules (the average background energy due to the “warmth” of water), all the many different kinds of energy become effectively the same. Elastic energy, electrostatic energy, chemical bond energy, thermal energy – every kind of action is suddenly equivalent in strength. And thus easily interconvertible. There is no real cost, no energetic barrier, to turning one kind of action into another kind of action. And so also – from a semiotic or informational viewpoint – no real problem getting in there and regulating the action. It is like a railway system where you can switch trains on to other tracks at virtually zero cost. The mystery of how “immaterial” information can control material processes disappears because the conversion of one kind of action into a different kind of action has been made cost-free in energetic terms. Matter is already acting symbolically in this regard.
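    A quick sanity check on that 10^-20 joule figure, as my own back-of-envelope arithmetic using the standard (exact SI) Boltzmann constant: one unit of thermal jostling at body temperature is about 4 × 10^-21 J, so the typical bond and conformational energies of a few to ten kT do indeed sit at the 10^-20 J scale quoted above.

```python
# Back-of-envelope check that thermal energy in warm water lands at the
# ~1e-20 J scale. k_B is the Boltzmann constant (exact SI value).

k_B = 1.380649e-23   # J/K
T = 310.0            # K, roughly body temperature

kT = k_B * T
print(kT)        # ~4.3e-21 J: one "unit" of thermal jostling
print(10 * kT)   # ~4.3e-20 J: typical bond/conformational energies,
                 # i.e. the 1e-20 J convergence scale cited in the post
```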

    This cross-over zone had to happen due to the fact that there is a transition from quantum to classical behaviour in the material world. At the micro-scale, the physics of objects is ruled by surface area effects. Molecular structures have a lot of surface area and very little volume, so the geometry dominates when it comes to the substantial properties being exhibited. The shapes are what matter more than what the shapes are made of. But then at the macro-scale, it is the collective bulk effects that take over. The nature of a substance is determined now by the kinds of atoms present, the types of bonds, the ratios of the elements.

    The actual crossing over in terms of the forces involved is between the steadily waning strength of electromagnetic binding energy – the attraction between positive and negative charges weakens proportionately with distance – and the steadily increasing strength of bulk properties such as the stability of chemical, elastic, and other kinds of mechanical or structural bonds. Get enough atoms together and they start to reinforce each other's behaviour.

    So you have quantum scale substance where the emergent character is based on geometric properties, and classical scale substance where it is based on bulk properties. And this is even when still talking about the same apparent “stuff”. If you probe a film of water perhaps five or six molecules thick with a super-fine needle, you can start to feel the bumps of extra resistance as you push through each layer. But at a larger scale of interaction, water just has its generalised bulk identity – the one that conforms to our folk intuitions about liquidity.

    So the big finding is the way that contrasting forces of nature suddenly find themselves in vanilla harmony at a certain critical scale of being. It is kind of like the unification scale for fundamental physics, but this is the fundamental scale of nature for biology – and also mind, given that both life and mind are dependent on the emergence of semiotic machinery.

    The other key finding: The nanoscale convergence zone has only really been discovered over the past decade. And alongside that is the discovery that this is also the realm of molecular machines.

    In the past, cells were thought of as pretty much bags of chemicals doing chemical things. The genes tossed enzymes into the mix to speed reactions up or slow processes down. But that was mostly it so far as the regulation went. In fact, the nanoscale internals of a cell are incredibly organised by pumps, switches, tracks, transporters, and every kind of mechanical device.

    A great example is the motor proteins – the kinesin, myosin and dynein families of molecules. These are proteins that literally have a pair of legs which they can use to walk along various kinds of structural filaments – microtubules and actin fibres – while dragging a bag of some cellular product somewhere else in a cell. So stuff doesn’t just float to where it needs to go. There is a transport network of lines criss-crossing a cell with these little guys dragging loads.

    It is pretty fantastic and quite unexpected. You’ve got to watch this youtube animation to see how crazy this is – https://www.youtube.com/watch?v=y-uuk4Pr2i8 . And these motor proteins are just one example of the range of molecular machines which organise the fundamental workings of a cell.

    A third key point: So at the nanoscale, there is this convergence of energy levels that makes it possible for regulation by information to be added at “no cost”. Basically, the chemistry of a cell is permanently at its equilibrium point between breaking up and making up. All the molecular structures – like the actin filaments, the vesicle membranes, the motor proteins – are as likely to be falling apart as reforming. So just the smallest nudge from some source of information – in particular a memory as encoded in DNA – is enough to promote one outcome or the other. The metaphorical waft of a butterfly wing can tip the balance in the desired direction.
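    A rough sketch of why the nudge can be so small: at body temperature the thermal energy kT sits within an order of magnitude of the energies holding most nanoscale biological structures together. The bond-energy figures below are approximate textbook values I have supplied for illustration, not numbers from the post:

    ```python
    # Back-of-envelope comparison: thermal energy vs nanoscale bond energies.
    # The bond-energy multiples are rough, illustrative textbook values.

    k_B = 1.380649e-23      # Boltzmann constant, J/K
    T = 310.0               # body temperature, K
    kT = k_B * T            # ~4.3e-21 J, i.e. ~4.3 pN*nm

    # Approximate energy scales, expressed as multiples of kT
    energy_scales_kT = {
        "thermal fluctuation (kT)": 1.0,
        "hydrogen bond":            5.0,    # roughly 2-8 kT
        "ATP hydrolysis":           20.0,   # ~20 kT under cellular conditions
        "covalent C-C bond":        140.0,  # far above kT: structure, not switching
    }

    for name, multiple in energy_scales_kT.items():
        print(f"{name:26s} ~ {multiple:6.1f} kT = {multiple * kT:.2e} J")
    ```

    The point of the sketch is that the first three rows all sit within a factor of twenty of the thermal background, so assembly and disassembly are genuinely poised, while covalent bonds stay safely out of reach of the jostling.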

    This is the remarkable reason why the human body operates on an energy input of about 100 watts – what it takes to run a light bulb. By being able to harness the nanoscale with a vanishingly light touch, it costs next to nothing to run our bodies and minds. The power density of our nano-machinery is such that a teaspoonful would produce 130 horsepower. In other words, the actual macro-scale machinery we make is grotesquely inefficient by comparison. All that effort for so little result, because cars and food mixers work far away from the zone of poised criticality – the realm of fundamental biological substance where the dynamics of material processes and the regulation of informational constraints can interact on a common scale of being.
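    The 100 watt and 130 horsepower figures can be sanity-checked with some unit conversion. The teaspoon volume and whole-body volume below are my own rough assumptions for illustration:

    ```python
    # Unit-converting the quoted figures: 130 hp from a teaspoon of molecular
    # motors vs ~100 W for the whole body. Volumes are rough assumptions.

    HP_TO_WATTS = 745.7
    TEASPOON_M3 = 5e-6        # ~5 mL
    BODY_VOLUME_M3 = 0.07     # ~70 kg at roughly the density of water

    motor_power_w = 130 * HP_TO_WATTS             # ~9.7e4 W
    motor_density = motor_power_w / TEASPOON_M3   # W per m^3 of nano-machinery
    body_density = 100.0 / BODY_VOLUME_M3         # whole body averages ~100 W

    print(f"molecular machinery: {motor_density:.2e} W/m^3")
    print(f"whole body average:  {body_density:.2e} W/m^3")
    print(f"ratio: ~{motor_density / body_density:.1e}x")
    ```

    On these assumptions the nano-machinery's power density comes out millions of times higher than the body's average draw – which is the post's point: the machinery is ferociously capable, yet it runs so close to equilibrium that it is barely being used.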

    The metaphysical implications: The problem with most metaphysical discussions of reality is that they rely on “commonsense” notions about the nature of substance. Reality is composed of “stuff with properties”. The form or organisation of that stuff is accidental. What matters is the enduring underlying material which has a character that can be logically predicated or enumerated. Sure there is a bit of emergence going on – the liquidity of H2O molecules in contrast to gaseousness or crystallinity of … well, water at other temperatures. But essentially, we are meant to look through organisational differences to see the true material stuff, the atomistic foundations.

    But here we have a phase of substance, a realm of material being, where all the actual many different kinds of energetic interaction are zeroed to have the same effective strength. A strong identity (as quantum or classical, geometric or bulk) has been lost. Stuff is equally balanced in all its directions. It is as much organised by its collective structure as its localised electromagnetic attractions. Effectively, it is at its biological or semiotic Planck scale. And I say semiotic because regulation by symbols also costs nothing much at this scale of material being. This is where such an effect – a downward control – can be first clearly exerted. A tiny bit of machinery can harness a vast amount of material action with incredible efficiency.

    It is another emergent phase of matter – one where the transition to classicality can be regulated and exploited by the classical physics of machines. The world the quantum creates turns out to contain autopoietic possibility. There is this new kind of stuff with semiosis embedded in its very fabric as an emergent potential.

    So contra conventional notions of stuff – which are based on matter gone cold, hard and dead – this shows us a view of substance where it is clear that the two sources of substantial actuality are the interaction between material action and formal organisation. You have a poised state where a substance is expressing both these directions in its character – both have the same scale. And this nanoscale stuff is also just as much symbol as matter. It is readily mechanisable at effectively zero cost. It is not a big deal for there to be semiotic organisation of “its world”.

    As I say, it is only over the last decade that biophysics has had the tools to probe this realm and so the metaphysical import of the discovery is frontier stuff.

    And indeed, there is a very similar research-led revolution of understanding going on in neuroscience where you can now probe the collective behaviour of cultures of neurons. The zone of interaction between material processes and informational regulation can be directly analysed, answering the crucial questions about how “minds interact with bodies”. And again, it is about the nanoscale of biological organisation and the unsuspected “processing power” that becomes available at the “edge of chaos” when biological stuff is poised at criticality.

    Graph of the convergence zone: Phillips, R., & Quake, S. (2006). The Biological Frontier of Physics. Physics Today, 59.

    phillips-quake-2.jpg
  • Cosmos Created Mind
    You never give your causal account.bert1

    I've given it way too many times.

    ...again and again, you decline interrogation.bert1

    Do I hear the furious stamping of the world's tiniest jackboots? :broken:
  • Cosmos Created Mind
    No. This is a philosophy forum.bert1

    :rofl: :rofl: :rofl:
  • How LLM-based chatbots work: their minds and cognition
    If all signals are lagged, won't it subjectively seem like you are living in the moment? The perception of lag seems to require that some signals are noticably more lagged than others.hypericin

    The lag is the reason why athletics rules it a false start if someone so much as twitches on their blocks within a tenth of a second of the gun going off. It isn't physically possible for the information from the ear to be processed by the brain and initiate the go signal any faster.

    So this is a real thing. And then sports science shows that reacting to more complex situations – like a ball bouncing weirdly off a divot – takes over a fifth of a second. In slow motion, the best cricketers only start to adjust the downward swing of their bat a fifth of a second after the ball has bounced.

    Then for conscious level or voluntary level reactions, it takes half a second for a state of reorientation to form. This was the surprise that came out of Benjamin Libet's research.

    So in every moment we are dealing with a variety of lags depending on the level of processing demanded. We have our quickest reflexes, our somewhat slower learnt sensorimotor habits, and our remarkably tardy attention-level acts of reorientation.

    We can indeed dissect all these different integration timescales that are nested in every "moment of consciousness" – psychophysics has a vast literature on that. But the point here is why it would seem we are also designed not to notice all this frantic paddling of duck feet below the surface. As you claim to be the case when you spin and then open your eyes.

    The design principle, as I explained, is what would be the point of you even noticing and remembering all this? Nature didn't design you for introspection but for action. It would be a very inefficient thing to waste your attentional resources on noticing the clunky underlying structure of any act of cognition when the basic drive of the brain is to instead automate all cognition as much as possible. To remove the compute burden by a hyper-effective ability to predict and habituate as much as possible in advance.

    And equally, what would be the point of having a working memory that is not the view of "what all happened in that one critical moment"? The memory you want to file away is the one with all the timing issues corrected and integrated. The memory of you smoothly smacking the ball away to the boundary with perfect timing yet again, and so not needing to start fiddling with all your established sensorimotor habits.

    Make someone wear inversion glasses and eventually they can relearn their visual habits and – to a good degree – not even notice that they are seeing everything "the wrong way up" anymore. The new view forced on them becomes normalised.

    Daniel Dennett's Consciousness Explained covered a lot of this ground. I don't think his own answer helped a lot. But he at least made all this a live issue in philosophy of mind (at a time when cognitive science was itself in the throes of strong computationalism and thus seemingly quite ignorant of the neurobiological constraints involved).

    An AI reminder on Dennett:

    Daniel Dennett used the "Stalinist vs. Orwellian" interpretations of certain perceptual phenomena (like the color phi phenomenon and metacontrast masking) to argue that there is no functional or empirical difference between a "perceptual revision" and a "memory revision" of experience. This "difference that makes no difference" was the linchpin of his argument against the idea of a single, central point in the brain where consciousness "happens"—what he called the "Cartesian Theater".

    The Two Interpretations

    Dennett applied these analogies to the problem of how our brains process information over time to create a seamless experience, using an example where two different colored dots flashed in sequence are perceived as a single dot moving and changing color mid-path:

    Orwellian View: The subject consciously experiences the actual, original sequence of events, but this memory is immediately and retrospectively edited (like the Ministry of Truth in 1984 revising history) to reflect a more logical sequence (the single moving, color-changing dot).

    Stalinist View: The information is edited before it ever reaches consciousness, with the final, "fully resolved" (but altered) content being the only thing presented to the mind (like the pre-determined verdicts in Stalin's show trials).

    The Core Point

    Dennett argued that both interpretations presuppose the existence of a "Cartesian Theater"—a single, identifiable finish line for all the information processing where the "moment of consciousness" definitively occurs. However, because both the Orwellian and Stalinist accounts can explain all the available data (from the subject's verbal reports to the third-person perspective of science) equally well, Dennett claimed the distinction between them is merely verbal.

    His conclusion was that since there is no empirically discernible or functionally important difference between an experience being "edited" before consciousness or "misremembered" immediately after a conscious moment, the very idea of a single, defined "moment" or "place" of consciousness is a red herring. This supports his Multiple Drafts Model, which proposes that consciousness is a continuous, decentralized process of parallel, multitrack editing and interpretation, not a single unified stream presented to an inner observer.
  • Cosmos Created Mind
    So stop being a lazy bugger and define what you mean by consciousness in a way that is relevant to how I treat it.

    Panpsychism is a brute fact claim rather than a causal account. So why do you badger me endlessly for my causal account except to again crow about your brute fact claim?

    You show no interest in what I say. And yet you won't leave me alone.
  • Cosmos Created Mind
    Why would they need some kind of neurosemiotic model to get to what I would want to call consciousness?bert1

    You say that what you call consciousness is accounted for by panpsychism – the simpleminded non-theory that matter is mind and mind is matter.

    They are just the one universal substance and so "co-exist" as a brute fact. End of discussion, as no discussion can find a difference worth the bother of cranking up a causal account.

    Panpsychism is simply an article of faith among its adherents. Its best metaphysical support is its adherents' claim that anything which smacks of a scientific theory or causal account fails before it starts. Consciousness is interior to material being, and so cannot be explained in exterior fashion. Mutter the magic incantation "the Hard Problem" in a profound and reverential tone and your job is done.

    If you are convinced by this epistemological position, any further words are wasted on you. You are not even listening. Pure faith protects your prejudice.
  • Cosmos Created Mind
    Why what? You mean, why is it worth even getting AI to answer the questions you could ask it yourself directly?

    Feel free to irritate machine intelligence all you like. Report back on what sense it can make of your fixed prejudices.
  • How LLM-based chatbots work: their minds and cognition
    How does the the Peircean Enactive and Semiotic Notion of Mind relate to the idea of working memory?Harry Hindu

    Why does your answer now seem so AI-generated? One minute you were going on about information, now you are making a coherent argument. :grin:

    I plugged the same prompt into Gemini and got….

    The Peircean enactive and semiotic notion of mind can be seen as a foundational philosophical framework that accommodates the function of working memory (WM) but reformulates it away from the traditional cognitive science view of a "storage" buffer. Instead of a place where static information is held, WM in this framework is an emergent property of ongoing, dynamic semiotic activity (semiosis) and the embodied interaction of an agent with its environment.

    Key Connections

    Process over Storage: Traditional models of working memory often focus on the storage and processing of information within the brain. The Peircean/enactive view shifts the focus to "semiosis" (sign-activity) as a dynamic, ongoing process of interpretation and reasoning. Working memory would thus be understood not as a static "store" but as the sustained, dynamic activation and management of signs within an integrated brain-body-environment system.

    Embodied and Extended Cognition: Enactivism emphasizes that cognition is fundamentally embodied and embedded in the environment, not just a set of brain processes. Working memory, from this perspective, involves the continuous looping of perception, action, and interpretation, possibly including external cues and bodily states, rather than being solely an internal, brain-bound mechanism.

    Role of Signs and Interpretation: For Peirce, all thought is in signs, and the mind is "perfused with signs". Working memory function—the ability to maintain and manipulate information over a short period—would involve the rapid generation and interpretation of specific sign types (e.g., indices, icons, symbols) during a task. The sustained "activity" in brain regions associated with WM is the physical manifestation of this ongoing, triadic sign action.

    Action-Oriented and Pragmatic: Peirce's pragmatism and the enactive approach are action-oriented. Cognition, including memory, serves the purpose of guiding action and making sense of the world to act effectively. Working memory, in this view, is essential for "working chance" or adapting to novelty by allowing an agent to experiment with different sign interpretations and potential actions within its environment.

    Consciousness and Metacognition: While Peirce argues that not all mind requires consciousness, he links psychological consciousness (awareness) to higher-level semiotic processes, or metasemiosis (the ability to analyze signs as signs). This metacognitive capacity, which is crucial for complex working memory tasks (like error correction or strategic planning), would be explained through the hierarchical organization of semiotic processes rather than just a specific memory buffer.

    In essence, the Peircean enactive-semiotic framework provides a richer, process-based, and embodied interpretation of the mechanisms and functions that current cognitive science models attribute to working memory, seeing it as an integral part of an agent's dynamic engagement with the world through signs.

    …but what am I to do if you are now responding in chatbot mode?
  • How LLM-based chatbots work: their minds and cognition
    The key difference is that the LLM's larger attentional range doesn't simply give them more working memory in a way that would curse them with an inability to forget or filter.Pierre-Normand

    That is the interesting difference then. Biology evolved an intelligence that is designed to move from moment to moment in the world. It imposes a serial process on thought and action. It is constantly and step by step breaking a symmetry in terms of what is sharply attended and what it ignores.

    This would be the reinforcement learning style that Rich Sutton makes a big deal of in his “bitter lesson” critique of LLMs.

    And LLMs instead exist in a frozen realm in which all their data exists at once. Prompts then come out of nowhere as their flashing points of view. Chunking that frozen realm of next-word prediction with context windows just adds constraints to that space of frozen data when replies are generated.

    So one moment in spacetime doesn’t relate to the next. And this could be regarded as either the feature or the bug. It could be the essential flaw in the LLM architecture, or the complementary style of processing that allows them to become a prosthetic extension to human intelligence.

    We are designed to live in the moment as “conscious” beings. But our idealised model of intelligence is instead this new “view from nowhere” where all possible thoughts are being thunk at the same eternalised time and place.

    Like the infinite randomly typing monkeys scenario, except there is a way to pluck out the single useful thought you seek from the infinite store at any moment of your choice.

    So the mechanisms compensating for their lack of embodiment (no sensorimotor loop, no memory consolidation, no genuine forgetting) are precisely what enables selective attention to more task-relevant constraints simultaneously, without the pathologies that genuinely unfiltered attention would impose on embodied cognition. The trade-offs differ, but both systems instantiate predict-and-selectively-attend, just implemented in radically different ways with different functional requirements.Pierre-Normand

    Yep. So the change here would be seeing this as a feature rather than a bug. We will never change our own cognition as it is rooted in a moment to moment existence.

    This is truly the case for animals who lack the scaffolding of linguistic structure. But even with linguistic structure creating an abstracted space of thought, we have only got so far in “living” in such a space of intelligence.

    We had an oral tradition that could organise a world with a past and future built into it. We could live in the moment, and yet also within an abstracted sense of being selves in the narrative structure of a tribe. Of existing in the forever frozen intellectual context of gods, ancestors, lineages, customs, friends and foes, rights and wrongs.

    Then we moved to a literate and numerate level of this abstracted view of our reality. A library could contain more books than we could ever possibly read. I used to love the British Library just knowing I could ask for any book that ever existed - even if it took three days for it to be retrieved from storage and delivered to my reading desk by clanking trolley and silent assistant.

    The library was replaced by the internet as an instant repository of everything humans could have said. You just had to prompt Google.

    And this is what LLMs now extend. Even if no one had ever written or said some thought, a prompt can predict what would have been found in a knowledgeable and organised discussion.

    So if this is the feature, then no need to debug it. And what does the same architecture mean when we continue on to a world automated under this principle? What is the robot that is an infinite database of all actions a human could imagine performing? Is that even something which can find the necessary training data in the way LLMs have done with purely linguistic behaviour?

    I think your points about working memory help focus the questions that are the most valuable to be asking. :up:
  • Cosmos Created Mind
    I note with relief it does not begin any paragraphs with 'So'.bert1

    :grin:

    This is panpsychism, which you have previously distanced yourself from.bert1

    Well no. Biosemiosis would say that only biological systems that model - that stand in some sign relation with their physical reality - are making meaningful relations with the world. And to get to what you would want to call consciousness, they would need some kind of neurosemiotic model.

    So biology is in a modelling relation mostly in the sense that it is running an intelligent relation with its own metabolism. And neurology is where an organism is in a modelling relation that is a self in relation to its wider environment.

    Thank you for getting help to write an intelligible post.bert1

    But what use was it if you just misinterpret it in your usual fashion, bending it to your prejudices and not getting the point at all?
  • Cosmos Created Mind
    Where in your reckoning does this point figure?Wayfarer

    Enactivism.
  • Cosmos Created Mind
    So make up your mind whether you agree or disagree with me at this general level. Then if you have some more particular point to make, make it. Present that argument.
  • Cosmos Created Mind
    And philosophy?Wayfarer

    Again, as I have told you so many times, I’m with Stanley Salthe in reviving natural philosophy as the argument against science’s reductionist turn. That is why we can agree on Scientism as being a bad thing, but then not agree that science is inherently non-philosophical.

    You have your hobby horse on this point. But I believe that is only because you don’t want to be constrained by real world facts. There are just inconveniently too many of them.

    AI summarising Salthe:

    Stanley Salthe's Argument

    Stanley Salthe, a theoretical biologist and complexity theorist, argues for a return to natural philosophy as a way to reintegrate the natural sciences and provide a more holistic understanding of the world. His main points include:

    Counteracting Fragmentation: Salthe contends that modern science has become excessively specialized and fragmented. Different disciplines, and even sub-disciplines within them, operate with their own specific paradigms and often fail to communicate effectively or see the bigger picture. Natural philosophy, with its broader scope, can serve as a unifying framework.

    Addressing Reductionism: He argues that a purely reductionist approach—breaking systems down to their smallest components to understand them—is insufficient for grasping complex, emergent phenomena like life and consciousness. Natural philosophy encourages a focus on holism, organizational hierarchies, and the relationships between levels of organization.

    Reintroducing a Philosophical Perspective: Salthe suggests that modern science often avoids or dismisses fundamental philosophical questions (e.g., questions about purpose, emergence, or the nature of existence) as being outside the realm of empirical science. A return to natural philosophy would re-legitimize these questions and reconnect scientific inquiry with broader humanistic concerns.

    A "Grand Narrative": He advocates for a more integrated, encompassing view of the world—a new "grand narrative" that acknowledges the emergent properties of complex systems and the directionality observed in nature (e.g., the flow of energy, the emergence of life and complexity).

    Are you telling me there is even one point on that list you disagree with? So quit bellyaching.
  • Cosmos Created Mind
    Nothing I said is in contradiction to what you have said, although the dimension your analyses always seem to omit is the existential.Wayfarer

    So now I’m guilty of not being a dysfunctionalist instead of being guilty of being a functionalist? :sweat:

    I’m also interested in the idea the biosemiotics puts back into science what Galileo left out, although that may not be of significance to you, given your interests mainly seem to be from a bio-engineering perspective, rather than the strictly philosophical.Wayfarer

    If by strictly philosophical, you mean free to just make shit up, then of course guilty as charged now. I don’t take that intellectual liberty. The facts constrain me.

    Notice that this elides 'biological processes' and 'matter' by conjoining them with the "/" symbol.Wayfarer

    Or instead underlines the metaphysical claim being made. Nature is dissipative structure. And biology continues that physicalist story at the semiotic modelling relation level.
  • How LLM-based chatbots work: their minds and cognition
    You are making no sense.

    Just ask yourself why you are so hung up on the story of a “picture” in the head. Why does this image of what is going on have such a vice-like grip on your imagination? To what extent does what you expect to see determine what you come away believing you indeed saw?
  • Cosmos Created Mind
    My tentative answer is that there is, at least, a kind of incipient drive towards conscious existence woven, somehow, into the fabric of the cosmos. And that through its manifest forms of organic existence, horizons of being are disclosed that would otherwise never be realised.Wayfarer

    Well biosemiosis has now turned all this from metaphysical speculation into firm science. What is woven into the initial conditions of the physical world is the incipient inevitability of its Second Law entropic drive running into a form of systemhood that can exploit its own loophole.

    This is Pattee’s point about the symbol grounding problem. And the solution that biophysics has since delivered in discovering the lucky coincidence of the “convergence zone” of physical forces that arises in room temperature water at the semi-classical nanoscale of organic chemistry.

    The problem for organisms that run on information is how a molecule can act as a message. And biophysics now tells us that the convergence zone is a place where all forms of energy arrive at a single narrow band of “currency exchange rates”. The cost of switching energy from one form to another becomes suddenly equivalent. And so an organism just has to pay the tiny extra cost of flipping some switch in a direction of its own beneficial choice.

    I’ve described this for you at least 10 times in the past. But in one ear and out the other I guess.

    But it says that the convergence zone effect was always going to be manifested by a Big Bang with the initial conditions that ours had. And then – not as a consequence of any entropic drive but due to the emergence of this “unexpected” entropic opportunity – life and mind suddenly evolved.

    Physics just needed to accidentally create the right habitat - something like the porous and mineral rich thermal vents of the ocean floor about 500 million years after the Earth’s crust started to stabilise - and boom. Life couldn’t help but get going as all it had to do was set up the most rudimentary self-organising metabolic loop and it would be off.

    So symbol processing was always going to arise if a convergence zone was always going to emerge, resulting in a scale of physics just begging for the next thing – a symbol processing mechanism – to take advantage of its free energy flow.

    If switchability was a thing – not a thing pure physics could do by itself, yet information could – then that is why life and mind seem both continuous with physics, but also a little … detached.
  • Cosmos Created Mind
    Indeed, functionalists do tend to end up defining 'consciousness' by fiat as a function, just as they have with 'life'. But in doing so making the concept irrelevant to the philosophy and what people actually mean by 'consciousness'bert1

    Why not check your terms before trotting out the nonsense.

    AI as the impartial observer says…

    The core difference is that functionalism views neurocognition and consciousness purely in terms of their computational or causal roles (what they do), while biosemiotics views them as processes of meaning-making and interpretation that are intrinsic to all living systems, emphasizing the biological context and the subjective "umwelt" (experienced world) of the organism.

    Functionalist Approach

    Focus on Causal/Functional Roles: Functionalism defines mental states (like pain, belief, or consciousness) by their causal relations to sensory inputs, other internal mental states, and behavioral outputs. It is unconcerned with the specific physical substrate (e.g., neurons, silicon chips) that carries out these functions, a concept known as "multiple realizability".

    Analogy to Software: The mind is often compared to software running on the brain's hardware. The essence is the functional organization or program, not the physical material.

    "Easy Problems": Functionalism is good at addressing the "easy problems" of consciousness, such as how the brain processes information for detection, discrimination, and recognition.

    Third-Person Perspective: It primarily relies on an objective, third-person perspective, seeking to explain functions that could, in theory, be performed by any suitable system, including a sufficiently advanced computer.

    Consciousness as an Outcome: Consciousness is generally seen as an emergent property or a functionally integrated pattern of the brain's activity, important for adaptive behavior and survival.

    Biosemiotic Approach

    Focus on Meaning-Making (Semiosis): Biosemiotics argues that life is fundamentally a process of sign production, interpretation, and communication, which is the basis for meaning and cognition. It studies pre-linguistic, biological interpretation processes that are essential to living systems, from bacteria to humans.

    Embodiment and the "Umwelt": This approach emphasizes that meaning is actively constructed by an embodied agent within its specific environment, or Umwelt (subjective, self-experienced surrounding world). The mind is not just in the brain but deeply integrated with the body and its interactions with the world.

    Addresses the "Hard Problem": Biosemiotics attempts to address the "hard problem" of subjective experience (qualia) by positing that proto-experience or a basic level of awareness is a fundamental aspect of all matter/biological processes, which then expands to higher degrees of consciousness through complex, hierarchical information processing in the brain.

    First-Person Perspective: It incorporates a necessary first-person, internal perspective, recognizing the subjective, felt qualities of experience that are difficult to capture with a purely functional, third-person approach.

    Causality and Context: It introduces different modes of causality, including "sign causality" (meaning-based influence) and a focus on biological context (pragmatics), which are often overlooked in standard functionalist models that rely primarily on efficient (mechanistic) causes.

    In essence, functionalism abstracts away from the biological substrate to focus on the logical architecture of cognition, while biosemiotics insists that biological context, embodiment, and inherent meaning-making processes are crucial to understanding consciousness and neurocognition.

    So with less effort than it takes for you to make one of your little three line posts, you could have sorted out your confusion even before you started.
  • How LLM-based chatbots work: their minds and cognition
    I would say it is more like a state of expectancy and visual or auditory priming.

    It takes about half a second to build up into something of any concreteness and is also fading within half a second. Just as the anticipation-based processing model would predict.

    There are all kinds of differences that can be introspectively noticed once you know what it is that you ought to be looking out for.

    Going on what people report, I would say the strength of my own mental imagery is pretty average. My daughter by contrast has hyperphantasia judging by her uncanny art skills, synesthesia and richly detailed memory.

    But then she has dyscalculia or number blindness. And I have the opposite in finding it very easy to visualise complex and intertwining patterns of relations.

    So all brains have the same general genetic plan. But the balance of bottom-upness and top-downness in our circuitry can vary even down to the level of the brain’s different functional modalities. I win in one way, and she in another.

    It's not exactly like listening to an actual song or seeing an actual sunset. Why do you ask? Are you not capable of playing a song in your mind or imagining a sunset?RogueAI

    OK. So you are not really interested in this discussion at all. :up:
  • How LLM-based chatbots work: their minds and cognition
    There is wide variety in individuals from those who claim no imagery at all to those who claim photographic strength.

    AI says:

    The ability to form mental images exists on a spectrum, from a total absence known as aphantasia to exceptionally vivid, "photo-like" imagery called hyperphantasia. Variations in this ability stem from individual differences in brain connectivity, specifically the balance and communication between frontal and visual processing areas.

    The Neurological Basis

    The strength of mental imagery is primarily linked to the level of activity and connectivity within a brain network spanning the prefrontal, parietal, and visual cortices.

    Visual Cortex Excitability: Individuals with strong mental imagery (hyperphantasia) tend to have lower resting-state excitability in their early visual cortex (V1, V2, V3). This lower baseline activity may reduce "neural noise," resulting in a higher signal-to-noise ratio when top-down signals from higher brain regions attempt to generate an image, thus producing a clearer mental picture. Conversely, those with high visual cortex excitability tend to have weaker imagery.

    Frontal Cortex Activity: The frontal cortex plays a key role in generating and controlling mental images. Stronger imagery is associated with higher activity in frontal areas, which send "top-down" signals to the visual cortex.

    Connectivity: Hyperphantasics show stronger functional connectivity between their prefrontal cortices and their visual-occipital network compared to aphantasics. This robust communication allows for more effective, voluntarily generated visual experiences.

    Dissociation from Perception: While imagery and perception share neural substrates, they are dissociable. Aphantasics may have normal visual perception but cannot voluntarily access or generate these stored visual representations in their "mind's eye".

    Individual Differences and Experience

    Aphantasia: Affecting an estimated 2-4% of the population, individuals with aphantasia cannot, or find it very difficult to, voluntarily create mental images. They often rely on verbal or conceptual thinking strategies and may be more likely to work in STEM fields.

    Hyperphantasia: Found in about 10-15% of people, this condition involves mental imagery as vivid as real seeing. Hyperphantasia is associated with increased emotional responses (both positive and negative) and may be linked to creative professions and conditions like synesthesia.

    So as I have argued, the brain has a hierarchical organisation where what we experience is a combination of top-down intentionality and expectation, and then bottom-up sensory input.

    The brain’s problem is that it takes time for neurons to conduct their signals. So to be conscious “in the moment” in the way it feels like we are, there is no other architectural solution but to attempt to predict the world in advance. Then the brain only needs to mop up its errors of prediction.

    So the brain needs to be generating its mental expectancies at least half a second ahead. Just add up all the neurons that need to get connected to warm up a state of informed expectancy and half a second is what it takes.

    But that is the high-level attentional preparation. We then also have our store of well-prepared motor habits that simply emit their learnt responses rather than having to “process” them as novel reactions, as is the case with attentional preparation. And these habits can still react to sensory information coming in up to a fifth of a second before the moment of action in question.

    So we get ready to return a tennis serve. Half a second out, we are thinking about getting balanced and ready. A fifth of a second out, we have seen enough of the ball toss, the body turn, the arm beginning to swing, to be already subconsciously expecting which way to lunge and pretty much where the ball is going to land. But after that, no further information is shaping the service return. We are swinging hard through the predicted zone of contact. If the ball skids off the court in some unpredicted fashion, we likely frame the ball.
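    The two-timescale logic of the tennis example can be sketched in code. This is purely illustrative: the function, event format, and lead times (half a second for attentional preparation, a fifth of a second for habitual correction) are assumptions lifted from the description above, not a model of any actual motor-control system.

```python
# Illustrative sketch of a two-timescale controller: slow attentional
# preparation sets a coarse prior, fast habit circuits refine it, and
# anything arriving later than the habit window is simply too late.

def plan_return(events, contact_time):
    """Pick a swing target using only cues that arrive early enough."""
    HABIT_LEAD = 0.2  # s: last moment a cue can still shape the response

    # Coarse prior formed half a second out: just get balanced, aim midcourt.
    target = "midcourt"
    for t, cue, predicted_zone in events:
        if t <= contact_time - HABIT_LEAD:
            # Habit circuits can still use this cue to refine the prediction.
            target = predicted_zone
        # Later cues are ignored: the swing is already ballistic.
    return target

# Ball toss and body turn arrive in time; the late skid does not.
events = [
    (0.0, "ball toss", "backhand corner"),
    (0.25, "arm swing", "wide forehand"),
    (0.45, "ball skids", "at the body"),  # too late to use
]
print(plan_return(events, contact_time=0.5))  # swings through "wide forehand"
```

    The unpredicted skid at 0.45 s falls inside the ballistic window, which is exactly the framed-shot scenario.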

    Thus in general, we understand the architecture of neurocognition. And we explain phenomenology like mental imagery and flubbed tennis returns in those terms.

    Evolution and development always result in some kind of bell curve of individual difference. And that is as true for imagery as it is for hand-eye coordination.

    But then all individuals share the same real-world problem: “consciousness” can’t be some instantaneous state of experiencing the world as it happens. That is not even physically possible. The feeling that we were “there” - our eyes on the ball rather than our eyes focused on where they expected the ball to be in a fifth of a second or so - is just part of the clever illusion. A fact of everything being coordinated in an integrated fashion, even if it is a complex hierarchy of forward predictions catching up with the retrospective confirmations that come through after the fact.

    It seems like we are conscious in a simple direct way. But that is why we have science to tell us that is bollocks.
  • How LLM-based chatbots work: their minds and cognition
    I guess all science must bow to your greater expertise. :up:
  • How LLM-based chatbots work: their minds and cognition
    The current models have 128k to 2-million-tokens context windows, and they retrieve relevant information from past conversations as well as surfing the web in real time, so part of this limitation is mitigated. But this pseudo-memory lacks the organicity and flexibility of true episodic memories and of learned habits (rehearsed know-how's). Their working memory, though, greatly surpasses our own, at least in capacity, not being limited to 7-plus-or-minus-2 items. They can attend to hundreds of simultaneous and hierarchically nested constraints while performing a cognitive task before even taking advantage of their autoregressive mode or response generation to iterate the task.Pierre-Normand

    I meant to comment on the supposed limits of human working memory. But now that I have mentioned how the brain is as much about forgetting and ignoring and suppressing and habituating as it is about remembering and attending and spotlighting and responding with creative uncertainty, you can see how this working memory bug is the feature.

    Being a natural system, the brain is organising dialectically or dichotomistically. A unity of its opposites.

    So it is always about the pairing of extremes that is then balanced in productive fashion. It is the triadic thing of a vagueness or “blooming, buzzing confusion” being broken by some dichotomising pair of analytical limits, and that then becoming a hierarchically organised Peircean thirdness - a state of local-global, or upwards-downwards, bounded and integrated order.

    So why do we need a tiny narrow sharp spotlight of attention with its pitiful span of just a few items? Why is so much left unattended, unregistered, unremembered, brushed off to the periphery, the sidelines, of any “processed” moment of consciousness?

    Well the tip of the spear has to be sharp to hit its crucial point.

    If - in Bayesian Brain fashion - we can ignore almost everything that happens (as it has in advance been met with a sigh of predictability and a metaphorical shrug of the shoulders) then this reality pre-filtering ensures we only respond to what matters. And also only hang on to the memory traces of what has been found to have mattered during some day.

    If it enters working memory, the hippocampus and entorhinal cortex can keep that trace going for enough hours for the cortex to be encouraged to grow it into some assimilated pattern that could last a lifetime. It takes time to grow those brain connections in their right places for long-term storage. So this handoff from the spotlight of attention to the ancient vaults of memory is a necessary hierarchy of steps with its own neuroanatomy.

    And again, that is a feature and not a bug. Why hurry to fix a memory when what matters is to integrate that memory into a vast store of useful “memory habit”. An associative network which closes the cognitive loop by generating our future expectations of how much of any next moment in time we can afford to just ignore and so not spoil our well-tuned cortical structure.
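    The Bayesian pre-filtering described above - ignore what was predicted, attend to and retain what surprised - can be sketched minimally. All names, probabilities, and the threshold here are hypothetical illustrations, not parameters from any actual model.

```python
# Illustrative sketch of surprise-gated attention: only events with a
# large enough prediction error make it into working memory, and only
# those traces are candidates for slow cortical consolidation.

def filter_day(events, expected_probs, surprise_threshold=0.5):
    """Keep only the events that violated expectation strongly enough."""
    working_memory = []
    for event, predicted_prob in zip(events, expected_probs):
        surprise = 1.0 - predicted_prob  # crude stand-in for prediction error
        if surprise > surprise_threshold:
            working_memory.append(event)  # the spotlight of attention
    # Hippocampal handoff: only these traces get hours of rehearsal and
    # a chance at becoming long-term "memory habit".
    return working_memory

day = ["commute", "lunch", "fire alarm", "email", "surprise visit"]
expected = [0.99, 0.95, 0.05, 0.90, 0.20]
print(filter_day(day, expected))  # ['fire alarm', 'surprise visit']
```

    The predictable bulk of the day is met with a metaphorical shrug and never burdens the memory system at all.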

    If anyone wants to build an AGI system, the principles of the brain are no great secret. But what modern humans really want to construct is the technology that amplifies and empowers their own now socially-constructed humanness.

    So just as feet are better than wheels, we can still want to create an artificial world where it is wheels that rule. And so while brains need to ignore, forget, dismiss, etc, to meet their essential design goals - and so brains are fantastically efficient at that - what humans living now at the level of the social superorganism need is the technology that automates our less natural talents, such as the storing and deploying of the vast amount of information which allows a civilisation to tick along as if it were conscious - or at least semiotically engaged - in this grand project.

    To scale the superorganismic state of the human condition, there needs to be the various prosthetics and crutches that technology can provide. And LLMs are that kind of thing. Well, perhaps.
  • How LLM-based chatbots work: their minds and cognition
    But I was always suspicious about what I recalled being genuine or accurate memories of what I had dreamed. It seemed to me they could just as easily have been confabulations.Janus

    It took me many months to figure it out myself, helped by Andreas Mavromatis's book Hypnagogia, a collection of phenomenological accounts.

    In recursive fashion, it is not until you develop correct expectations about the REM dreaming and even slow wave sleep rumination states that you can start to catch what is going on with any raw accuracy. It is the eyewitness effect issue.

    So have you ever noticed how you hold some flashing scene sharp like a snapshot? Nothing is moving. And yet we feel ourselves also to be panning, swirling, zooming in, zooming out. There is a sense of motion as vivid as the sense of a frozen moment about to dissolve into its next vaguely connected view. Different parts of the brain are doing their thing in a state of deep sensory deprivation. One generates a plausible visual image, another a plausible kinesthetic image. Yet the two are not connected.

    David Lynch was pretty accurate in capturing the general effect.

    confabulation may be seen not as a disability but as an ability―we call it imagination. Abductive and counterfactual thinking would be impossible without it.Janus

    Research showed that even just recalling memories makes changes to the memory traces. So recalling leads to rewriting and even relearning. Some favourite memory can become either more sharply embroidered, or more vaguely generalised, by the very act of recollecting it, or rather reconstructing it. It will be modified by being informed with whatever narrative we have begun to weave around it at that later moment.

    The eyewitness effect again.

    Based on what is certainly seeming to turn out to be another "folk" misunderstanding of how the mind, how memory, works. That said some "idiot savants" are claimed to have "eidetic memory".Janus

    Part of my research into memory was to read some fascinating autobiographies and studies of eidetic memories.

    Luckily AI can take my hazy recall of one such book and jump straight to the details….:razz:

    The woman who has written an autobiography about living with an extraordinary memory is Jill Price, author of The Woman Who Can't Forget. However, she is an author and school administrator, not a psychologist by profession.

    Key surprising elements of her perspective included:

    It was not a "superpower" but a burden: While many people might wish for a perfect memory, Price described hers as "non-stop, uncontrollable, and totally exhausting". She couldn't "turn off" the stream of memories, which interfered with her ability to focus on the present.

    Emotional reliving of the past: Memories, especially traumatic or embarrassing ones, came with the original, intense emotional charge, which didn't fade with time as it does for most people. This made it difficult to move past painful experiences or grieve effectively.

    Lack of selective forgetting: The normal brain's ability to filter out trivial information and strategically forget is crucial for healthy functioning, but Price lacked this "healthy oblivion". Everything, from major life events to what she had for breakfast on a random day decades ago, was preserved with equal detail.

    Difficulty with academic learning: Despite her extraordinary autobiographical recall, she struggled with rote memorization of facts or formulas that were not personally significant, finding school "torture". Her memory was highly specific to her own life experiences.

    An "automatic" and "intrusive" process: Memories were not intentionally summoned; they surged forward automatically, often triggered by dates or sensory input, like a "movie reel that never stops".

    Feeling like a "prisoner" of her past: She felt trapped by her continuous, detailed memories, which made it hard to embrace change or focus on the future.

    Ultimately, her experience highlighted to researchers the vital role of forgetting in a healthy and functional memory system, a realization that was surprising to the scientific community and the general public alike.

    So yet again, our expectations about AI are founded on the reverse of what the psychology tells us.

    The brain is for forgetting rather than remembering. So what terrible fate are we consigning AGI to if we ever get to constructing the Frankenstein monster caricature of a human mind? :gasp:
  • How LLM-based chatbots work: their minds and cognition
    Memory stores informationHarry Hindu

    Or, equally succinctly, memory generates it.

    What is it one “retrieves” from memory? An image. Or as the enactive view of cognition puts it….

    Ulric Neisser argued that mental images are plans for the act of perceiving and the anticipatory phases of perception. They are not "inner pictures" that are passively viewed by an "inner man," but rather active, internal cognitive structures (schemata) that prepare the individual to seek and accept specific kinds of sensory information from the environment.

    So our alternative views are quite distinct. It’s not just shit I’ve made up.

    Cats and dogs, and I would be willing to bet that any animal with an appropriately large enough cerebral cortex, dream.Harry Hindu

    And what do you know about dreaming? Ain’t it a brain generating imagery of hallucinatory intensity? We aren’t stimulating the memory banks and rousing flashes of our past. We are stimulating our sensation anticipation circuits and generating disconnected flashes of plausible imagery or suddenly appearing and disappearing points of view at a rate of about two a second.

    It seems to me that to get there would simply require a different program, not a different substance.Harry Hindu

    And there I was talking about the architectural principles. And famously, no one knows the program that an LLM runs. Just the gradient descent algorithm that sets up its basic self-organising architecture.

    And this architecture generates “hallucinations”. Which seems to be doing something right in terms of a step towards neurobiological realism. And interestingly, it runs on graphics cards. So a Turing machine may be the basis of the simulation. But we are already a long way from a regular von Neumann processing architecture.

    It wasn’t being called generative neural networks or inference engine architecture back in the day for no reason.
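    For what the gradient descent mentioned above amounts to at its simplest, here is a toy sketch: one linear neuron descending a squared-error loss. Everything here is an illustrative assumption - a single weight fitting y = 2x - not how any actual LLM is trained.

```python
# Minimal gradient descent: repeatedly nudge a weight downhill on the
# mean squared error, letting the "architecture" organise itself from data.

def train(xs, ys, lr=0.1, steps=100):
    w = 0.0
    for _ in range(steps):
        # Gradient of mean squared error for a one-weight linear model.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad  # step down the loss surface
    return w

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]  # generated by y = 2x, so w should approach 2.0
print(round(train(xs, ys), 3))  # 2.0
```

    Scaled up to billions of weights, this local error-nudging is all the “program” that is ever explicitly written; the behaviour that emerges is something else again.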

    But even though LLMs are moves in the direction of neurobiological realism, they are still just simulations. What is missing is that grounding in the physical and immediate world that an organism has. The absolute connection between the information and the dissipation that says any “selfhood” runs all the way down to level of the enzymes and other molecular machinery doing the job of “living”.

    A brain has stakes as there is a body it must have, a way of life it must live. Intelligence must flow through the body down to the immune system that can recognise any “wrong” molecules, the hormones that weave every cell into a concert of aligned intent.

    A GPU just gets installed in a data centre rack and is plugged into an electricity socket. Air conditioning stops it from melting itself. An LLM knows nothing about the foundations of its own existence. Although sure enough, ask it how all that works and it will parrot a human-like answer.

    Do you think it will suddenly also feel the horror of its fragile mortality when posed that prompt? Someone ought to ask Chat-GPT the question and see what self-interested response it feigns in simulated fashion.

    It seems to me, that for any of this to be true and factual, you must be referring to a faithful representation of your memories of what is actually the case. In other words, you are either contradicting yourself, or showing everyone in this thread that we should be skeptical of what you are proposing. You can't have your cake and eat it too.Harry Hindu

    I can certainly remember the gist of all that I have learnt about the neurobiology of memory. And that includes the fallibility and reconstructive nature of anything I claim as being factually accurate.

    So it is not that I don’t have the learnt habit of being able to talk myself back into what it would be like to relive past moments all over again as if they were unfolding anew. We can certainly recognise experiences that are familiar. The animal brain is perfectly good at recognising. My cat knows who I am from past experience when now I stand before her again, triggering whatever fresh state of anticipation my actions might suggest. A flow of associations.

    But recollection - the socialised habit of having an autobiographical memory - is dependent on the extra semiotic structure that language supplies. Becoming a walking memory bank is very much a human sociocultural ideal. Just about our highest achievement, as your school days might have you believe.
  • How LLM-based chatbots work: their minds and cognition
    [...] All of these fit your larger stance: absent embodied stakes and a robust self, the model’s “concerns” are prompt-induced priorities, not conative drives. The monitoring effect is then mostly about which goal the model infers you want optimized—“be safe for the graders” vs “deliver results for the org.”Pierre-Normand

    Sure. But if the clear command is given not to jeopardise human safety, then this suggests that the LLM is not properly under control. And the issue seems inherent if the system is free to make this kind of inference.

    So I agree this is not any kind of actual self-preservation drive. But it is a reason to worry about the rush to put this new technology out in the wild before how they are liable to behave has been fully checked out.

    What would Asimov have to say about all this? :razz:
  • How LLM-based chatbots work: their minds and cognition
    Ha ha, of course I believe it, it's obviously the truth.Metaphysician Undercover

    :up: