then life is not a process of copying, modeling or representing a world, — Joshs
As a result, they do not explain how certain processes actively generate and sustain an identity that also constitutes an intrinsically normative way of being in the world.” (Thompson) — Joshs
Why do you exclude modelling along with copying and representing? The biosemiotic approach of biologists like Pattee, Salthe, Rosen and many more stresses the need for the epistemic cut that indeed produces the closure of autonomy.
And Pattee shows how this goes back even to von Neumann’s mathematical treatment of self-reproducing automata. Rosen likewise provides strong mathematical arguments. So even just for genetic copying, the need for a model that is separated from what it replicates is axiomatic.
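To make that separation concrete, here is a toy Python sketch of von Neumann’s logic - the function names and data structures are my own illustration, not his formalism. The same description gets used in two incompatible ways: read for meaning by a constructor, and blindly duplicated as inert marks.

```python
# Toy sketch of von Neumann's self-reproduction logic. Names and data
# structures are illustrative only. The key move is the epistemic cut:
# the same description is used in two incompatible ways - interpreted
# as instructions, and copied as uninterpreted marks.

def construct(description):
    # Semantic reading: treat the description as instructions and
    # build the machine it encodes.
    return {"machine": description["blueprint"], "tape": None}

def copy(description):
    # Syntactic reading: duplicate the description without
    # interpreting a single mark of it.
    return dict(description)

def reproduce(parent_description):
    offspring = construct(parent_description)    # constructor reads the tape
    offspring["tape"] = copy(parent_description) # copier just duplicates it
    return offspring

parent = {"blueprint": "universal constructor + copier + controller"}
child = reproduce(parent)
assert child["tape"] == parent  # the description is inherited unread
```

The point is that nothing in the copying step knows what the description means. That is the epistemic cut in miniature.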
The problem with autopoiesis is that it was fuzzy on this aspect of the story. But there is a good grounding in semiotics for understanding how selfhood and autonomy must emerge in life and mind. It is because they are this new thing, a semiotic modelling relation. It is all founded on the logical necessity of making an epistemic cut between self and world so as to start acting as a self in a world.
The informational machinery of a code has as its first job the securing of a state of enactive organisation. It must have a model of the self in its world so as to organise its metabolic flows and repair its dissipating structures - Rosen’s (M,R) or metabolism-repair model of anticipatory systems. Then, after that enactive relationship is established, there might be some kind of machinery worth replicating by making transmissible copies of a set of genes. The ability to replicate is somewhat secondary - although a logical inevitability, because it allows biological development to be joined by biological evolution. And that is a powerful extra.
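For flavour, a toy sketch of the anticipatory logic - the numbers and rules are invented for illustration, and Rosen’s actual (M,R)-systems are stated in category theory, not procedures. The organism steers by what its internal model predicts, not by what has already gone wrong.

```python
# Toy anticipatory system in the spirit of Rosen - illustrative only.
# The organism carries a model of its own dissipation and repairs
# itself *before* the shortfall arrives, not after.

def internal_model(energy, steps_ahead=3):
    # Run the self-world model forward: modelled dissipation of one
    # unit of energy per future step.
    return energy - 1.0 * steps_ahead

def live(energy=10.0, horizon=20):
    for t in range(horizon):
        if internal_model(energy) < 2.0:  # anticipated shortfall, not actual
            energy += 3.0                 # repair/feed in advance
        energy -= 1.0                     # actual dissipation this step
        print(f"t={t:2d}  energy={energy:4.1f}")

live()  # energy never crashes because the model acts ahead of time
```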
Note how words and numbers are semiotic codes that first exist as a way of separating a self from its world. Children have minds that become logically organised as they learn language and become able to self-regulate - as was well understood by symbolic interactionism and Vygotskian psychology. Humans have a heightened sense of selfhood because they must socially construct themselves as actors in a cultural drama, and now in the modern era, actors in a techno-neoliberal drama (the world made by thinking in terms of numbers or pure quantification).
And then words and numbers become something that can be transmitted and copied - turned into information or inert symbols to be decoded - by being rendered as marks on a page or electronic fluctuations on a wire. Human culture developed the power to become copyable and thus fully evolvable - capable of the explosive change and growth that history shows once the digitised habits of writing and counting got started.
Oral culture is weakly transmissible. You had to be there to hear how the story was told and see how the gestures were used to really get the message. The machinery of copying was still more enactive than representational. It was not symbolic so much as indexical.
But with alphabet systems and numerals, along with punctuation and the sequestering of these marks in inert substrates - in the same way DNA is zipped up and inert, and so physically separated from the molecular storm it regulates - humans continued on to full-strength symbolism. Or a proper epistemic cut, where the transmissibility of information is separated from the interpretation or enaction of that information.
So you can see why huge confusion results from not being clear that syntax and semantics are two different things when we want to talk about “information” in some generalised way. An informational system - like a biological organism with genes and neurons, perhaps even words and numbers - is both enactive and representational. It is involved in both development (of a self-world modelling relation) and evolution (of a self-world modelling relation).
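To nail down the syntax/semantics point in the plainest possible way, here is a one-screen Python example. The byte string is arbitrary except that I picked it because it happens to encode pi as a 32-bit float.

```python
# One string of marks, three readings. The marks travel perfectly well
# on their own; what they mean depends entirely on the interpreting
# machinery brought to them.

import struct

marks = b"\x40\x49\x0f\xdb"                # pure syntax: copyable, inert

as_float = struct.unpack(">f", marks)[0]   # one interpreter sees ~3.14159
as_int = int.from_bytes(marks, "big")      # another sees 1078530011
as_text = marks.decode("latin-1")          # a third sees text garbage

print(as_float, as_int, as_text)           # same marks, three meanings
```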
As usual, there is always a dialectic. And academic camps spring up at either pole to defend their end as the right end.
Again, I stick with systems thinkers or hierarchy theorists who can frame things more coherently.
Enaction is about the first-person particularity of being in some actual selfish state in regard to the world. Representation is about what can be objectively copied and replicated so as to pass on the underlying machinery that could form such a particular state of world adaptedness.
Genes represent a generalised growth schedule and a list of essential molecular recipes that are the basic machinery for a body having an enactive modelling relation with its world. And genes are also in some active state of enaction when they are part of a body doing and feeling things as it indeed lives and transacts its metabolic flows.
In any moment of active selfish living existence, the DNA is unzipped and coated with all kinds of regulatory feedback signals so that it is functioning as the anchor to a vast cell and body-wide epigenetic hierarchy of “information”. The code couldn’t be more enactive.
And then the DNA is zipped tight, reduced to the frozen dialectic of sperm and ovum, mechanically recombined as now part of a different kind of story - one that couldn’t be more representational in being an inert process of information copying and the seeding of a next generation with the syntactic variety upon which the process of evolution depends.
If we talk about neurology or neurosemiosis, the stress is of course more on the enaction than the representation. Nature relies on genes to encode neural structure. So experience is something that can be both enacted and represented if you are dealing with simple intelligence in the form of ants or jumping spiders. Genes can specify the shape of the wiring to the degree that habits of thought are pretty much hard-wired.
But large-brained animals become more or less dependent on personal development or enaction. Thoughts, feelings and memories - some package of life experience that shaped the mind of a tiger or elephant - are information gained and lost. Only very general parts of being a tiger or elephant, as a self in some particular ecological niche, can be captured and transmitted as evolvable and representational information passed on to the next generation.
Humans became even more enactive and developmental as a large-brained species. Our babies are at a real extreme in being born with unformed circuitry awaiting the imprint of life experience - and hence the accumulation of untransmissible states of attentional response and newly forming habits of thought.
So genetics was strained to its outer limit in this tilt towards the enactive pole.
But then - hey presto - that paved the way for linguistic culture as a new higher level of semiotic code or information enaction/information representation. We could restore the balance between making minds and being minds with oralism, and then oralism’s continued evolution towards literacy and numeracy.
So neurosemiosis is sort of a gap in the story. It is where the baton gets passed as the genes get stretched to the limit and suddenly - with Homo sapiens - something more abstract, something arising out of social level systemhood, arises to continue the semiotic journey to a higher level of organisation.
This is why the neural code is so hard to find, and why we have patent idiocies like integrated information theory or quantum consciousness theories trying to fill the explanatory gap.
Biology can point to genes as the dual basis of enaction and representation, development and evolution. Social psychology can point to words and numbers in the same way. Brain scientists have to talk in terms of neural network principles to feel they are getting at what makes it all tick - a mind that can be both some particular enactive first-person state and also a genetically transmissible algorithm that a new generation of minds can implement.
Again, I return to the neuroscientists who are actually homing in on this understanding in the great hunt for the neural code - folk like Friston and Grossberg. It is easy to see why they are on the right track, scientifically speaking.
The neural code has to be understood not as a train of symbols but as a standard microcircuit design. A bit of computational machinery. An architectural motif. A transmissible algorithm that is the brain’s basic building block.
And the problem there is that Turing machine-based notions of neurology’s canonical microcircuit - the standard approach - are so far off the mark. The only people worth paying attention to are the ones who talk the language of anticipatory systems.
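To show what “a transmissible algorithm rather than a train of symbols” might look like, here is a minimal predictive-coding loop in Python - a generic Rao & Ballard / Friston-flavoured error-correction unit with invented parameters, offered as a sketch of the idea rather than anyone’s published model. The motif is the part that gets inherited; the settled state is the part that gets lived.

```python
# Minimal predictive-coding unit. The algorithm (the microcircuit
# motif) is the transmissible, representational part; the tuned state
# is the particular, enactive part.

def predictive_unit(inputs, weight=1.0, lr=0.1, state=0.0):
    for x in inputs:
        prediction = weight * state    # top-down guess at the input
        error = x - prediction         # bottom-up surprise signal
        state += lr * weight * error   # revise the state to quiet the error
        print(f"input={x:3.1f}  prediction={prediction:4.2f}  error={error:5.2f}")
    return state

# The same motif, stamped out everywhere and tuned by local experience:
predictive_unit([2.0] * 10)
```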