When we acknowledge that much of what we do is unconscious, we don't need to thereby posit sub-personal "agents" doing interpretation at the neural level. — Pierre-Normand
The point is that the true agency within the person lies at the subconscious level. As I said, we assign agency to consciousness, but that is a simplistic representation designed to facilitate the concept of moral and legal responsibility.
If you look at habits, you'll see that we move in a lot of ways which do not require conscious choice once the habit is developed. Walking, for example, does not require a conscious choice for each movement of the leg. After you decide to walk, the parts move without conscious choice, so this is where the true agency is: in the unconscious, which moves the parts without requiring conscious choice. Consciousness directs some activities, but the vast majority of the activities of the human body are internal and involuntary. Habits develop along the boundary between conscious and unconscious. Learning how to walk, for example, requires conscious effort to control unconscious activities, but once the activities are learned and practised they become united with the unconscious, no longer requiring the same conscious effort.
When we acknowledge that much of what we do is unconscious, we don't need to thereby posit sub-personal "agents" doing interpretation at the neural level. — Pierre-Normand
But if you consider biosemiotics to be somewhat accurate, then there must be interpretation being carried out at all the unconscious levels where signs or symbols are used. The issue now is that interpretation requires that decisions or choices of some sort be carried out according to some principles or rules. Therefore we really do need to posit sub-personal agents doing interpretation at the neural level.
The key is recognizing that interpretation isn't a mysterious prior act by some inner agent. Rather, it's the person's skilled responsiveness to signs enabled by neural processes but enacted at the personal level through participation in practices and shared forms of life. — Pierre-Normand
But if we accept biosemiotic principles, then we have inner interpretation, and therefore inner agency.
And crucially, it doesn't require internal mental representations either. It's direct responsiveness to what the environment affords, enabled by but not mediated by neural processes. — Pierre-Normand
Clearly we are not talking about "mental" representations at this level, but the same principles hold. There are signs; they must be interpreted; and interpretation requires agency.
On the other hand, we have linguistic affordances: socially instituted symbolic systems like spoken and written language, whose meaning-making capacity derives from normatively instituted practices that must be socially transmitted and taught, as you granted regarding writing systems. — Pierre-Normand
I believe that this is a misrepresentation of "meaning-making capacity". We are born with meaning-making capacity, and it extends throughout the biological realm. Spoken and written language, and social institutions, are just an extension of this preexisting meaning-making capacity, directed in a specific way, toward communion.
The social-normative dimension becomes indispensable specifically for sophisticated forms of communication. — Pierre-Normand
Yes, I agree with this. But the "social-normative dimension" is just one small aspect of a very expansive system about which we know very little. We, as conscious beings engaged in communication, look on this ability to communicate with each other as such a great thing, but in doing so we fail to recognize that the use of symbols at the other levels of biosemiotics is a far greater thing, and that the higher level, called communication, is completely dependent on the lower levels, which are far more substantial.
Likewise, LLMs aren't just decoding words according to dictionary definitions or algorithmic rules. — Pierre-Normand
I disagree. If LLMs are using more than algorithmic rules in "decoding", then show me what this "more" is and where it comes from.
Rather, the context furnished by the prompt (and earlier parts of the conversation) activates a field of expectations that allows the LLM (or rather the enacted AI-assistant "persona" that the LLM enables) to transparently grasp my request and my pragmatic intent. — Pierre-Normand
That's nonsense; the LLM does not grasp your intent. That this is true is clearly evident from the fact that you can lie to it or mislead it. Obviously it is not grasping your intent, or it could see through your misleading use of words and see that you are lying to it.
Rather, it comes from exposure to billions of human texts that encode the normative patterns of linguistic practice. — Pierre-Normand
Yes, that's all it is, an analysis of patterns. There is no grasping of your intent here. The fact is that human beings are educated in very standard, conventional ways, and therefore we have very similar habits of thinking. So the LLM can examine the patterns of billions of texts, and through rules of probability it can very easily produce texts which are imitative of standard conventional texts. This is not a matter of understanding intent; it is a matter of imitation. You know, it's like a parrot, but the parrot probably understands the intent of the human being better than the LLM does, because it observes the human's responses and relates to the human being as another living creature.
Through pre-training, LLMs have internalized what kinds of moves typically follow what in conversations, what counts as an appropriate response to various speech acts, how context shapes what's pragmatically relevant, and the structured expectations that make signs transparent to communicative intent. — Pierre-Normand
Exactly. Do you see that this is merely a matter of imitating patterns through probability laws?
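To make that concrete, here is a toy sketch of my own (not how any actual LLM is built; the little "corpus" and the word counts are invented purely for illustration): count which words tend to follow which in a body of text, then generate new text by drawing from those counts. Every step is just a draw from a probability distribution over what conventionally comes next; nothing in it grasps anyone's intent.

```python
import random
from collections import defaultdict, Counter

# A toy "corpus" standing in for the billions of human texts an LLM is trained on.
corpus = "the bird sings sweetly . the bird perches on the branch . the rain falls softly ."

# Count which word tends to follow which (a crude stand-in for learned patterns).
follows = defaultdict(Counter)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def next_word(word):
    """Pick a next word in proportion to how often it followed 'word' in the corpus."""
    counts = follows[word]
    choices, weights = zip(*counts.items())
    return random.choices(choices, weights=weights)[0]

# Generate a short "imitative" string: each step is only a probabilistic draw over
# what conventionally comes next; no intent is represented or understood anywhere.
word = "the"
output = [word]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

A real LLM does this with vastly more sophisticated statistics over vastly more text, but the point being illustrated is the same: imitation of patterns through laws of probability.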
When we talk about a bird perched on a branch or hearing the sound of rain, LLMs "understand" these linguistically, through patterns in how humans write about such experiences, but they lack the embodied grounding that would come from actually perceiving such affordances. — Pierre-Normand
If you believe this, then how can you argue at the same time that the LLM grasps your intention? If you say "I hear a bird sweetly singing", and the LLM says "That's beautiful", what could make you conclude that the LLM has grasped your intention? Unless the LLM can produce in its mind the same image of the sound of a bird singing, which is what you are referring to, it's not grasping your intention at all. All it is doing is giving you an appropriate linguistic reply. It's like the thermostat: it doesn't grasp your intent to stay warm, it just makes the appropriate response.
They exhibit mastery of second-order linguistic affordances without grounding in first-order natural and perceptual affordances. — Pierre-Normand
I propose to you that this grounding is the meaning; it is the content. Without this grounding, all the LLM is doing is creating eloquent formal structures which are completely void of meaning. These structures are void of meaning because they are not grounded in any content within the mind of the LLM. For an analogy, consider learning formal logic with the use of symbols. Take "if X then Y", for example. This is just a formal rule. It has no meaning unless X, Y, if, and then stand for something, are grounded in content. We can go further and say "X therefore Y", but this still has absolutely no meaning unless X, Y, and therefore stand for something. That's all that the LLM is doing: moving symbols around according to a bunch of rules which allow for variability ("learning"). There is no meaning here because there is no content, only symbols which get applied to content when interpreted by human beings. The meaning is in the human interpretation.
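To show what I mean by moving symbols around without content, here is another small sketch of my own (purely illustrative; the rule and the symbols are made up): a procedure that takes "if X then Y" together with "X" and hands back "Y", while X and Y stand for nothing at all.

```python
# Purely syntactic rule application: from "if X then Y" and "X", produce "Y".
# The symbols stand for nothing; the rule merely shuffles uninterpreted strings.

def apply_if_then(conditional, assertion):
    """If 'conditional' has the form 'if X then Y' and 'assertion' is exactly X, return Y."""
    if not conditional.startswith("if ") or " then " not in conditional:
        return None
    antecedent, consequent = conditional[len("if "):].split(" then ", 1)
    return consequent if assertion == antecedent else None

# "X therefore Y" falls out of the rule, yet nothing here means anything:
print(apply_if_then("if X then Y", "X"))              # -> Y
print(apply_if_then("if glork then blarg", "glork"))  # -> blarg
```

The procedure "works" whatever strings you feed it, which is exactly the point: the output is correct as a formal move, yet it remains meaningless until a human interpreter supplies the content.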
The right view isn't that a child arrives with fully-formed interpretive capacity and then engages socially. — Pierre-Normand
There is no interpretive capacity which qualifies as "fully-formed", because none is perfect, so this statement has no bearing. The fact is that the child is born with interpretive capacity; therefore it is not something which is learned through social engagement. That a person can hone one's interpretive capacity in a specific way, through education in social conventions, does not negate the fact that the interpretive capacity is preexisting.
But fully articulated linguistic systems like spoken and written language derive their communicative power (and their power to support rational deliberation as well) from socially instituted norms that create fields of expectation enabling transparent communicative uptake. — Pierre-Normand
This is meaningless though, because it completely disregards all the underlying requirements. It's like saying "dynamite gives us the power to blow up rocks". It appears that you are saying something meaningful, but unless you know what dynamite is and where it comes from, it really says nothing. It's just useless drivel. Likewise, saying "spoken and written language derive their power from socially instituted norms" is also useless drivel, because it doesn't tell us anything about what social norms are, how they come into existence, or how they get that special position of providing power. You are just naming something, "socially instituted norms", and asserting that whatever it is that this name refers to, it is the source of power.
This is what distinguishes them from both natural affordances and private marks. This distinction helps understand both what LLMs have accomplished by internalizing the normative patterns that structure their training texts, and the linguistic fields of expectation that we perceive (or enact) when we hear (or produce) speech, and where LLMs characteristically fail. — Pierre-Normand
So that statement, which is actually useless drivel, is what allows you to compare LLMs to human beings. Human beings get their communicative power from social norms, and, surprise, LLMs get their communicative power from internalizing normative patterns. Notice the big difference, though: human beings create the social norms; LLMs do not create the normative patterns they copy. So the creative aspect is completely missing from the LLM, and that's because it's a machine, not a living being.
And that is why it can seem creative and robotic at the same time. — apokrisis
The LLM can imitate creativity, but imitation is not creativity.