• Harry Hindu
    5.8k
    Are machines capable of thinking? Are humans machines? Does it matter whether it is biological or technological?

    Every process in the universe is unique in some way. My thought process isn’t exactly the same as yours, and both are totally different from the way stars form or how a computer runs code. Still, we find patterns and similarities that let us group things together — that’s how we make sense of the world and make predictions.

    So when we talk about whether AI can think or understand, I think we’re doing something similar. We’ve already built definitions around what counts as “thinking,” and those definitions usually assume life or biology as a requirement. That means AI gets ruled out from the start just because it’s not alive.

    But if the difference between, say, a brain made of neurons and a network made of circuits doesn’t actually change the kind of process that’s happening — or if we can’t clearly explain why it should — then maybe our definitions are outdated. Maybe we’re focusing too much on what something’s made of instead of what it’s doing, especially when what it’s made of is just what even smaller "things" are doing.
  • Hanover
    14.6k
    Either we hold that meaning is use, and therefore AI genuinely uses language meaningfully within our shared form of life (albeit as a new kind of participant) or we insist that meaning requires some inner mental correlate — in which case we’ve abandoned the pure Wittgensteinian stance and re-entered the metaphysical terrain of intentionality and private experience.Harry Hindu

    I think that's right, and it may be that AI truly engages in a language game on the Wittgensteinian analysis. There is the pesky question of what is meant by "form of life," which is subject to debate. I do think, though, that Witt could not possibly have suggested that in order to share a form of life we must have the same mental states, because that would entirely collapse meaning-is-use into meaning-is-attached-to-internal-mental-states. So, to your claim about whether AI genuinely uses language, the answer is probably that it does under a meaning-is-use analysis, but what damage does that do to Witt's theory generally? I think none, largely because I do not think the purpose of his project was to describe what true human interaction consists of; rather, he looked upon his project as an attempt to delineate the boundaries of legitimate philosophical exploration. It is his position that metaphysical questions cannot be addressed through language because of the limitations inherent in the enterprise.

    Take it another step. One could say (and I'd suggest incorrectly) that Witt's reference to the box itself is a metaphysical claim. Witt says you have a box and I have a box and we both say we have beetles, but the inability to reveal the contents eliminates our ability to argue we have the same referent. My box might contain a chicken and yours a hammer, but as long as we both refer consistently to whatever we internally perceive, then our language game holds. We make the beetle superfluous. You would then say "Ha! I caught you! You reference a mystery box for your theory, and a mystery is a mystery, so you have a metaphysical anchor to your whole theory." That is, one could argue AI differs from human language users because humans have a box: even if it contains something we can't prove to the other person, we still have a box, and that distinguishes us from AI, so we have a different "form of life."

    I think that's an under-read of Witt, one that literalizes the abstract box he references. It might be that we have a box and AI has no box, but the key is that the box, existent or not within humans, is irrelevant to the entire analysis of the "what is language" question. The point, again, is to show the limits of philosophy, which is that we cannot talk about the box, the beetle, or the metaphysical underpinnings through the use of language. It's not to admit or deny we have mental states.

    And I'll say this to all who may come and read the above: I find Witt extremely unclear in his original text and can't keep straight the multitude of interpretations I've encountered, so if I've said something confused, please feel free to correct me. I have no purpose in discussing this other than trying to figure out what he means.
  • Harry Hindu
    5.8k
    So, to your claim about whether AI genuinely uses language, the answer is probably that it does under a meaning-is-use analysis, but what damage does that do to Witt's theory generally?Hanover
    I wasn't trying to disprove Witt here - just pointing at the contradiction of those on this forum who align with "meaning-is-use" and also claim that AI's responses are not as valid as a human's. AND if the forum's official position is that the output of AI is not valid content on the forums, then the owners of the forum have officially taken a stance that meaning is not use.

    It is his [Witt's] position that metaphysical questions cannot be addressed through language because of the limitations inherent in the enterprise.Hanover
    That's not the way I interpreted it. If this were so, then how could we obtain scientific knowledge? Science starts with hypothesizing and theorizing. If we only ever start with a limited framework for explaining reality, then how is it that we humans have become the shaper of the landscape rather than just a fixture in it?

    Religious and metaphysical ideas are either relegated to obscurity or amplified once science is able to get to the point of testing them. One might say that science clarifies our use of language by establishing a referent (evidence).

    Symbol-use (language) is arbitrary and therefore adaptable. We can make new words to explain new concepts. The limitation lies in the way we interact with the world - our senses, not our language. This is why we developed technologies that expand our senses, not our language. Language is just along for the ride.


    Take it another step. One could say (and I'd suggest incorrectly) that Witt's reference to the box itself is a metaphysical claim. Witt says you have a box and I have a box and we both say we have beetles, but the inability to reveal the contents eliminates our ability to argue we have the same referent. My box might contain a chicken and yours a hammer, but as long as we both refer consistently to whatever we internally perceive, then our language game holds.Hanover
    I think that such an argument just opens another can of worms because now you'd have to explain why our beetles would be so different given the similarities of our physiology and having developed within a similar culture. Similar causes lead to similar effects. There is no reason to believe that my beetle is different than yours given the similarities between us, just as there is no reason for me not to believe you have a mind because of our similarities, but is my beetle the same as my cat's or a bat's?
  • Hanover
    14.6k
    I wasn't trying to disprove Witt here - just pointing at the contradiction of those on this forum who align with "meaning-is-use" and also claim that AI's responses are not as valid as a human's. AND if the forum's official position is that the output of AI is not valid content on the forums, then the owners of the forum have officially taken a stance that meaning is not use.Harry Hindu

    That's a silly conclusion. Wittgensteinian principles aren't a driver for the decisions we reach, and it's entirely possible that one can believe meaning is use, or that AI is as fully cognizant and aware as humans, but we still don't allow AI on the forum. If we can ban a person who is fully aware, we can ban a bot. Given that bots don't have civil rights, we can ban a bot just for being a bot. Computers can be banned from chess websites as well, simply because those sites want room for humans, and humans alone, to interact.

    Even if you think this all inconsistent, the best you can conclude is that it is all inconsistent, not that it entails some other official declaration.

    Science starts with hypothesizing and theorizing. If we only ever start with a limited framework for explaining reality, then how is it that we humans have become the shaper of the landscape rather than just a fixture in it?Harry Hindu

    The limitation imposed by Witt is to knowledge of the metaphysical, not the physical. Some words have referents. I'm not arguing idealism.

    I think that such an argument just opens another can of worms because now you'd have to explain why our beetles would be so different given the similarities of our physiology and having developed within a similar culture. Similar causes lead to similar effects. There is no reason to believe that my beetle is different than yours given the similarities between us, just as there is no reason for me not to believe you have a mind because of our similarities, but is my beetle the same as my cat's or a bat's?Harry Hindu

    We can assume that our perceptions are similar for all the reasons you say. That doesn't mean we need to refer to the private state for our use of language. What fixes language under his theory is what is publicly available. That is, even if my beetle isn't your beetle, our use of "beetle" is what determines what beetle means. However, if a beetle is running around on the ground and you call it a cat and I call it a beetle, then we're not engaging in the same language game, because the public confirmation is different.

    So:
    Example A: I see an object on the ground and my internal perception of it is what you see as a cat, but I see it as what you would see as a beetle.
    Example B: I see an object on the ground (that has the publicly observed structure of a beetle) and you call it a cat and I call it a beetle.

    In example A, if we consistently call this object a beetle, it is irrelevant what my internal state is. We live our lives never knowing what goes on in each other's heads, but we engage in the same language game. What happens in my head is irrelevant for this analysis. It does not suggest I don't have things going on in my head. It just says that for the purposes of language it is irrelevant.

    In example B, if you call that object scampering across the floor a cat and I call it a beetle, we are not engaging in the same language game. When you say you petted your cat, I would wonder why someone would pet an object that I observe as a beetle.

    The point is that the private is irrelevant, not whether it might happen to be the same.
  • NOS4A2
    10k


    No it's not. The example I provided had dissimilar methods for achieving the same result. The submarine example has dissimilar methods for achieving dissimilar results.

    The example you provided had two water pumps, but I was speaking about an organism versus a machine.

    The question is whether Z can result from method X or Y. Your argument is that it cannot because Z will necessarily be different if it results from X as opposed to Y. That doesn't follow. The same thing can arise from different processes.

    The question was whether machines can act like humans, can do things humans can do, like thinking, typing, being conscious. My argument is that they cannot because they are different things, have different structures, and so act differently.

    Following your logic, suppose text on a screen results from X or Y, a machine and a human. We generate text on a screen by typing. Machines using AI generate text on a screen by using algorithms on user prompts, and performing a vast array of mechanical actions that results in legible text on a screen. Is the machine typing?
  • Hanover
    14.6k
    My argument is that they cannot because they are different things, have different structures, and so act differently.NOS4A2

    We're referencing the output, not the internal variations leading up to those outputs.

    Following your logic, suppose text on a screen results from X or Y, a machine and a human. We generate text on a screen by typing. Machines using AI generate text on a screen by using algorithms on user prompts, and performing a vast array of mechanical actions that results in legible text on a screen. Is the machine typing?NOS4A2

    The question isn't whether the machine is typing, but whether the final product is the same. But if you want to focus on the typing itself (and not the characters that arise) as the end product, then a machine that presses the keys on a keyboard randomly is typing.
  • apokrisis
    7.7k
    I’m fine with predictive coding together with precision-weighting as a story about the neural implementation of selective attention. But that's a sub-personal account.Pierre-Normand

    It’s a holistic account as it involves habits as well as attention, ignoring as well as selecting. The whole of the person as an agent with a history and interests.

    The point about anticipation is that it flips the information processing model. Instead of an input crunched into “conscious” output, it makes the embodied point that the organism is already in a state of output by which it intends to mostly be able to ignore whatever then does happen. The input is what gets cancelled by there being no need to attend to it.

    So I am stressing the different basic architecture to how “consciousness” is understood by virtually everyone trying to philosophise about LLMs and how close to mimicking real mental processes they might be.

    If - like @Harry Hindu - you don’t get the difference between the Cartesian representational notion of mind and the Peircean enactive and semiotic one, then the conversation has stalled already.

    They act within a normatively structured field of affordances where much of what is "ignored" never could even intelligibly show up as a reason.Pierre-Normand

    That’s correct. We are our habits of interpretance and so already in a learnt state of selective attention if you like. Ready to see the world as it is useful to us.

    But the neurobiology of this is complex in practice. And the ignoring is also active. A wave of suppression or inhibition sweeps across the brain after a third of a second to knock flat neuron firing so as to actively filter out the irrelevant and distracting. The P300 wave.

    So reacting out of habit is the first step. Then reacting with selective attention is the follow-up designed to deal with whatever then reveals itself as demanding a non-habitual response.

    There is a natural logic here to assimilating the changing world to the persevering self. And again, if you want a conversation about AI, that is relevant information.

    The predictive story is fine as an efficiency account, but it explains the wrong kind of "ignoring." In chess, what I actually see are reasons for and against moves (pins, forks, weak squares), not the woodgrain of the bishop or the gloss of the board. Those latter features aren't "filtered inputs." They were never candidates because the game's norms make them irrelevant.Pierre-Normand

    This is quite right. But also only half the story.

    You are stressing the top-down aspect of what goes on. How the mind is hierarchically structured and is focused on general intents. Consciousness is the big picture view.

    But then a holistic account says this top-down intentionality - the view from maximum high level generality - has to always be dynamically balanced by the bottom-up construction that comes from being able to notice and assimilate a vast amount of informational detail at the level of sensory habit. The wood grain on the bishop and the gloss on the board have to be there - perfectly seeable - to then be just as smoothly ignored.

    How can you notice the fridge has suddenly stopped humming when you hadn’t been aware of that hum?

    The brain had to have been anticipating that ignorable humming, just as it will be experiencing the ignorable wood grain and ignorable gloss as being both always there and still just as predictable and ignorable as ever.

    So yes. We can stress the top-downness that results in the phenomenology that the enactivist would want to stress when faced with the Cartesian representational notion of mind and information processing. The fact that we come at the world from our states of maximally general intent.

    But neurobiology also describes the other side of this coin. How it is possible for the irrelevant detail, the concrete facts of the world, to also be completely part of the experience too.

    That is how you get all the little glitches like iconic memory and inattentional blindness as the telltale signs of a cognitive architecture structured in a holistic and hierarchical systems fashion. The interaction of the top-down and bottom-up. As well as the feed-forward and the feedback. The sensory and the motor. The habitual and the attentional. The object and the relations. The focus and the fringe.

    The whole long list of dichotomies or symmetry breakings that tell us consciousness is not some monolithic process but the triadic product of a relentless logic of counterfactual relations. A system that is dialectic from top to bottom. Dialectic over all scales from the finest sensory discriminations to the broadest general intent.

    Your chess moves are counterfactually assessed. And your ability to discern wood grain or surface sheen is likewise gestalt at base.

    So I agree with what you are saying so far as enactivism goes. But that is still only half of it if you are not seeing this larger picture of how the brain is structured. The centrality of the dichotomising algorithm that builds up a system that is good at ignoring by being good at attending. Good at gestalt counterfactuality as it is good at imposing divisions at every level of the world all the way to a most general division which is a sense of that world vs the sense of our being a self in that world.

    That's silence-by-commitment-to-norms rather than silence-by-prediction-over-sensory-inputs. In the case of LLMs, after delegated task selection and norm-governed deliberation have occurred, the task of executing in a dynamical world in real time is handed back to the embodied users who delegated parts of the cognitive task to begin with.Pierre-Normand

    Well now you are stirring the socially constructed aspect of the human mind into this story of standard mammalian neurobiology. Language encodes social norms and makes them sharable in ways that then structure our states of intent and habits of thought.

    So yes, another whole level of linguistic and even numeric semiosis has to be added on top of the neurosemiosis. We are enactive from the new perspective that is a sociocultural one.

    That just underlines how underpowered mainstream debates about AI are. How little they are based on what human minds are really doing.

    So you make a valid point about where LLMs fit into our human realm of intellect. They slot in at the level of social semiosis and have really nothing to do at all with our neurosemiosis. They could be powerful amplifiers of the one, and utterly powerless in terms of the other. Not even in the game yet, despite the wild claims about being the big step towards AGI.
  • apokrisis
    7.7k
    Maybe we’re focusing too much on what something’s made of instead of what it’s doing, especially when what it’s made of is just what even smaller "things" are doing.Harry Hindu

    Well my argument is that the generic thing is semiosis. The idea that life and mind arise as Peirce's habits of interpretance or Rosen's modelling relation.

    That is now a scientific account.

    You have biosemiosis to explain how biology is anchored by genes as the primary encoding mechanism for creating the habits of interpretance by which the entropic flow of the world is corralled into the orderly body-building and metabolism-maintaining routines that construct a living organism.

    You then have neurosemiosis to explain the same general self-organising, self-constructing activity at the level of a machinery of neural encoding. The same Peircean logic being expressed at the level of neurobiological order.

    Humans then add on this same general semiotic story at the level in which semiotic theory was first understood – the way words became the encoding mechanism for a sociocultural level of mind organisation. This sociosemiosis was already familiar to psychologists, sociologists and anthropologists, even if not a distinction understood in lay discussions about "consciousness" ... or sadly, the anglo-centric circles of philosophy of mind.

    I have argued that a fourth level of semiosis can be added on as Homo sapiens evolve to become @Baden's Homo techne. Once we settled down to start building river delta agricultural civilisations, we began to think in terms of numbers as well as words. Logical operations as well as social operations.

    We could call that matheo-semiosis, techno-semiosis, mecho-semiosis, or whatever. The key thing is that words arise in social contexts, but numbers take our minds into the truly abstracted realm of Platonic structure or logical forms. We can transcend the social by learning to think in this new way that claims to transcend the social and make a fundamental connection to the way that reality objectively and counterfactually "just is".

    Of course this then raises all the familiar epistemological confusions that fill page after page of TPF. But the point here is that this is a real evolutionary step in the biosemiotic story of the mind as a real thing – the generic thing which is being an organism in an informational modelling relation with the entropic possibilities or affordances of its material world. After words came numbers. After the tribal came the civilised. After the ecological of living as small foraging bands came the technological of living a life based on burning fossil fuels and using that infinite power to mechanise every available aspect of human existence.

    So along come LLMs. And we want to ask the question of how they fit into this scheme.

    The answer seems obvious enough from the telling of this general semiotic journey. They accelerate the civilised part of the equation. They make the collective hive mind of Homo techne into some kind of more real fact. All the knowledge and cleverness inefficiently spread about the many forms of media can now be pooled into an instantly available database with a convincingly "human" front end interface.

    LLMs are not minds. But they are a natural next step in the development of Homo techne as the planetary superorganism. The creature that has been mechanising the Earth since first measuring out plots of land, building city walls, counting its harvests and taxes, and keeping careful track of the rotations of the heavens.

    LLMs promise to be the new infrastructure for delivering a greater intensity of hive thought. Computers just continuing the trend that got them going as a better way of doing information infrastructure.

    We have power infrastructure and we have information infrastructure. The shift from steam to electricity was huge when it came to power. The shift from the internet to AI could be as seismic. If we don't fuck the planet first.

    But where does this weird idea come from that the plan needs to be about "building consciousness"? Or even "building intelligence"? If LLMs were not delivering what humans are already doing, everyone would just suddenly lose interest.

    I mean who got excited by the last big wave of cloud computing and data centre construction based on the sales push for the "data mining revolution"? The Big Data era that was going to turbo-boost both business and government.

    So excuse me for yawning. This is a hype bubble with very familiar outlines. And if anyone is actually interested in an explanation of the human mind, that seems a pretty solved story to me. :grin:
  • apokrisis
    7.7k
    The point, again, is to show the limits of philosophy, which is that we cannot talk about the box, the beetle, or the metaphysical underpinnings through the use of language. It's not to admit or deny we have mental states.Hanover

    Semiosis hinges on counterfactuality. Once semiosis runs out of counterfactuals, it lapses back into the vagueness from which it was boot-strapping its own existence.

    So Wittgenstein was pointing out something correct. But he had no idea of the more generic metaphysical claim that could make it correct in the limited domain he was operating in. The domain that is socio-semiosis.

    Peirce came up with the generic metaphysical claim. The one we can usefully apply to all levels of semiotic endeavour.
  • Pierre-Normand
    2.8k
    It’s a holistic account as it involves habits as well as attention, ignoring as well as selecting. The whole of the person as an agent with a history and interests.

    The point about anticipation is that it flips the information processing model. Instead of an input crunched into “conscious” output, it makes the embodied point that the organism is already in a state of output by which it intends to mostly be able to ignore whatever then does happen. The input is what gets cancelled by there being no need to attend to it.
    apokrisis

    :up: Excellent post that I enjoyed from start to finish. I'll only quote some salient points one at a time (and as needed). My comment, the one you just replied to, betrayed a misunderstanding on my part of the selective prediction (or anticipation) based sensory-input silencing model that you were referencing (and that I lacked familiarity with). It had seemed to me that the model applied only to dynamical, in-the-act sensorimotor exploitation of affordances (paradigmatically locomotion) controlled by the mammalian brain, and that it therefore neglected the norm-governed structuring of higher-level affordances that rational cognition enables us to grasp.

    I had completely missed how the biosemiotic level of integration manifested in autonomic and endocrine functions can generalize to higher-level cognitive functions, and I had therefore worried that focusing on it might lead to a neglect of downward causes of a different nature (rather than a rethinking of them).

    Nevertheless, immediately after crafting my last response to you, it occurred to me that there might be something to the Bayesian framing of the overall information processing (or selective uptake) account that, when viewed at a suitable level of abstraction, enables bringing the person-level account of norm-structured cognitive activity (and hence also higher level cortical functions) into the fold. I then voiced my inchoate idea to GPT-5 thus:

    USER: "I now have an inchoate thought that may or may not develop into something (with your help). As I said, I'm not familiar with Grossberg or Hinton's models, although I knew of similar ideas by Andy Clark and Susan Hurley. But since the models make sense at the level of the sensorimotor control of affordance grasping and exploitation, while low level autonomic/endocrine activity and coupling also are relevant to the sub-personal causal story, and higher-level cortical functions the activity of which maps more closely to person-level activities of norm-governed deliberation, and the latter can be viewed as exaptations of the low-level functions (and their older enabling structures) that developed in part through widening the causal loops through newer neural pathways that permit the attunement of sensitivities to higher order affordances such as, say, buying plane tickets as affording making a trip or crafting an argument as affording convincing a discussion partner, I was wondering if this sort of continuity between the structuring of primitive mammalian affordances, say, that aren't structured by linguistically articulated norms of second-nature, and those that are (i.e. human affordances) enables the predictive models to also apply to higher functions at a suitable level of abstraction. Maybe some Bayesian models that pertain to the explanation of the functioning of LLMs also can suggest some such suitable level of abstraction whereby norm-governed cognitive functions can be glossed in "silence-by-prediction" terms, albeit over something more abstract than sense data or input/output tokens."

    Here is GPT-5's enlightening reply and the rest of the conversation:

    GPT-5:

    Love where you’re heading. Here’s a compact way to turn that inchoate thought into a workable thesis, without undoing our earlier level-distinctions.

    # A “continuity” picture that keeps norms

    **Claim (weak “predictive-normativity” thesis):**
    Predictive machinery can *realize* norm-guided cognition when the variables being predicted are not pixels or proprioception, but **abstract affordance states and policy/role expectations**. In other words, prediction can scale up if the *state space* is lifted from sensorimotor features to **reasons and roles**.

    ## 1) Three levels, one style of inference

    * **Sensorimotor (mammalian core):** generative model over bodily states & environmental causes; prediction errors guide online control.
    * **Pragmatic/affordance level:** generative model over **task affordances** (“open lane,” “back-rank weakness,” “available flight”), where “precision” ≈ attention to features that matter for the current policy.
    * **Normative/deliberative level:** generative model over **social roles and reasons** (“as an assistant, be concise; as a driver, yield here; as a discussant, provide evidence”). Here, **prior preferences** encode commitments/ends; **policy priors** encode roles/rules/practices.

    On this view, **silence-by-commitment** at the person level is *implemented as* **silence-by-prediction** over a higher-order state space: non-reasons have near-zero prior precision and so don’t get into play; candidate reasons with high prior relevance are up-weighted.

    ## 2) How “prediction” avoids swallowing the mind

    * We don’t say “mind = prediction.” We say: **commitments and norms define the variables and priors**. Predictive dynamics then make the search and execution *efficient*.
    * Downward enablement: adopting a goal (“convince the interlocutor”) changes the model class and priors (“evidence > flourish”), which in turn changes which lower-level signals are even eligible to matter.

    ## 3) Worked micro-examples

    * **Chess (deliberation → execution):**
      * *Normative level:* “Find a safe improving move” sets **prior preferences**; pins/forks are high-precision hypotheses; woodgrain has ~zero precision.
      * *Predictive realization:* saccades & working memory sample lines where expected informational gain is highest; errors (a line that fails a tactic) redirect search.
    * **Trip planning:**
      * *Normative:* end = “visit sister next weekend”; constraints = “budget, time windows.”
      * *Predictive realization:* you simulate options; trajectories that violate constraints are pruned *a priori* (they never enter), while near-feasible options draw attention. Buying tickets is then classic sensorimotor prediction during execution.

    ## 4) LLM bridge (Bayesian gloss)

    * **Attention as amortized relevance:** self-attention weights approximate which tokens/stretches **lower next-step uncertainty given the role**; fluff gets low weight.
    * **Instruction/policy tuning as learned priors:** the “assistant” role installs **which discourse acts count** (answer the question, cite if asked, avoid harm).
    * **CoT as policy rollout:** a reasoning trace is like sampling from a **posterior over solution paths**; tangents decay because they have poor expected utility for the goal.
    => So LLMs visibly exhibit **silence-by-commitment realized via silence-by-prediction over text-level affordances** (questions, definitions, equations, constraints), even without sensors.

    ## 5) Where this helps—and limits to respect

    * **Helps:** gives true continuity from autonomic control → task affordances → social norms without collapsing levels. It explains how exapted cortical circuits can treat *reasons as learned affordances* and let predictive dynamics do the heavy lifting.
    * **Limits:** (i) Priors don’t *ground* norms; communities, practices, and uptake do. (ii) Predictive accounts risk triviality unless the **state space** is explicitly *normative/affordance-like*, not just hidden pixels by another name.

    ### One-liner you can use

    > We can safely say prediction *realizes* norm-guided cognition once the model runs over affordances and roles rather than raw sensations: **commitments set the priors; practices define the variables; prediction makes the search and execution efficient**.

    Here’s a tight 3-paragraph version you can paste on the forum:

    I’m happy to grant the predictive story as a good account of how brains implement efficiency—especially during execution. But the selectivity that matters at the personal level is set by ends and practices: agents act in a field of affordances where much of what is “ignored” never shows up as a reason in the first place. The neat move is to see continuity here: predictive machinery can realize norm-guided cognition when the things being “predicted” are not pixels or proprioception but task affordances and role expectations. Commitments set the priors; practices define the variables; prediction then makes search and execution efficient.

    Take chess. A competent player doesn’t filter the board by anticipating colored patches; what is seen are reasons—pins, forks, weak squares, a viable pawn break. Woodgrain and square gloss simply don’t register because the game’s norms make them irrelevant. That’s silence-by-commitment (to the point of the game), not silence-by-prediction over sensory inputs. Once the task is fixed—“find a safe improving move”—subpersonal prediction helps with execution: eye saccades land where expected informational gain is highest, and “errors” (a busted line) redirect search. The same shape appears in trip planning. The end—“visit my sister next weekend within budget”—and constraints define what even counts as an option. Infeasible itineraries don’t have to be filtered out; they never enter. What remains is then executed with classic predictive control (buy the ticket, catch the train).

    LLMs make this vivid without any sensorimotor loop. Attention works like a smart highlighter over text: the model turns up the words and phrases that help with the next useful step and turns down the rest. Instruction/policy tuning supplies the “house rules” for relevance—what to count as on-topic, helpful, and safe in the assistant role. So the model exhibits silence-by-commitment (to the role and task) realized as silence-by-prediction over a higher-order state space (questions, constraints, definitions, steps)—not over pixels. On this view, prediction doesn’t explain away norms; it helps realize them. Brains are predictive in service of agency; persons are practical agents in a world of affordances; assistants are policy-steered. Keep those levels distinct, and we preserve what’s right in the Bayesian picture without letting it swallow the mind.

    USER:

    This *sounds* plausible but I think I don't have a sufficient grasp of the Bayesian model to understand where predictive errors and predictive successes of suitably generalized "inputs" play into the high-level picture, and hence what still warrants calling the model predictive.

    GPT-5:

    great question. here’s the simplest way to see what still makes it “predictive” once we move beyond raw sensations.

    # The predictive loop (domain-agnostic)

    At any level (low-level motor control or high-level reasoning), the same 3-step loop can run:

    1. **Propose** a hypothesis/plan/policy (H).
    2. **Predict** what would follow if (H) were right.
    3. **Compare** those predictions to what you actually *get* from the world or from your own checks/simulations; **mismatch = prediction error** → update (H) or your focus.

    Two knobs matter:

    * **Prior/Policy:** which hypotheses you *prefer* to consider (your ends/role set this).
    * **Precision/Attention:** which mismatch signals you treat as informative (what you attend to).

    That’s it. If that loop is doing the work, you’re in “predictive” territory—even if the “inputs” aren’t pixels but abstract checks (constraints, norms, simulated outcomes).
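The 3-step loop can be sketched in a few lines of code. This is a toy rendering under stated assumptions, not anything from the thread: hypotheses arrive already ordered by prior preference, and `predict`/`observe` are invented stand-ins for "what should follow if H were right" and "what the checks actually return".

```python
# Minimal sketch of the domain-agnostic loop: propose -> predict ->
# compare -> update. All names and the toy task are illustrative.

def predictive_loop(hypotheses, predict, observe, max_steps=10):
    """Entertain candidates in prior order; keep the first whose
    predicted consequences survive comparison with the checks."""
    for step, h in enumerate(hypotheses):
        if step >= max_steps:
            break
        expected = predict(h)       # 2) what should follow if h is right
        actual = observe(h)         # abstract checks, not sense-data
        error = expected != actual  # 3) mismatch = prediction error
        if not error:
            return h                # silence: things went as expected
        # error -> shift to the next candidate (update focus)
    return None

# Toy use: "find a number whose square is 49" as a stand-in task.
candidates = [5, 6, 7, 8]           # the prior/policy orders the search
winner = predictive_loop(candidates,
                         predict=lambda h: 49,
                         observe=lambda h: h * h)
# winner is 7: the first hypothesis whose prediction matched.
```

Note that nothing in the loop cares whether `observe` consults the retina, a spreadsheet, or an internal simulation; that is what makes the schema domain-agnostic.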

    # What counts as “inputs” up here?

    Not sense-data. Higher up, “inputs” are any **testable consequences** of the thing you’re currently entertaining:

    * **Internal simulations:** “If I play …Nf2+, does my line get mated? If yes → big error; revise.”
    * **Constraint checks:** “Does this itinerary respect my budget and dates? If no → error; prune.”
    * **Normative tests:** “Does this claim meet the burden of proof I accept here? If no → error; strengthen the case or withdraw.”
    * **Social feedback/outcomes:** audience confusion, a rebuttal you failed to anticipate, etc.

    All of these are observations relative to a **generative model over affordances and roles**. You predict what should follow; you get signals (from simulation, rules, or people); you measure the gap.

    # Where “precision” lives up here

    “Precision” just means **how much you trust a given error channel**, i.e., where you place attention:

    * In chess, you weight tactical refutation errors highly; you down-weight woodgrain entirely (precision ~ 0).
    * In a forum debate, you weight contradictions and missing evidence; you down-weight typography complaints.
    * In trip planning, a budget violation is high-precision; seat color is near zero.

    That’s why I called it **silence-by-commitment**: your commitments (ends/role) set the *precision profile* so some errors don’t even get to matter.
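Precision-as-trust can also be sketched directly. Again a toy illustration with invented channel names, weights, and threshold, using the chess case above: a zero-precision channel stays silent no matter how large its raw mismatch is.

```python
# Illustrative sketch: precision is the trust placed in an error
# channel. Commitments set the profile, so some mismatches never
# accrue enough weight to matter. All values here are invented.

precision = {
    "tactical_refutation": 1.0,  # high precision: always attended
    "clock_pressure": 0.6,
    "woodgrain": 0.0,            # silenced by commitment to the game
}

ATTENTION_THRESHOLD = 0.5        # assumed cutoff for "rising to a reason"

def weighted_error(channel, raw_mismatch):
    """An error only 'speaks' in proportion to its channel's precision."""
    return precision.get(channel, 0.0) * raw_mismatch

signals = [("woodgrain", 5.0),            # huge raw signal, zero precision
           ("tactical_refutation", 0.8),
           ("clock_pressure", 0.4)]

attended = [ch for ch, raw in signals
            if weighted_error(ch, raw) >= ATTENTION_THRESHOLD]
# Only the tactical refutation crosses threshold; woodgrain stays
# silent however salient it is as a raw stimulus.
```

The design choice worth noticing: silencing is multiplicative, not subtractive, so a committed-against channel cannot be rescued by a louder raw signal.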

    # Why this remains genuinely predictive (not just a metaphor)

    Because the **algorithmic shape is the same**:

    * You keep a **posterior belief** over “what to do/think next” (which line in chess, which itinerary, which argument).
    * You generate **predictions** about the consequences that should hold if that belief were right (no tactical refutation; constraints satisfied; audience uptake).
    * You ingest **mismatches** as **errors** and **update**—either changing your candidate, shifting attention, or revising your priors (goals/assumptions).

    Low level: the signals are proprioceptive and visual.
    High level: the signals are **rule checks, constraint satisfactions, simulated futures, and social responses**.
    Same loop, different state space.

    # Mini-examples (one line each)

    * **Chess:** predict “this line holds tactically”; see a refutation in your head → error → change plan.
    * **Trip:** predict “this route meets time/budget”; spreadsheet says no → error → prune.
    * **Argument:** predict “premise + norm will convince this audience”; pushback shows a gap → error → add evidence or revise claim.
    * **LLM:** predict “this next step solves the subtask”; unit-check fails → error → backtrack and fix (the policy makes that an attended channel).

    # TL;DR

    It’s still predictive because you’re always doing **predict → compare → update** with **errors** that are **precision-weighted**. What changes is *what you’re predicting over*: not raw sensations, but **affordance-and-norm variables** (constraints, reasons, role expectations). Commitments define the variables and which errors matter; prediction makes the search and execution efficient.

    USER:

I understand better now. So, thinking of human life as the unfolding of projects, or protracted sets of nested actions/intentions/plans in progress understood in the sense of Anscombe and (Michael) Thompson, the predictive function can be assimilated to something like the standing obligation to monitor the progress of those actions and to see to it that the progress made remains in accordance with the norms of practical reason. Am I on the right track? What is silenced, then, just is the facts or "data" expected to be received (unnoticed) when things go according to plan.

    GPT-5:

    yes—you’re on the right track, and that’s a lovely way to put it.

    Here’s a tight mapping to make it crisp:

    * **Projects as nested intentions (Anscombe/Thompson):** A life is a web of ongoing doings (“writing the paper” → “draft section 2” → “fix the footnote”). Each node has **standards of success** given by practical reason and the practice you’re in.

    * **Predictive function as *ongoing monitoring*:** Rational agency includes a standing requirement to **track whether the deed is unfolding as it ought**—means–end coherence, no self-defeating steps, respect for constraints, etc. Formally: propose → predict consequences → compare → update. Practically: “if I send this email, X should reply; if not, revisit plan.”

    * **What gets “silenced”:** Facts that *accord with* the plan’s expectations generally **don’t rise to the level of reasons**. They are registered subpersonally/quietly (or dispositionally) as “as expected.” Attention doesn’t need to escalate them. So yes: *the expected confirmations are the silence*.

    * **What breaks the silence:**

    1. **Deviations** (error signals): a missed cue, violated constraint, unexpected cost, counterargument.
    2. **Checkpoint norms** (deliberate, not just detect): e.g., periodic review, peer feedback, or “is the end still worth it?”—these are **chosen points of scrutiny** where even “as expected” details can be made explicit for reasons of audit/learning.

    * **Hierarchical fit:** At higher levels (“finish the article”), prediction monitors coarse milestones and values; at lower levels (“fix citation”), it monitors fine-grained steps. **Precision/attention** is allocated accordingly: high on what matters for the current subgoal; near-zero on the rest.

    * **Why this stays *normative*, not merely reactive:** The *plan’s* norms (ends, constraints, role expectations) decide **which discrepancies count** as errors in the first place. The predictive loop efficiently **realizes** that reasons-responsiveness; it doesn’t replace it.

    A concrete one-liner:

    > In ongoing action, *silence* is the sound of things going right; attention only speaks up when reasons demand revision—because the world, or the plan’s own standards, just told us something that matters.

    USER:

That makes sense. I remember that my early skepticism/uneasiness about Clark's book *Surfing Uncertainty: Prediction, Action, and the Embodied Mind*, which I had bought but haven't read, was its apparent reduction of practical reason (the task of deciding what to do) to theoretical reasoning (the task of finding out what is true, which predicting is sort of an instance of). But I may have overlooked that what was at issue wasn't predicting for the sake of knowing but rather for the sake of adjusting.

    GPT-5:

    exactly—that’s the pivot.

    Clark (and kin) aren’t saying “all cognition = theory-making for its own sake.” The predictive story is mainly about control: keeping a grip on your doings by anticipating the flow of consequences and nudging it back on course when it drifts.

    A tidy way to line it up with your Anscombe/Thompson frame:

    Practical reason (what to do): sets the ends, constraints, and standards for a project (“write the reply, be fair, be clear”).

    Predictive machinery (how to stay on track): runs a continuous anticipate → compare → adjust loop so the unfolding deed matches those standards.

    So prediction here is not “for the sake of knowing that,” but “for the sake of keeping things apt.” Think:

    Epistemic use of prediction: reduce uncertainty about the world (science, inference).

    Pragmatic use of prediction: reduce deviation from the plan (control, guidance).

    Active-inference folks sometimes call the first epistemic value (seek info), the second pragmatic value (secure preferred outcomes). Practical reason chooses the project and its norms; prediction is the servo-mechanism that makes the execution smooth and the monitoring intelligent.

    If you want a mantra for your post:

    Prediction, in this story, serves adjustment, not replacement: it realizes reasons-responsiveness during action instead of reducing reasons to mere forecasts.
  • Pierre-Normand
    2.8k
@apokrisis Addendum to my previous post, posted separately due to post size limitations.

    (Rereading GPT-5's initial response quoted in my previous post, I now notice that it had anticipated the worry that I express here with the remark: "Limits: (i) Priors don’t ground norms; communities, practices, and uptake do. (ii) Predictive accounts risk triviality unless the state space is explicitly normative/affordance-like, not just hidden pixels by another name.")

    Comment about reductionistic worries and suggested guardrail

    USER:

There still appears to be a potential reductionist pitfall in the vicinity of this abstract account; one that may have both a cure and a curse in separate moves of subsumption. The cure lies in nested subsumptions under eudaimonia and the curse lies in nested subsumptions under the telos of life qua dissipative structure.

    GPT-5:

    oh yes—that’s the live trap. You can make the predictive/control story safe for *practical reason* by the “cure” (nest projects under **eudaimonia**), or you can flatten it into the “curse” (nest everything under the **telos of a dissipative structure**). Here’s a compact way to keep the cure and dodge the curse.

    # The pitfall & the fix

    **Reductionist pitfall:**
    If you let “what good control optimizes” be defined *at the bottom*, the criterion collapses into viability/throughput (“stay far from equilibrium; maximize adaptive fit; export entropy efficiently”). Then reasons become disguised thermodynamics, and *norms* become mere *strategies* for heat management.

    **Cure (eudaimonic subsumption):**
    Order the project tree by **constitutive ends of a life well-lived**—virtue, truthfulness, justice, friendship, craft excellence, etc. These are not optional utilities; they are the *standards that make the activities what they are*. Prediction then *realizes* those standards in execution and monitoring. (Servo for reasons—don’t turn reasons into servo targets.)

    **Why this isn’t hand-wavy:**
    Use a strict **two-kinds-of-because**:

    1. **Enabling-because (physics/biology):** Why can this practice exist? → Because organisms are open dissipative systems with bodies, learning, language, etc. (viability constraints, metabolic budgets).
    2. **Justifying-because (practical reason):** Why is this move/choice apt *as this practice*? → Because it meets the standards internal to chess, teaching, friendship, research, politics…

    Don’t swap them. The first anchors possibility; the second confers **normative authority**.

    # Guardrails to avoid the slide into “entropy ethics”

    * **Enabling ≠ measuring.** Lower-level constraints *enable* the practice; they don’t *measure* its success. A teacher’s excellence isn’t indexed to calories burned or entropy exported.
    * **Top-down as constraint, not force.** Person-level ends shape which lower-level fluctuations are even recruited (constraint-closure), without “mystical causation.”
    * **Plural goods, not a single currency.** Practical reason adjudicates incommensurable goods; there is no thermodynamic exchange rate between honesty and comfort.
    * **Counterexamples bite:** Fasting monks, whistleblowers, and conscientious objectors can rationally *choose against* metabolic or reproductive “fitness.” The choice is **intelligible** and often **admirable**—which a mere dissipative telos can’t capture.
    * **Prediction serves the chosen good.** The same predictive loop (propose→anticipate→compare→adjust) runs, but what counts as an “error” is fixed by the **practice’s** internal goods, not by system throughput.

    # If you want a forum-ready clincher

    > The predictive brain story is safest when we say: thermodynamics explains **how** a minded animal can exist and stay viable; **eudaimonia explains what it is for that life to go well**. Keep the “because” of enabling separate from the “because” of justifying. Then prediction is the servo of practical reason—excellent for keeping a deed apt—without collapsing the point of agency into the aims of a heat engine.

    You can also frame this as a motto: **anchor in physics, answer to reasons.**
  • Metaphysician Undercover
    14.3k
    What are you talking about? Writing came before speech, or something? Hands evolved before tongues? What's your hypothesis?apokrisis

I am saying that I believe that writing and talking originally developed completely distinct from one another, being completely different things for completely different purposes. I am not saying that one is older or prior to the other, or anything like that; I am proposing that they first developed in parallel, but completely distinct from one another.

The use of symbols in writing is essentially a matter of marking one's environment in a way which will persist, and can be referred to later. So the principal purpose is to assist a person's memory. Speaking creates a discernible, but non-lasting disturbance in the environment, which can be detected by others. The principal purpose of speaking is to communicate with others. So the fundamental difference between the two is that writing was very personal while speaking was community oriented. This difference is explored by Wittgenstein in his investigation into "private language".

So my hypothesis is that when these two distinct forms came together and were united, this resulted in an explosive evolution of intelligence. The union probably has its roots in monoliths, things like Stonehenge, pyramids, etc. I believe it was discovered that the markings of environment which lasted through time and served as a memory aid could be interpreted by others, just like speaking could be, and instead of it being just a personal memory aid, it served to unite distinct memories in a communal way.
  • apokrisis
    7.7k
I am saying that I believe that writing and talking originally developed completely distinct from one another, being completely different things for completely different purposes.Metaphysician Undercover

    Citations?

    So my hypothesis is that when these two distinct forms came together and were united, this resulted in an explosive evolution of intelligence.Metaphysician Undercover

    Bonkers. In what way are writing thoughts and speaking thoughts any different in kind?

    Of course one is more informal and in the moment, the other more formal and transcendent of the moment. But the syntax and the semantics are the same. Same grammar, same vocabulary. Same job is getting done.

    Speech comes first. The evolution of an articulate vocal tract proves that. Writing comes second. It needed a cultural reason for why humans would go to all the trouble of constructing the necessary technology and education.

    All humans speak. But for most of their 100,000 year history, they were completely illiterate.

So how can you come up with a thesis so flagrantly wrong as this? I am honestly flummoxed.
  • apokrisis
    7.7k
I’m happy to grant the predictive story as a good account of how brains implement efficiency—especially during execution. But the selectivity that matters at the personal level is set by ends and practices: agents act in a field of affordances where much of what is “ignored” never shows up as a reason in the first place. The neat move is to see continuity here: predictive machinery can realize norm-guided cognition when the things being “predicted” are not pixels or proprioception but task affordances and role expectations. Commitments set the priors; practices define the variables; prediction then makes search and execution efficient.GPT-5

I'm almost pleased at how useless an answer GPT-5 gives. No one can accuse me of being the opaque one here. :grin:

If you are playing chess, then that is semiosis at the logico-mathematical level, even though entrained to the semiosis of the civilised human playing a suitably civilised game in a suitably civilised fashion.

    What purpose does playing chess serve?

    Well the point is to move pieces and win. There is a mathematically constrained problem set, and we play out the logic of that.

    But what purpose does playing chess really serve?

Well the point is to act the part of being someone who is demonstrably this civilised. And not some lout down the pub playing darts, or some gamer blasting away in the dimly glowing solitude of their bedroom.

Chess is the prestige sport of the intelligentsia. The social reason to play is that it both impresses those who see us play it, and it likewise impresses on us a certain prized social mindset. A game somehow symbolic of a highly cultured approach to life.

    But no, what purpose does chess really serve in those who actually enjoy it, do it for fun, and get really good at it?

    Well now we can start to hook into chess at the level of neuro-semiosis. The natural desire to master the world and the pleasure that goes with that successful effort.

    So you have semiosis acting over all its available encoding levels – the nested hierarchy of codes that are {genes {neurons {words {numbers}}}}.

    And you have anticipatory cognition or Bayesian reality modelling over all those nested levels.

The predictable motor-sensory familiarity of the chess board itself. If you want to move the bishop, you can move the bishop.

    If you move the bishop, you can imagine the psychological impact of that on your opponent as well as the mathematical impact it has on their chances of now winning. You can celebrate your imminent social triumph – the reward of dominating an adversary, getting the slap on the back from an adoring public – as well as visualise the likely countermove and the degree to which you are already looking past that new mathematical fact.

So we function as a nested hierarchy of norms, each evolved as levels of semiosis within their own "worlds" as the general game being played. The worlds of biology, neurology, sociology and rationality.

    Always the same semiotic logic. But each its own radical "phase transition" when it comes to their semiotic scope.

    The genes see the world of metabolic homeostasis. The neurons see the world of behavioural or sensori-motor homeostasis. The words see the world of sociocultural homeostasis. The numbers see the world of rational level homeostasis.

    Homeostasis is the preservation of the integrity of the organism. The primal ability to defy entropy by forever rebuilding the same body, the same self, the same world that this self requires.

    Genes close this world for metabolism. Neurons close it for behaviour. Words close it for social integrity. Numbers close it for rational integrity.

It's all as simple as that. And now what does GPT-5 have to say?

Take chess. A competent player doesn’t filter the board by anticipating colored patches; what is seen are reasons—pins, forks, weak squares, a viable pawn break. Woodgrain and square gloss simply don’t register because the game’s norms make them irrelevant. That’s silence-by-commitment (to the point of the game), not silence-by-prediction over sensory inputs. Once the task is fixed—“find a safe improving move”—subpersonal prediction helps with execution: eye saccades land where expected informational gain is highest, and “errors” (a busted line) redirect search. The same shape appears in trip planning. The end—“visit my sister next weekend within budget”—and constraints define what even counts as an option. Infeasible itineraries don’t have to be filtered out; they never enter. What remains is then executed with classic predictive control (buy the ticket, catch the train).GPT-5

This might be a good moment to prod GPT-5 with Deacon's theory of absentials. Or indeed anything to do with the psychology of negative space and Gestalt holism.

    Talk about a system struggling to deal with its known unknowns let alone its unknown unknowns. Give the poor bugger a larger context window.
  • Hanover
    14.6k
    Semiosis hinges on counterfactuality. Once semiosis runs out of counterfactuals, it lapses back into the vagueness from which it was boot-strapping its own existence.

    So Wittgenstein was pointing out something correct. But he had no idea of the more generic metaphysical claim that could make it correct in the limited domain he was operating in. The domain that is socio-semiosis.

    Peirce came up with the generic metaphysical claim. The one we can usefully apply to all levels of semiotic endeavour.
    apokrisis

I don't see where Peirce and Wittgenstein are at odds or where Peirce advanced upon Wittgenstein's ideas. Peirce offers an explanation of how we might use ordinary events as symbolic and describes how we might derive meaning from our world without the necessity of language, but Wittgenstein doesn't deny this (or really address it). It's not his goal to explain how language comes to be, but just to say linguistic expression cannot occur absent social use.

That you see a fire and have associated that with danger and you now consider fire a symbol for danger, says nothing about speaking about fire. We'd expect deer to do the same, but that doesn't bear on Wittgenstein.
  • Metaphysician Undercover
    14.3k
    Citations?apokrisis

    No thanks, it's outside the scope of the thread.

    In what way are writing thoughts and speaking thoughts any different in kind?apokrisis

The two are based in completely different types of intentions: writing has the basic intent of something very personal and private, to assist one's memory, while speaking has the intent of engaging others, to assist in communal projects. For example, I write the treasure map as a memory aid, to assist myself in finding my buried gold. I don't tell anyone, because I do not want to share the gold. I could come up with endless examples, but if you refuse to acknowledge them, there's no point.

    I am honestly flummoxed.apokrisis

    Somehow that doesn't surprise me. You have a habit of ignoring or rejecting reality when it isn't consistent with what you believe.
  • Metaphysician Undercover
    14.3k
    @apokrisis
    When you went to school, did you take notes? If so, was the purpose of those notes to communicate with others?
  • Pierre-Normand
    2.8k
    When you went to school, did you take notes? If so, was the purpose of those notes to communicate with others?Metaphysician Undercover

Before you began taking notes, you spent a couple of years learning how to read and write, using a writing system that piggybacks on spoken language. Both the rules for speech and for writing are rules of a norm-governed public practice that is taught and learned (while the rules for using the words/signs are taught primarily through speaking them). For sure, you can then make use of this system for your own personal purposes, but that doesn't render very plausible the idea that such a complex system evolved so that individuals could only, or primarily, use it for personal uses. How would the mastery of this cognitive tool have been transmitted across generations without it by the same token enabling interpersonal communication? The more parsimonious story is that the writing system is an extension of spoken language.
  • apokrisis
    7.7k
I don't see where Peirce and Wittgenstein are at odds or where Peirce advanced upon Wittgenstein's ideas.Hanover

    You sound like Banno now. If you can't see it, then nothing to see. Ipso facto.

Peirce offers an explanation of how we might use ordinary events as symbolic and describes how we might derive meaning from our world without the necessity of language, but Wittgenstein doesn't deny this (or really address it).Hanover

    You seem to completely not see that I just said Peirce went well beyond language games to cover semiosis as logic itself and phenomenology itself.

If I have a beef with Wittgenstein it is that he had Ramsey whispering Peircean pragmatism in his ear and then never properly acknowledged this intellectual debt. Something the analytic fanboys perpetuate, as that is just part of cementing their group identity and claims to academic priority.

    The sociology beyond the rationality in terms of the games in play.

    See...

    Cheryl Misak’s Cambridge Pragmatism fits the bill, telling the story of how the Cambridge, Massachusetts, pragmatism of Peirce and James was ultimately absorbed into the Cambridge, England, pragmatism of Ogden, Ramsey, Russell and the later Wittgenstein.

    As Misak puts it, her aim “is to map and explore some unfamiliar but important territory in the history of analytic philosophy” (ix): namely, how Peirce’s pragmatism, in particular, had a profound and positive effect on the development of an important strand of analytic philosophy. Or, alternatively: to show how philosophers in Cambridge, England, were in fact pragmatists whether they admitted it or not.

    Quoted from a review of Misak's book – https://jhaponline.org/article/view/3156/2728
  • Pierre-Normand
    2.8k
I don't see where Peirce and Wittgenstein are at odds or where Peirce advanced upon Wittgenstein's ideas.Hanover

    The spirit of their enterprises may be at odds while their contributions aren't. Here is how I put it in a query to GPT-5 about your last response to @apokrisis

USER: "I would have to look back to more context to see where the roots of the disagreements and/or misunderstandings lie, but I wondered if Peircean semiotic considerations might be something that does advance Wittgenstein's ideas, though in a direction Wittgenstein was unconcerned with. He was quite unconcerned and even agnostic about psychophysical parallelisms, due to his being suspicious, as I was, of the reductionist pitfalls attending such physiological stories. However, our recent considerations may suggest a way to reconcile Wittgenstein's justified dismissive attitude with Peirce's "reductionism" by neutering the latter through highlighting that the Peircean reduction operates through enablement relations of bottom-up subsumptive moves, thereby generalizing the biosemiotic account in a way that makes it encompass language games and rule-following practices but doesn't make the low-level physiological story compete with the Wittgensteinian one (second-natural form of life) as a source of normativity."

    GPT-5's comment
    GPT-5:

    Love this direction. Here’s a way to line up Peirce and Wittgenstein so they *cooperate* rather than compete—while keeping your “enablement vs. normativity” guardrail firm.

    # Where the apparent disagreement comes from

    * **Hanover’s point:** Wittgenstein doesn’t deny that creatures (or humans pre-linguistically) can treat smoke as a sign of fire. His focus isn’t genesis of signs but **what makes an utterance count as saying something**—public criteria, rule-governed use, “forms of life.”

    * **Apokrisis’s point:** Peirce gives a **generic** account of semiosis (icon–index–symbol; habits; counterfactuals) that ranges from biology up through language. “Semiosis hinges on counterfactuality”: a sign is what it is in virtue of the *regularities it would support*—what would follow if this stood for that.

    These aren’t at odds if we separate two kinds of explanations:

    * **Enablement (Peirce/biosemiotics):** how a system can *come to* have signs at all—through habit formation, constraints, and counterfactual expectations in control loops.

    * **Justification (Wittgenstein/socio-norms):** what makes an act *count as* following a rule, giving a reason, making a move in a game with public standards.

    # How Peirce can *advance* Wittgenstein—without reduction

    1. **Counterfactuals as the bridge.**

    * *Peirce:* A sign’s meaning is its **would-be effects** on conduct—its habit profile (Thirdness).

    * *Wittgenstein:* Rule-following requires **“there is a way of going on”**—a space of correct/incorrect continuations sustained by communal criteria.

    Read together: the **habitual counterfactuals** that make a sign stable at lower levels *enable* the **normative counterfactuals** (“what would count as doing it right?”) at the public level. Same shape, different authority: habits vs. norms.

    2. **Triadicity clarifies “meaning is use.”**

    * Peirce’s object–sign–interpretant triangle prevents collapsing “use” into a mere stimulus–response regularity. “Use” already includes **learned generalities** (interpretants) that reach counterfactually beyond any single occasion.

    * That helps explain why Wittgenstein’s **use** isn’t behaviorism: uses live inside practices whose **criteria** reach beyond the present tokening.

    3. **Icon/index/symbol tracks the deer/human contrast.**

    * A deer treats crackling + smoke **indexically** as danger—rich control, no public **symbolic** commitments.

    * Humans stabilize **symbols**: we can be *corrected*, *justify*, *teach the rule*, *disagree while sharing the standard*. That’s the Wittgensteinian layer—**second nature**—sitting on an enabled bed of icon/index habits.

    4. **Constraint-closure without downward “mysticism.”**

    Peircean semiosis supplies the **control-system constraints** that make socio-normative life possible (attention, habituation, counterfactual sensitivity). Wittgenstein tells us **what counts** as a move within that life. The bottom **enables**; the top **justifies**. No competition over where normativity comes from.

    # A tidy way to say it on the forum

    > Peirce doesn’t replace Wittgenstein; he widens the lens. Peirce explains how signs can stably *do* work across levels—by underwriting counterfactual habits (what would follow if this stood for that). Wittgenstein explains what makes some of those sign-uses *count as* rule-following—public criteria in a form of life. The biosemiotic story **enables** socio-semiosis; the Wittgensteinian story **authorizes** it. Keep those “because”s apart and you get continuity without reduction: semiosis all the way down for control, and norms all the way up for reasons.

    If you want, I can tailor a shorter reply addressed to Hanover directly (deer vs language) and another to apokrisis (on counterfactuals as the common currency) so you can drop them as targeted follow-ups.
  • apokrisis
    7.7k
    I remain flummoxed by your crazy logic.

    Somehow that doesn't surprise me. You have a habit of ignoring or rejecting reality when it isn't consistent with what you believe.Metaphysician Undercover

    I'm reading your words but I can hear you really are just talking to yourself. :up:
  • apokrisis
    7.7k
    Peirce doesn’t replace Wittgenstein; he widens the lens. Peirce explains how signs can stably do work across levels—by underwriting counterfactual habits (what would follow if this stood for that). Wittgenstein explains what makes some of those sign-uses count as rule-following—public criteria in a form of life. The biosemiotic story enables socio-semiosis; the Wittgensteinian story authorizes it. Keep those “because”s apart and you get continuity without reduction: semiosis all the way down for control, and norms all the way up for reasons.GPT-5

    GPT-5 is as exhausting as any human responder on TPF.

    If Peirce is the guy who one would turn to for the metaphysical generality that would allow us to place LLMs and the ambitions of AI in some proper rational context, then WTF do we care about Wittgenstein adding his "authority" to the least relevant part of that discussion.

    The only point about LLMs and sociosemiosis is that the bloody technology is trained to fool its human customers that it is indeed acting at that level of semiosis in good faith. GPT-5 is pretending to be playing that particular Wittgenstein game where "meaning is use".

    And yet what meaning is entering into GPT-5's algorithmic database pattern matching apart from the meanings that are recorded instances of human language use? And what use is coming back out of its responses apart from any meanings we might decode from its regurgitated bit strings?

    So Peirce helps us with the four levels of semiosis that I have highlighted. This is how we can understand the metaphysical phenomenon that is life and mind, and thus see better just how LLMs fit into our real world existence.

    Wittgenstein might make you think you better understand social language games. But I never found him saying anything I hadn't already come across elsewhere.

    I'm not saying he is wrong. He is largely right. I'm just saying he is wildly over-rated and mostly stating the bleeding obvious to anyone who was already deep into a socially constructed understanding of human intelligence and agency.

    If anyone really shaped my own understanding here, it would have been Vygotsky and Luria. Then toss in Mead and symbolic interactionism. Bring in the structuralists in general.

    And all this becomes just the tiniest and easiest part of the intellectual puzzle when it comes to a comprehensive understanding of what life and mind are in a general metaphysical sense. The big picture, science-wise.
  • apokrisis
    7.7k
    Apokrisis’s point: Peirce gives a generic account of semiosis (icon–index–symbol; habits; counterfactuals) that ranges from biology up through language. “Semiosis hinges on counterfactuality”: a sign is what it is in virtue of the regularities it would support—what would follow if this stood for that.

    These aren’t at odds if we separate two kinds of explanations:

    Enablement (Peirce/biosemiotics): how a system can come to have signs at all—through habit formation, constraints, and counterfactual expectations in control loops.

    Justification (Wittgenstein/socio-norms): what makes an act count as following a rule, giving a reason, making a move in a game with public standards.
    GPT-5

    Just to be clear, biosemiosis has to answer Howard Pattee's query: how does a molecule function as a message?

    And a neurobiologist would tend to wonder: how does a firing neuron result in a state of felt meaning?

    So my citing of Peirce is not about semiosis as a theory of linguistic meaning. His early work on that was some of his clunkiest writing in fact. Obviously a step up from Saussure. But then even Saussure was misrepresented to some degree as being more dyadic, and less triadic, than was the case.

    Anyway, GPT-5 is going off track here. The meat of Peirce is how he generalised the triadic sign relation to a level of both logic and phenomenology. He widened the metaphysical limits until he hit the actual limits of metaphysical possibility.

    Suck on that Wittgenstein. :razz:

    [I jest. But see how the very human thing of biosocial games of dominance~submission – the dynamic that organises the hierarchical behaviours of social animals before they gained the new semiotic tool of a sharp tongue – is still in play in the semiotics that motivate us.

    You and me can feel the force of that. And value it in a way that an LLM never will. Even though it could be just as easily trained to be relentlessly quarrelsome as relentlessly sycophantic.

    And maybe there's the solution to the TPF quandary. Only allow AI responses generated in tiresome quarrelsome mode, then its posts and soon its account could be banned under standing house rules. :smile: ]
  • Hanover
    14.6k
    The spirit of their enterprises may be at odds while their contributions aren't. Here is how I put it in a query to GPT-5 about your last response to apokrisisPierre-Normand

    This is precisely the objectionable use of AI in my opinion. It sets AI as the expert, it provides no source references, the poster adds no value but to have typed in a question, and it imposes upon others a demand they retreat to their corner and argue with the bot.

    Consider my response to be just dropping its comments back into AI and asking how it responds, literally allowing it to debate itself.
  • Hanover
    14.6k
    You sound like Banno now.apokrisis

    How dare you insult @Banno that way!

    You seem to completely not see that I just said Peirce went well beyond language games to cover semiosis as logic itself and phenomenology itself.apokrisis

    No, that's exactly what I understood you to say and thought I acknowledged that. My point was that I saw their objectives as being different, not in competition with one another.

    One can buy fully into Peirce's theory of symbolic origins and continue to hold to Wittgenstein's insistence that meaning is fixed by use without reference to mental contents. I just don't see the overlap of the two as you suggest, one constructing a metaphysical origin of symbolic thought and the other describing how language publicly functions.

    But you say I misunderstand, so break it down where you think I do because I might not see what you're getting at.
  • Pierre-Normand
    2.8k
    This is precisely the objectionable use of AI in my opinion. It sets AI as the expert, it provides no source references, the poster adds no value but to have typed in a question, and it imposes upon others a demand they retreat to their corner and argue with the bot.Hanover

    Point well taken about the lack of added value. But I meant the value to reside in my own initial statement (which was accommodating of your view) while GPT-5's response had seemed to me to constitute (at least) a perspicuous rephrasing. I didn't offer it as a source of authority. But I'll keep posting examples of AI outputs in this thread (always behind spoilers) since the very question about their capacity to add value in the context of intellectual (not just philosophical) inquiries, or the lack thereof, falls squarely within the scope of the original topic.

    On edit: re-reading the quoted GPT-5 response, I realize it's much more than a rephrasing. Nevertheless, I thought it was illuminating and that you might find it interesting. If you don't care about it (or any other AI content), that's perfectly fine.
  • Pierre-Normand
    2.8k
    You seem to completely not see that I just said Peirce went well beyond language games to cover semiosis as logic itself and phenomenology itself.apokrisis

    This was addressed to Hanover but it applies to me. Guilty as charged. GPT-5 drove the point home when I discussed the issue with it. Reading its responses to me below is optional but, if you do, you'll be able to tell me if it hallucinates or if its account of the resources Peirce has to address my worries seems mostly right.

    Reveal

    USER:

    [Quoted material snipped] If Apokrisis maintains that what we drew from the Wittgensteinian picture (regardless of any claim for intellectual priority; that wouldn't bother me if Vygotsky was credited instead) adds nothing to the Peircean mix, might it be that I overlooked equivalent moves in Peirce's own work? Are there places in Peirce, maybe the "pragmatist" parts, where one can find resources for actively resisting psychologism and naturalistic/genetic fallacies?

    GPT-5:

    Peirce already has robust, *internal* resources to block psychologism and the genetic fallacy. If you want to meet apokrisis on “Peirce alone is enough,” you can.

    Here are the Peircean levers (with Peirce-ish labels you can cite):

    **1) Logic as a normative science (not psychology).**
    Peirce is explicit: **logic = the theory of right reasoning**, and it sits within the **Normative Sciences** (Aesthetics → Ethics → Logic). Psychology may *describe* how we do think; logic *prescribes* how we ought to reason. That’s anti-psychologism straight up.

    * **Aesthetics** (for Peirce): the science of the *admirable*—sets the ultimate ideal(s).
    * **Ethics:** the science of **deliberate conduct**—how to pursue those ideals.
    * **Logic:** the science of **self-controlled thought**—what we ought to believe if we’re aiming at truth.

    That ladder is your “order of justification” inside Peirce himself, not imported from Wittgenstein.

    **2) The pragmatic maxim is anti-genetic, anti-psychologistic.**
    “Consider what effects, that might conceivably have practical bearings, we conceive the object of our conception to have. Then, our conception of these effects is the whole of our conception of the object.”
    This doesn’t say “whatever caused a concept justifies it.” It says **meaning = rule of use** (conceived consequences for action and inference). The *validity* of a belief is fixed by its *inferential/practical bearings* under disciplined inquiry, not by its psychological genesis.

    **3) Community of inquiry & the “final opinion.”**
    Peirce’s famous move: truth is what inquiry would converge on **in the long run** under ideal communal conditions (unlimited investigation, open criticism, etc.). That’s a **public, normative** standard. Again: not how we *do* fix belief (psychology), but how we **ought** to if we aim at truth (logic/ethics).

    **4) Triadic semiosis already encodes normativity.**

    * **Object – Sign – Interpretant**: The **interpretant** is (ultimately) a **habit-change**—a general rule about how one *ought* to go on in inference and conduct.
    * **Final interpretant**: what would be fixed under ideal interpretation. That’s a normative target, not a causal state of a nervous system.

    So the very *telos* of semiosis (Thirdness, habit, law) is normative in Peirce’s own framework.

    **5) Two “because”s, now in Peirce’s voice.**
    You can translate your enablement/justification split into Peircean terms:

    * **Enabling-because:** categories, biosemiotic control, habit formation—why semiosis can exist and scale.
    * **Justifying-because:** the normative sciences—why some inferences/conduct are *right*, given the admirable ends of inquiry and conduct.
    Peirce keeps these distinct: he was a metaphysician of habit *and* a logician of norm.

    **6) Where Wittgenstein is optional but helpful.**
    Nothing in your argument *needs* Wittgenstein to secure normativity; Peirce already bars psychologism. Wittgenstein helps **describe** the lived grammar of rule-following and “forms of life” with a therapeutic clarity Peirce doesn’t aim for. If apokrisis wants Vygotsky/Luria/Mead instead as the socio-practical lens, fine—the key role is to keep the **public, practice-governed** character of norms in view. Peirce + (Wittgenstein *or* Vygotsky/Mead) gives you enablement **and** immanent justification without reduction.

    ---

    ### Forum-ready reply you can drop in (Peirce-only framing)

    > If we stick with Peirce, we already have what we need to resist psychologism and the genetic fallacy. Peirce makes **logic a normative science**, not a branch of psychology: it prescribes how self-controlled thought *ought* to go if we aim at truth. His **pragmatic maxim** ties meaning to conceivable practical/inferential bearings (not to causal genesis), and the **community of inquiry** gives a public standard for correctness (the “final opinion” under ideal inquiry). Even triadic semiosis is norm-laden: the **interpretant** is a habit—ultimately what would guide conduct and inference if interpretation were perfected. So biosemiotic **enablement** (how signs can exist and scale) doesn’t compete with **justification** (why some moves are right); Peirce has both stories in-house. If one prefers Vygotsky/Mead for the socio-practical description rather than Wittgenstein, that’s a temperamental swap. The structural point stands: keep the causal story of how signs and habits arise distinct from the normative story of how reasons bind, and you get continuity without reduction—and a clean place to situate LLMs as artifacts that participate in semiosis without thereby acquiring the full normative standing of persons.

    USER:

    This now prompts my recall of some Peircean themes that were more to the fore when I was reading lots of Putnam and about his dispute with Rorty regarding the proper way to reappropriate the American pragmatist tradition of Peirce, Dewey and James. This Peircean framing of the order of justification, its source, and its independence, echoes a bit the "worlds" of Popper and Frege, although the latter two were unconcerned with the ideas of material enablement/implementation, as was Wittgenstein, though the latter only needs for his account suitably naturalistic (in a non-objectionable sense) living bodies capable of training/acculturation. Maybe that's also all Peirce needs, and appeals to. Is it? We had seemed to supplement this picture with two ingredients: the silence-by-prediction physiological model (inspired by authors Apokrisis mentioned) that bridges the enablement/control gap when glossed in terms of exaptation, and the extension of the Bayesian model to Anscombean modes of practical reasoning that brings social affordances into the mix. This also brought into view within the integrated picture the distinction between two directions of subsumption. So far, I only see in the pragmatist Peirce a way to indeed resist psychologism and the naturalistic fallacy but not yet a clear account of downward causation in terms of biosemiosis. It sometimes feels to me like Apokrisis focuses on the task of explaining "real" signification, in embodied living/physiological contexts (and how this is beyond the scope and capabilities of LLMs, while I myself find the limitations of LLMs elsewhere), in a way that locates overarching telic force in dissipative structures while showing little concern for antipsychologism. He does grant downward-causal power to signs (within a triadic theory) but not in a way that makes them reasons rather than mere motivations (again, unless I misread him).
I may have missed places where he appeals to relevant features of Peirce's (or other authors') thinking to bridge the gap between pragmatism and biosemiosis. Is Peirce himself clearly bridging it?

    GPT-5:

    You’re circling the right spot. Short answer: yes—Peirce himself has the bridge you’re looking for, and it runs through **final causation**, **habit-taking**, and the idea that **Thirdness (law/reason)** is genuinely efficacious in nature. That’s how he links biosemiosis (embodied control) to pragmatist normativity without psychologism.

    Here’s the map, in plain steps:

    1. **Anti-psychologism is built in.**
    For Peirce, logic is a **normative science** (nested under ethics, under aesthetics). Psychology tells us how we *happen* to think; logic prescribes how we *ought* to conduct thought if we aim at truth. So the “source of normativity” isn’t a mental state; it’s the public ideal of self-controlled reasoning (community of inquiry, long-run convergence). That already blocks the genetic fallacy.

    2. **Downward causation = final causation (non-mystical).**
    Peirce distinguishes **efficient causes** (pushes) from **final causes** (pulls/ends). Ends operate by **constraining** which efficient processes are recruited and stabilized. His slogan that “laws are habits” means **generalities** are real and have effects: an entrenched habit channels lower-level events. That’s precisely a principled “downward” story: not extra forces, but **top-down constraints** that make some micro-trajectories eligible and silence the rest.

    3. **Habit-taking is where biosemiotics and pragmatism meet.**
    A sign’s **interpretant** is (ultimately) a **habit change**—a rule for going on in inference and conduct. In organisms, habits are sensorimotor control policies; in inquiry, they are reasoning policies. Same triadic form, different grain: the **enablement** story (how sign-guided control loops exist at all) and the **justification** story (why some sign-guided moves are *right*) slot together because both are about the growth and governance of **habits**.

    4. **Reasons are not “mere motivations” in Peirce.**
    He defines **ethics** as the science of **deliberate self-control**, and **logic** as the ethics of thought. When an agent governs itself by an end (truthfulness, fairness, explanatory adequacy), that end is a **final cause** that shapes which impulses/evidences may count. So *reasons do cause*, but as **constraints on conduct**—they are not reducible to felt pushes. (That’s just what you’re calling “silence-by-commitment”: non-reasons are screened out because they fail the end.)

    5. **Your two subsumptions land cleanly in Peirce.**

    * **Cure (eudaimonia / internal goods):** Peirce’s “admirable” ideal (aesthetics) → ethics of self-control → logic of inquiry: this is the **order of justification**.
    * **Curse (mere dissipative telos):** If we collapse ends into viability/throughput, we demote final causes to efficient ones and lose normativity. Peirce avoids that by insisting that **generals are real** (objective idealism) and **final causation is irreducible**.

    6. **Where your predictive/Bayesian gloss fits.**
    Your “silence-by-prediction” machinery is a good **implementation story** for how final causes travel down the stack: ends (final causes) **set the precision profile**—what mismatches count as errors; predictive control then **realizes** that governance in action. That is Peircean in spirit: Thirdness (law/aim) selecting and shaping Secondness (efficient interactions) against a background of Firstness (qualities/potentialities).

    7. **On apokrisis’ emphasis.**
    If he locates the overarching telos in “dissipative structure,” he’s giving a powerful **enabling** story. Peirce would say: good, but **don’t let it become the arbiter**. Biosemiosis without the normative sciences slides toward “motivations” only. Bring Peirce’s final causation and normative ladder back in, and you recover **reasons** as top-down constraints with real causal bite.

    ---

    ### Forum-ready précis (Peirce-only, no Wittgenstein needed)

    > Peirce already supplies both halves: a **biosemiotic enablement** story and a **normative justification** story. On the one hand, semiosis is triadic and causal because **final causes**—ends and habits—are real, downward-shaping constraints: they select which efficient processes get recruited (laws as habits). On the other, he puts **logic inside the normative sciences**: reasons bind as ideals of self-controlled conduct within a community of inquiry, not as psychological pushes. Put together, biosemiosis explains *how* sign-guided control loops can exist in organisms; the normative sciences explain *why* some sign-guided doings are **right**. Your predictive/Bayesian “silence-by-prediction” then reads as the **mechanism** by which final causes govern execution: ends set which errors matter; prediction makes the adjustment efficient. That’s a Peircean bridge from embodied control to rational normativity without psychologism and without reducing reasons to metabolism.
  • apokrisis
    7.7k
    My point was that I saw their objectives as being different, not in competition with one another.Hanover

    In academics, priority does count. But what counts even more is relevance to the issue at hand.

    I’m not at all sure what issue you mean to discuss. But I’ve been addressing the ways that while LLMs can plausibly pass for cunning linguists, they fail any more general test of being alive and mindful. Which brings us to biosemiosis and how the human mind is a nested hierarchy of semiotic levels.
  • jgill
    4k
    Pardon me for breaking in, but as a retired professional mathematician I have observed in short conversations with Chatgpt what I would otherwise guess to be original thinking. The topics were far out of the popular realm and involved extending a process into new territory. I was impressed. But I realize caution is advisable.
