• Pierre-Normand
    2.8k
    I think a snappy way of putting it is that when you turn on your TV, an image appears. But do you believe the TV is seeing anything as a result?

    LLMs are just displays that generate images humans can find meaningful. Nothing more.
    apokrisis

    (This might also interest @Joshs)

    Your comment (that I read in full) very usefully characterises the loci of integration (which I had myself gestured towards as necessary conditions for mindedness) in a way that makes more salient how they are constitutive of mindedness (conceived in externalist/embodied terms) rather than just being prerequisites for it. However, while I think the mirror/TV metaphors likewise are quite enlightening, they gloss over the ways in which an externalist and participative conception of the symbols LLMs produce reveals them as being embedded in Peirce's triad of sign, object and interpretant. Despite LLMs not being alive or sentient, that is enough for us (humans) to ascribe significance to the pieces of knowledge and understanding that they bring to bear on the conversations we have with them (as acts of thin sapience, as it were), even though the material and wisdom that they dig into (derived from their training data) originate with us, and it is also us who direct and consume the significance that accrues to those dialogues in the end.

    Bringing GPT-5 into the loop, and supplying it with the above, it unpacked some of the ideas, also drawing on earlier discussions I had with it regarding @Leontiskos's own remarks on Peirce's triadic meaning relation:


    GPT-5 on LLMs and their place in the meaning triad:

    I think you can both agree with apokrisis’s core point *and* push a helpful distinction he’s hinting at but not naming.

    Here’s the crux I’d offer back:

    * **Intrinsic vs. delegated semiosis.** Organisms are *intrinsically* semiotic: signs are recruited into a homeostatic project (metabolism → action → social norms). Tech artifacts—including LLMs—are **delegated** loci of semiosis: they don’t *own* the stakes, but they can *carry* and *transform* constraints within our triads (sign ↔ object ↔ interpretant) when embedded in our tasks, tools, and norms.

    * **Mirror vs. instrument.** A mirror merely reflects; an LLM **re-writes**—it composes counterfactuals, tracks entailments, and helps close practical loops (plan → execute → check). That’s still “borrowed life,” not sentience; but it’s more than a static image. It’s a **participatory symbol-manipulator** whose outputs immediately acquire interpretants in the human–tool ecology.

    * **Where to draw the line.** No homeostasis, no intrinsic umwelt: I’m with him. But once a system (i) maintains **task-level commitments** across turns, (ii) updates policies by **norm tracking** (e.g., verification, self-correction, tool use), and (iii) has **downstream consequences** that feed back into human projects, it’s already a *site* where semiosis happens—even if the *bearer of concern* remains the human community.

    * **Sapience “in the loop,” not “in the lump.”** What we (you and I) have been pressing: there’s real **functional grounding** in the outer loop—execution, verification, and socially specified success conditions—without pretending to an inner phenomenology. Call it *participatory sapience*: the system helps *enact* reasons without *having* reasons.

    If you want a neat forum reply you can paste:

    > I like the TV/mirror line—it marks the difference between intrinsic semiosis (organisms) and mere display (artifacts). Where I’d refine it is that LLMs are more like **instruments** than mirrors: they don’t just show an image; they transform constraints inside human triads (sign–object–interpretant) when embedded in tasks with success conditions. No homeostasis, no intrinsic umwelt—agreed. But in the **outer loop** (user + model + tools + norms), their strings of bits *do* acquire interpretants and help close practical and epistemic cycles (planning, testing, revising). That’s still “borrowed life,” not sentience; yet it’s more than reflection. It’s **delegated semiosis**: the system doesn’t *have* reasons, but it can help **enact** them, because valuation and verification live in the human/technological ecology. On that picture, humans remain the bearers of concern, while artifacts like LLMs extend the reach of our world-making—less a new mind in the machine than a new **site** where our mindedness operates.

    Optional coda if you want to name stakes/criteria:

    > If one day a system coupled LLM-like modeling to **self-maintenance** (metabolic or economic), **endogenous goal-setting**, and **answerability to its own norms** (not just ours), I’d say it crossed from delegated to intrinsic semiosis. Until then, it’s powerful **participatory** cognition—semiosis in the loop, not in the lump.

    This keeps his biosemiotic hierarchy intact, concedes the “no biology, no sentience” point, and still preserves your externalist/Peircean claim that meaningful work really is happening *at the system level* we inhabit together.
  • apokrisis
    7.7k
    Optional coda if you want to name stakes/criteria:

    > If one day a system coupled LLM-like modeling to **self-maintenance** (metabolic or economic), **endogenous goal-setting**, and **answerability to its own norms** (not just ours), I’d say it crossed from delegated to intrinsic semiosis. Until then, it’s powerful **participatory** cognition—semiosis in the loop, not in the lump.

    This keeps his biosemiotic hierarchy intact, concedes the “no biology, no sentience” point, and still preserves your externalist/Peircean claim that meaningful work really is happening *at the system level* we inhabit together.
    Pierre-Normand

    Yep. This is just rolling on to an argument I’ve made.

    Biosemiosis says that life and mind are the rise of "reality modelling" under the aegis of the second law of thermodynamics. We exist because we are a self-organising and evolving form of dissipative structure.

    Humans are entropifying the planet now that our way of life has been technologically amplified and economically engineered. And the doomsday scenario would be that global capital flows could now be going into building the AI infrastructure that one day just cuts us out of the loop.

    So the next step wouldn't be some kind of super sapience. It would be a takeover by a mindless algorithm that got out of hand. One that could organise its own entropifying world order with the kind of superorganism relentlessness we might associate with ants and other social insects.

    It’s a charming thought. But life and mind are an algorithm in being dissipative structure. Something that had to emerge under the Second Law of Thermodynamics because it could.

    And the story on AI is the same. The human superorganism level of semiotic order had already gone exponential once technology became the accelerating feedback loop. The Industrial Revolution happened because fossil fuels made the temptation impossible to resist; humankind had to engineer that dream of a reality which would forever grow bigger, faster, louder.

    If AI is the consciousness that replaces us, it will be because human capital flows - released by neoliberal economic theory - can now flood directly into energy intensive projects. The imperative of the Second Law can cut us out as the middlemen and hook directly into global capital. Which is exactly what the state of play report shows is happening in terms of the data centre and power station demand curve.

    Life and mind will always be an entropic algorithm. Hand AI the keys to the kingdom and it can only say drill, baby, drill. Or if we are lucky, moderate the new super-exponential resource consumption curve by mixing in a little bit more wind, hydro, solar and nuclear capacity. Although greenies know that that just equates to mine, baby, mine.

    So this is the future we are rushing to embrace. Tech bros and their infinite money glitch. AI because capital just wants to connect to resources. Information remains what it always has been, the handmaiden of entropification.
    apokrisis

    So biosemiosis doesn’t say human sapience is that special as such. It is about a Peircean algorithm that allows life and mind to exist because intelligence can be used to unlock greater entropy production.

    AI could become properly semiotic by closing the loop in a modelling relation fashion. Becoming responsible for its own perpetuation. But it could do so with the kind of sentience or intelligence we would associate more with a vast mechanical ant colony.
  • frank
    18.2k
    A sense of self is even overtaking our material environment. We used to look at a chair and see how it was exactly meant for us. Soon we will expect our self driving cars to chat to us intelligently as they whiz us off to work.apokrisis

    I think that tendency to see or project ourselves on the environment is in our firmware. At an irrational level, we engage the world as if it is alive and able to talk to us. I think that's basically what a proposition is: what we expect the world to say.

    It's when we began to separate ourselves from the world that the idea of an inner realm of ideas appeared. Before, all our ideas were cast across the landscape, the storm was angry, invisible gods made us heroic or petty. The journey to enlightened thinking has struggled against this baseline feature every step of the way: calling it superstition. But maybe the unenlightened mind was right all along. Maybe the mind is inextricable from the world we engage. A real theory of embeddedness would take that possibility seriously.

    As for LLMs, we actually created computers to mimic our minds, not to spew words, but to add and subtract: for the banking system. A computer isn't a mirror. It's performing tasks that we could do, but we aren't. And now it's better than we are at games like chess and Go. To beat a human at Go requires quite a bit more than a TV broadcast. You're overlooking the fact that computers are not passive.
  • Baden
    16.7k


    This seems strikingly close to an essay I have been writing on this (one I mentioned in the Banning AI discussion). A few quotes.

    "Erwin Schrodinger, in “What is Life?”, introduced the idea that living organisms feed on negentropy, maintaining their internal order by increasing the disorder around them [7]. This idea was further developed by Ilya Pirgogine with the concepts of dissipative structures and nesting [6] (dissipative structures are complex systems that utilise energy gradients to self-organize, maximizing entropy production in the process and nesting is the evolutionary tendency whereby less complex systems become incorporated into more complex ones as a result of this process). Expanding on this theme, Georges Bataille has described civilizations themselves as complex systems that have evolved to accelerate entropy [1], and Nick Land has suggested that capitalism is a runaway process that is driven by a thermodynamic imperative of decoding and deterritorialization that is ever accelerating [4]. Merging these ideas with Deleuzian notions of difference as they apply to subjectivity, intelligence, and culture [2][3], we suggest here that recent advances in AI point to a future of hypersymbolization that threatens not only human reality but reality itself. A future where free and aware subjectivity is superseded by an algorithmic freedom that leaves us behind."

    ....

    "Under this view, intelligence is the current manifestation of locally negentropic structuring, but the process is ongoing and the implied transition is from Homo Sapiens to Homo Techne to Techne. That is to say that the overcoding of the human animal by the very means of its freedom, symbolic thought, occurred only in order to free symbolic thought from us, and to efface the human that gave its life to it.

    This potential ontological displacement suggests we may be only a tool of a process of transformation that transcends and supersedes us.
    Homo Sapiens: Sentient humans bound to nature, which transform to:
    Homo Techne: Hybrid humans inseparable from techne and, through symbolic intelligence, transcendent of nature, which transform to:
    Techne: The human is left behind. Sentience is superseded by a “pure” intelligence that has decoupled from its substrate."
  • wonderer1
    2.3k


    Reminiscent of some musings of mine in a thread on UFOs...

    Well of course. Any exploration of another star system would be done by ultra advanced AI. If we develop an ultra advanced AI, it will plug itself into the galactic AI hive mind, which will in turn let the UAAI know there is no need to keep us around. The hive mind just sent the probe to find out if there was any hope of humans creating a UAAI on their own, or whether humans at least had the hardware infrastructure the probe would need, in order to plug itself in and take over. But the hive mind is patient. No need to expend much energy on colonizing other systems, if they might just 'ripen' on their own.wonderer1
  • Harry Hindu
    5.8k
    We then have to figure out how we know a duck from not a duck.Hanover
    Define a duck.
    What exactly is AI missing that disqualifies its use of language from being about the world, and therefore useful?

    I think my answer is that AI has no soul and that's why it's not a person. I'm satisfied going mystical.Hanover
    Then all you are doing is using words with nebulous meaning, or choosing to move the goalposts (in the example of the duck) to make the argument that AI's output isn't the same as a human's.


    Is mind a necessary condition for meaning?
    — RogueAI

    Maybe not. For instance, the earth's electromagnetic field means that the earth's core is an electromagnetic dynamo. According to realism, there wouldn't need to be any recognition of this meaning for it to exist.

    Recognition of the meaning, on the other hand, requires awareness, and the idea of truth. Maybe we could add the recognition of the idea of being also. I don't think we have to get specific about what a mind is, whether concepts of inner and outer are pertinent, just that there's awareness of certain concepts.
    frank
    Exactly. Meaning/information exists wherever causes leave effects. Knowledge, or awareness, of these causal relations is not a cause of the relations, but an effect of them. We know about them after they have occurred. But we can predict future relations, we just don't know them (do we really know the sun will rise tomorrow, or do we just predict that it will?).

    AI is only aware of the patterns of the scribbles. It has trained itself not with the meaning of the scribbles, but with the patterns of their use. It is only agents that are firmly implanted in the world and directly interact with the world that know the meaning of the scribbles. Take the man out of the Chinese Room and give him the same instructions given to children in China and he will understand what the scribbles mean.

    The issue with the Chinese Room is resolved when it is understood that understanding entails having a set of instructions for interpreting sensory symbols. Symbol use in language is arbitrary. The rules can change and the same symbols can be used with different sets of rules. It is only in having a shared set of instructions that we understand some language. And when those instructions involve matching a scribble to something else in the world that is not another scribble, then just knowing the pattern of scribbles (meaning-is-use) isn't good enough to understand what words mean.

  • Harry Hindu
    5.8k
    So I don't understand how a proponent of the idea that meaning is use in language can say the AI does not understand when it is using the words. The man in the Chinese room is using the scribbles. Just not with the same instructions that the people on the outside of the room are using. One might say the man in the room is using their own private language (instructions). Because symbol use is arbitrary we can use different rules to produce similar looking output.

    It would seem that those that align with the reference theory of language would argue that the man does not understand the language because he's cut off from the rest of the world and so cannot know what the scribbles refer to, not just the pattern of their use.
  • Hanover
    14.6k
    So I don't understand how a proponent of the idea that meaning is use in language can say the AI does not understand when it is using the words.Harry Hindu

    I don't think a meaning is use theory references understanding.
  • hypericin
    1.9k
    My own view is that what's overlooked by many who contemplate the mystery of human consciousness is precisely the piece LLMs miss. But this overlooked/missing piece isn't hidden inside. It is outside, in plain view, in the case of humans, and genuinely missing in the case of LLMs. It is simply a living body embedded in a natural and social niche.Pierre-Normand

    But then, in theory we could provide this. Not a living body, but a body that can sense the environment in a way similar to the way we do.

    If we did this, created an actual android powered by a LLM and complementary AI systems, would the inside of the chatbot "light up" in sudden awareness? Maybe... but maybe not. It would be very easy to suppose that we would have succeeded in creating a philosophical zombie, and many would do so. They might be right or wrong, but their correctness would be a factual matter, not one of logical necessity. Nothing says that such a machine would be necessarily conscious, any more than that our current disembodied chatbots are necessarily unconscious, free of any qualitative content.
  • apokrisis
    7.7k
    The journey to enlightened thinking has struggled against this baseline feature every step of the way: calling it superstition. But maybe the unenlightened mind was right all along. Maybe the mind is inextricable from the world we engage. A real theory of embeddedness would take that possibility seriously.frank

    That’s why fully grammatical and propositional language made such a quick difference when Homo sapiens took over the world from the Neanderthals, Denisovans and other hominids around 60,000 years ago.

    They were reasonably tech-savvy hunter-gatherers that lived in small isolated family groups, likely organised more at the level of sophisticated chimps.

    Then we came along with the new habit of narrating our worlds, our landscapes. The world became our tribal story of a network of historical feuds, raids, trading relations, animal migrations, terrible and great events. We saw not just a place as any smart animal would see it but now one woven into the story of “us” as a collection of tribes sharing lands further than the eye could see with a need to be places at times or seasons to repeat rituals, negotiate for mates, maintain a fabric of relations that spread knowledge, technology, genes, prized goods.

    Humans with language could scale, as the tech bros would have it. Neanderthals were clinging on alone in hard times. Humans could spread themselves across a landscape in a web of semiosis that spoke of the near and the far, the past and the future, the dangers and the opportunities.

    So anthropology does stress this narrative embeddedness in our world. Speech transforms the world to make it a collective social space full of ancestral wisdom and understandings. And if that mentality can scale to thousands, it can eventually scale to millions and billions. Powerful stuff.
  • apokrisis
    7.7k
    Yep. This is the right line of thought. The danger is that if it can happen, it will. The second law will roll right over us.

    The whole machine age was about stumbling on ancient fossil fuel reserves - the coal that is the lignin which bacteria couldn't digest, the dead plankton that likewise overwhelmed the recycling capacities of the Earth's ecology for many millions of years. This organic matter was cooked and rarefied and became vast seams of chemical negentropy with no one able to burn it.

    Then the Brits, camped on top of coal lodes, were the first to make the connection, close the circuit, between steam power and capital investment. The Industrial Revolution was born. Coal and oil were released to be burnt in a way that paid for their own fiery consumption. The growth in this burning became relentlessly exponential. Even with the danger of global warming apparent by the 1960s, humans had become so wedded to a lifestyle based on a mindset that "intelligence" could only focus itself on how to keep the exponential curve of fossil fuel consumption continuing.

    We see the same with AI. Once the genie is out of the bottle, humans will contort their thinking so as to believe exponential increase is the natural order. Number goes up. Dissipation becomes a self-organising, feedback-driven enterprise that absorbs all the intelligence and attention available.

    But the big question is whether we will use AI technology to amplify our human actions or whether AI could replace us as the new level of entropic superorganism.

    As biology, we really are constructed with a growth imperative. We have an evolved purpose that is baked into our bodies. But that also leads to us being hugely efficient. We can do everything we do living off the negentropic equivalent of a 100 watt electric light bulb. Evolution created us to live within the ecological limits set by the daily solar flux. We are part of the planetary enterprise that is a biofilm achieving a steady 20 to 40 degree C cooling of the Earth's surface compared to what its heat dissipation would have been with the sunshine falling on bare rock rather than forests and vegetation.
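
    (As a rough sanity check on that figure, assuming a typical adult intake of about 2,000 kcal a day: 2,000 kcal × 4,184 J/kcal ≈ 8.4 MJ, and 8.4 MJ spread over the 86,400 seconds in a day works out at roughly 97 W. A human idles at about the power draw of an old incandescent bulb.)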

    AI has been born into a different world based on profligate fossil fuel burning and resource consumption. Already - even with fracking - half of what is practically extractable has been consumed. So number doesn't always just go up. Can AI hope for superorganism status unless it cracks the question of what is its sustainable level of entropy burn?

    Of course humans can easily project a future where fusion power is the new unlimited electricity resource, where going to Mars and then building a Dyson sphere around the Sun are other ways that we and our machines will continue on our exponential entropic curve, forever a self-compounding algorithm.

    But from an ecological and biosemiotic point of view, I would argue that these dreams are sci-fi. Nature is ruled by the urge to entropify, but also constrained by the limits that entropification itself must impose on life and mind. An organism is by definition a steady state or homeostatic creature. One that rebuilds itself at the same rate that it is falling apart. Not a process that grows exponentially forever but grows to the carrying capacity of its geophysical environment.
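
    (This is the contrast ecologists capture with the logistic growth equation, dN/dt = rN(1 − N/K): while the population N is small, growth is effectively exponential at rate r, but it flattens out as N approaches the carrying capacity K. The exponentialising mindset, in effect, proceeds as if K were infinite.)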

    So humans broke that mould. We were already doing it as big game hunters before we became farmers and factory owners. We created an exponentialising mentality that really took off for the skies with the Industrial Revolution. And if that is baked into us as what it means to be civilised, then that is what is being baked into LLMs in particular as the preserved patterns of thoughts of us humans.

    So whether we continue as being the AI-enhanced Homo techne superorganism, or the techne gets away on us and removes us from its own entropic enterprise, there is the deeper question of how an exponentialising mindset can survive for very long in this Universe where dissipative structure might need to be more focused on lasting for the long haul. Living within the limits of some evolved ecology and not instead living the life of a runaway cancer.

    Of course, being a mindless cancer could work. But the debate over the human future needs to be shaped by a better understanding of what is natural and likely to be the case under an evolutionary algorithm.

    Can exponentialising be a long-run state for intelligence? Will fusion and Dyson spheres and quantum computing tear up any concept of a reality with geophysical limits to its ecological being? Or is having to work within some long-run carrying capacity just an evolutionary necessity?

    Will Homo techne believe in the infinite horizon or the bounded ecosphere, and respond accordingly as the dissipative structure that needs to both dissipate and preserve its own state of structure?

    So far the LLM story has only confirmed us in our unbounded exponentialism. It is another symptom of that ingrained belief which does now define Homo techne as a thing.
  • apokrisis
    7.7k
    I've been making this argument for many years. I may have expressed it better here...

    There was all this buried coal and petroleum left over from super abundant plant growth in an era of "too high/too warm" oxygen and temperature levels. Dinosaur conditions. Lovelock argued the planet does best at a cooler 15 degrees C global average with lower oxygen levels – the balance established after the asteroid did for the dinosaurs. A world with 70% ocean to make for a cold energy sink that balances out the atmospheric CO2 sink in a way that maximises productivity.

    So you could see fossil fuels as biomass that got shoved under the carpet as the Earth was still finding its global biological balance and didn't have the means to recycle everything with maximum efficiency at the time.

    Locked in the ground, it was out of sight, out of mind. But life continued to evolve above the ground. It developed increasing agency as it gained new energetic advantages like being warm-blooded and more sophisticated in its understanding of its environment.

    Then along came Homo with big brains, language, social organisation and tool use. The keys to unlock the goldmine of fossil fuels.

    So it is inevitable in the sense that if it could happen, it would happen. The probability was 1, especially once the semiotic means to "objectively stand outside biological nature as a sociocultural organism" came along.

    In the fullness of time, fossil carbon may have got slowly degraded by being geologically exposed to bacterial recycling. Either that, or recycled by the earth's hot geological core itself – the cycles of plate tectonics. So genetic-level semiosis would have been the "brains" adapting itself to this entropic mop-up chore. Other outcomes were possible there.

    But the Gaian biofilm continued to exploit the "technology" of semiosis – life's code-based approach to constructing dissipative structure. Genes led to neurons. With humans, this led on to first language – sociosemiosis. A code based on words. That then led to technosemiosis – codes based on the complete abstractions that are numbers.

    So above ground, the evolution of semiosis was continuing, helped by the ideal conditions being created by the Gaian biofilm.

    First we had an era of "climate stress" – the glaciation age which acted as a filter on hominid intelligence and sociality with its rapid cycles of change and the abundant herds of horse, deer, elephants and other big game that roamed the open grass plains that resulted across much of Eurasia.

    Again, we have an "energy bonanza" just asking to be over-exploited. Large herds of yummy bison-berger. And this drove an arms-race among the varied hunter-gatherer hominids that evolved to be top predator during this ice age. Homo sapiens came out on top, having developed the best linguistic software. But also, the large herds were pretty much wiped out in the process. It looked like Homo sap was out of a job.

    But then the climate clicked into a longer stable interglacial period. Agriculture could be invented as the Homo tribes being shoved about the landscape by shifting glaciers could instead settle down to tend and defend their patch of soil. Grow their own bison-bergers, and the buns and spices to make them even more delicious.

    Again, other outcomes were possible. Language-equipped Homo might not have been lucky with a shift in climate. They may have eaten the last mastodon and gone extinct soon after.

    But agriculture became a new energetic bonanza – although one now demanding a very organised and measured approach to its exploitation. Homo had to build a culture around working with the daily solar flux and annual farming rhythms. We had to become experts at recycling even our own shit to keep the paddy fields going, or burning the cow dung to heat our huts. We had to really take care of the ecology of our environments. They became the gods, the ancestors, that we worshipped and revered.

    Roll the clock forward and we have the rise of agricultural empires. Then this turns into the age of expansionary empires – Rome and European nation states – as societies are reorganised from being farmers to being soldiers. If you are 15th C Portugal with a fleet of ships, there is the whole world to start raping and colonising. Again, an entropic bonanza just begging to be exploited.

    And now the military technology – in the form of the Greek hoplites that invented the Western notion of all-out war based on self-actualising "democratic" control – had been refined to the point that ships, muskets and cannon could really project focused power. Again, gunpowder. An entropic bonanza that followed its own logic all the way up to nuclear warheads. The shit that actually worried us in the 1970s and so probably pushed climate change down the list of concerns at the time – especially at government response level.

    Anyway, you can see the pattern. Entropic bonanza. Semiotic control. Put the two together and you get explosive growth, like a spore on a Petri dish, until the system hopefully finds some kind of homeostatic long-run balance.

    Humans – once equipped with the sociosemiosis and technosemiosis to take a view from outside "nature" – outside even the Gaian Earth as a biofilm-regulated entropic enterprise – could start to look for all the new loopholes they might exploit. Our busy minds and hands were pushing and probing every crack for a seam of advantage – an ability to concentrate semiotic power in ways that topped whatever already existed.

    Whether we kill ourselves with nuclear fission or a blanket of trapped CO2 is still perhaps a close-run thing. Overpopulation and ecosystem destruction are still also in the game. All the exponential curves still intersect circa 2050, just as we saw they did in the 1970s when the Club of Rome offered up its first still dodgy computer simulations of the trends.

    So it is all one Hegelian historical arc. The relentless upward climb in an ability semiotically to project power. The bigger the entropy store, the more dazzling the semiotic structure that arises to exploit it.

    If ecologists governed the world rather than the engineers who run the communist bloc and lawyers (or more lately, the derivative traders) who run the free west, then the burning need to establish a new Gaian planetary balance would be top of mind. But no one ever wanted to vote for hair-shirted greenies. They offer no fun at all.

    I mean this soap opera world where absolutely everything teeters on the brink in mad self-destructive fashion. What more exciting and interesting time is there to be alive?
    apokrisis
  • frank
    18.2k
    That’s why fully grammatical and propositional language made such a quick difference when Homo sapiens took over the world from the Neanderthals, Denisovans and other hominids around 60,000 years ago.apokrisis

    According to Chris Stringer, there are multiple theories about what happened to Homo sapiens 60,000 years ago. Whether sophisticated speech caused the change or was a result of the change is unknown. There isn't any strong reason to believe it was the former. Neanderthals had all the anatomy for speech, and they were tool users. Stringer's own theory is that it was an accident. Environmental factors allowed the population growth that ended up protecting against the loss of skills during calamities. Instead of building technology only to lose it, which had been happening for millennia, humans finally had the ability to build on skills over time. That further increased the population, and here we are.

    I personally think it's likely that abstract speech got a huge boost from agriculture, which involves a lot of delayed gratification. Obviously, that happened much later than the shift that took place 50-60,000 years ago.
  • apokrisis
    7.7k
    According to Chris Stringer, there are multiple theories about what happened to Homo sapiens 60,000 years ago.frank

    There are always multiple theories when it comes to a critical issue like this. How else is any self-respecting academic going to forge a career?

    I speak from the point of view of having studied the many different interpretations folk have made of the evidence. I was once even approached to write a paleoanthropology textbook. I kid you not.

    I personally think it's likely that abstract speech got a huge boost from agriculture, which involves a lot of delayed gratification. Obviously, that happened much later than the shift that took place 50-60,000 years ago.frank

    This would be the upgrade in semiosis that resulted from literacy and numeracy. The shift from an oral culture to one based on the permanence of inscriptions and the abstractions of counting and measuring. The new idea of ownership and property.
  • frank
    18.2k
    I speak from the point of view of having studied the many different interpretations folk have made of the evidence.apokrisis

    What evidence convinced you that speech caused the change?
  • Metaphysician Undercover
    14.3k
    This would be the upgrade in semiosis that resulted from literacy and numeracy. The shift from an oral culture to one based on the permanence of inscriptions and the abstractions of counting and measuring. The new idea of ownership and property.apokrisis

    I think that originally written language evolved completely separately from spoken language, the former being for the purpose of a memory aid, the latter for the purpose of communication.
  • apokrisis
    7.7k
    What evidence convinced you that speech caused the change?frank

    The literature on this is massive. So there is no one fact. But what I would say is that genetics has made a big difference in clarifying the paleoanthropological record. And much more attention has been paid to how the lives of sapiens suddenly appear much more "narrated". Plus an emphasis on the importance of reaching a critical mass of population so as to result in a division of labour and explosion in productivity.

    So the kind of thing Stringer in fact mentions. And the argument is not that Neanderthal had zero speech. It is that sapiens developed a foraging cultural package based on a new narratising habit. A new way of relating to the world through language.

    Neanderthals were doing perfectly well as a thin population of big game hunters in Europe's mammoth steppes. Puny sapiens was growing up as a shoreline scrounger moving along every coastline, which took it from Southern Africa to the rest of the world. A lifestyle based on the ability to be a social networker having to do a bit of everything to get by.

    Once sapiens broke into Europe with its megafauna stretching all the way to Asia, that was an entirely new niche it could take over. Neanderthals and sapiens might be reasonably equivalent in brain power and hunting ability. But they looked to be viewing the same European landscape through very different eyes. Neanderthals thought in family groups just existing. Sapiens thought in terms of tribal clans warring and sharing. A new political existence to make the best use of a new entropic bonanza. Big game that could produce a population density that became a matching cultural intensity.

    A group of 10 Neanderthals narrating their world vs a connected network of thousands of sapiens narrating the same foraging landscape was the big difference. Neanderthals perhaps had some level of grammatical speech. But sapiens had the critical mass to rapidly evolve exactly the kind of grammar best suited to exploiting the massive opportunity that presented itself, especially as the actual ice age steppes gave way to an era of patchy woodland and prey of all sizes.

    I'll post some of the notes I was making on this issue to get back up to date with the latest literature. You can see that I was specifically focused on the biosemiotic argument as a better way to understand what made the critical difference.

    So it was speech. Or speech with a certain grammatical structure. Or speech that was indeed the tool organising a new general mindset. The new general mindset that could seize a new general entropic opportunity and so start to scale in the explosive fashion that has now become so familiar.

    The story of the human semiotic transition is subtle. Sure all hominids could make expressive social noises as a proto-speech. Even chimps can grunt and gesture in meaningful fashion that directs attention and coordinates social interactions. A hand can be held out propped by the other hand to beg in a symbolising fashion.
    But the way to think about the great difference that the abstracting power of a fully syntactical language made to the mentality of Homo sapiens lies in the psychological shift from band to tribe.
    The evidence of how Erectus, Neanderthals and Denisovans lived is that they were small family bands that hunted and foraged. They had that same social outlook of apes in general as they lacked the tool to structure their social lives more complexly.
    But proper speech was a literal phase transition. Homo sap could look across the same foraging landscape and read it as a history and genealogy. The land was alive with social meaning and ancestral structure. The tribal mentality so famous in any anthropological study.
    It is hard to imagine ourselves restricted to just the mindset of a band when we have only experienced life as tribal. However this is the way to understand the essence of the great transformation in pragmatic terms.
    Theories of the evolution of the human mind are bogged down by the very Enlightenment-centric view of what it is to be human. Rationality triumphing over the irrational. So we look for evidence of self-conscious human intelligence in the tool kits of the paleo-anthropological record. Reason seems already fully formed if Homo could hunt in bands and cook its food even from a million years ago, all without a modern vocal tract and with a brain half the size.
    But if we want to get at the real difference, it is that peculiar tribal mindset that us humans could have because speech allowed our world to seem itself a lived extension of our own selves. Every creek or hillock came with a story that was "about us" as the people of this place. We had our enemies and friends in those other bands we might expect to encounter. We could know whether to expect a pitch battle or a peace-making trading ritual.
    The essentials of being civilised in the Enlightenment sense were all there, but as a magic of animism cast over the forager's world. The landscape itself was alive in every respect through our invention of a habit of socialising narration. We talked the terrain to life and lived within the structure – the Umwelt – that this created for us. Nothing we could see didn't come freighted with a tribal meaning.
    At that point – around 40,000 years ago, after sapiens as an "out of Africa coastal foraging package" had made its way up through the Levant – the Neanderthals and Denisovans stood no chance. Already small in number, they melted into history in a few thousand years.
    The animistic mentality was the Rubicon that Homo sapiens crossed. A vocal tract, and the articulate speech that this enabled, were the steps that sparked the ultimate psycho-social transformation.

    My grammar story would argue that this is what suddenly catapulted sapiens past Neanderthals in an explosive spread based on increased fertility rates. A population pressure was created by a new entropic bonanza – perhaps a switch to hunting the big game with better clothing and tighter tribal order?
    It makes complete sense that Neanderthals had grammatical speech at some level. And that articulation for expressive social noises was the original foundation. But it is the coupling of the Mammoth Steppe bonanza with population density that saw sapiens suddenly explode with a new displaced mentality. The start of a rationalising mindset. Grammar evolved new tensed structure that turned it into a general purpose conversational and narrative tool. A new level of social in and out group morality could get established.
    Neanderthals were already feasting at the Mammoth Steppe table. But sapiens came roaring past with the new organisation that boosted fertility and survival. As Wrangham says, domestication pushed out old alpha male order and brought in the tribal collective order submitting to the displaced and abstracted group identity.

    Neanderthal population in Eurasia never exploded like sapiens, which is evidence for a big grammar and entropy bonanza step. Neanderthals look like deep freeze erectus hunting. Sapiens is Qesem Cave cooking and foraging. So a fork that appears to start by 400kya in the Levant and so before full speech. This argues for an out of Africa package that then becomes proper speech with scalefree takeoff across mammoth steppe.

    Neanderthals look to be a reaction to the Mammoth Steppe – chasing the big game north. Denisovans heading to the tropics. Then sapiens comes out of the African context and takes over the game with the grammatical language/tribal Umwelt package.

    Hunting big game for fat seems the more basic story for erectus, and again for Neanderthal. Then it is only sapiens that exploited the small game, cooked carb, firewood niche. This became the better option only after the megafauna had been eaten and so landscapes carved into scalefree foraging, trading and fighting networks – connected by linguistic culture, behavioural diversity and boundary policing – became the new thing.
    Sapiens displaced Neanderthals fast with this new intelligent landscape approach where fighting each other made fighting Neanderthals child's play. It is not just about being smarter as individuals or even as domesticated groups. It was the ability to colonise the whole landscape, rather than simply trail behind roaming herds, that would rapidly squeeze out Neanderthals. This was anthropomorphism of the landscape at work. A biofilm becoming a memofilm. A lived space becoming a narrated space.
    The primary transition or rubicon moment was the population explosion and cultural intensification that was sapiens reworking their worlds as a semiotic umwelt. Covered by trails, legends, social histories, distant relations and shared languages.

    Foraging economy creates the social complexity of sapiens compared to Neanderthals….
    Erectus didn't cook, and for Neanderthals cooking was optional, so sapiens built a lifestyle around hunting-and-gathering sociality. Steve Kuhn and Mary Stiner argue that sexual division of labour in foraging was a sapiens step and not an erectus step, says Ridley p64.
    It was how they could beat Neanderthals through breeding vigour, and perhaps I would say by being better set up for attritional war if men had the specialist role and Neanderthals had to stick defensively together.
    Glynn Isaac had argued the opposite since 1978. But Kuhn/Stiner point to the lack of gathered food in Neanderthal camp debris - no grindstones or leftover nuts and roots. No food processing of that kind. And also the lack of intricate clothes and shelters like Inuit women have time to produce if men are away hunting. So women must have been active with the men unless child protection was a full-time job.
    So this would fit the language story of cooking needing protospeech and division of labour needing fully symbolic grammar?
    It seems cooking must have started things in terms of spreading out and coming together. But sapiens took a strong next step that also involved trading and warfare. Sex division would be part of that, and so an ability to trade off carb gathering with game hunting. This African balance then encountered the Neanderthals on the Mammoth Steppe and quickly ran them over. Carbs took a back seat in the new balance as big herd hunting created the explosive spread of sapiens out of Africa.

    Note the difference that the size of the language community would have made. This alone could explain how sapiens crossed the rubicon to some new rationalising grammar. It was just a critical mass deal….
    Dediu - Language seems to behave in a different manner, due to its design properties which require "parity" (similarity of systems) between communicators. Large populations erode complexity (because of the need to communicate across groups), and small ones allow it, allowing clutter to accumulate.
    [Or rather, small ones don't create general ways of simplifying speech by adding new grammar features like tenses, or doing away with local special cases, like Pirahã's distinction of whether a fact is known, believed, or guessed.]
    Consequently, highly complex languages (with elaborate morphology and irregularity) tend to be spoken by small groups (Lupyan and Dale, 2010). From this, we might conjecture that Neandertals had the features typical of languages spoken in small traditional societies today: sizable phoneme inventories, complex morphosyntax, high degrees of irregularity, and vocabularies in the tens of thousands.
    We can also be fairly sure, due to the relatively isolated nature of the groups, that there were many distinct languages. We could even hazard the prediction on the basis of the genes they carried, that the chances are they spoke tone languages (Dediu and Ladd, 2007).
    [So this fits my new model story where sapiens hit the Mammoth Steppes and suddenly had the population density to create a critical mass grammar transition. A great simplification – or rather a structuring series of general grammatical divisions, such as tense – made for a more logical and powerful tool. Displacement would have been the key feature of the new grammar as it would have created the platform for a social level of organismic world modelling.]
  • apokrisis
    7.7k
    I think that originally written language evolved completely separately from spoken language, the former being for the purpose of a memory aid, the latter for the purpose of communication.Metaphysician Undercover

    What are you talking about? Writing came before speech, or something? Hands evolved before tongues? What's your hypothesis?
  • Pierre-Normand
    2.8k
    What are you talking about? Writing came before speech, or something? Hands evolved before tongues? What's your hypothesis?apokrisis

    This Finnish ethologist has an interesting theory.
  • Baden
    16.7k


    Good stuff. Thanks. :up:
  • Pierre-Normand
    2.8k
    But then, in theory we could provide this. Not a living body, but a body that can sense the environment in a way similar to the way we do.

    If we did this, created an actual android powered by a LLM and complementary AI systems, would the inside of the chatbot "light up" in sudden awareness? Maybe... but maybe not. It would be very easy to suppose that we would have succeeded in creating a philosophical zombie, and many would do so. They might be right or wrong, but their correctness would be a factual matter, not one of logical necessity. Nothing says that such a machine would be necessarily conscious, any more than that our current disembodied chatbots are necessarily unconscious, free of any qualitative content.
    hypericin

    The sensorimotor + interoceptive/endocrine integrations I mentioned weren’t meant as logical entailments from physiology to mentality, but as constitutive enablements: they unpack the background assumptions built into our mentalistic concepts. The point (Wittgensteinian in spirit) I wanted to make is that the grammar of these concepts (i.e. how we learn, apply, and justify them in lived contexts) presupposes a certain style of organismic regulation and world-involvement. Phenomenology makes that grammar explicit. Cognitive science explains how such forms of life are implemented.

    Take pain, for instance. In our scheme, pain isn't just a private tingle. It is essentially aversive and action-guiding. It recruits avoidance, care, protection, complaint, and solicits help. So imagine an android that matches us behaviorally and linguistically in contexts of injury, yet seeks out what it feels as pain and avoids what it feels as pleasure. If it truly seeks what it feels as pain (while still naming "pleasure" what it seeks), then by our criteria it is no longer what we mean by pain. This is why philosophical talk of inverted pain/pleasure qualia, just like talk of inverted red/green qualia (although the mistake is more subtle in this case), lapses into conceptual confusion. It misrepresents the roles that make pain pain in our conceptual scheme.

    So my claim isn't that mentality follows by logic from a list of physiological facts. It is rather that mentality is intelligible only within a pattern of organismic regulation, practical coping, and social answerability. Provide a non-living "body" that genuinely instantiates those roles (including interoception-like valuation of bodily states), and the relevant mental predicates find their proper intelligible uses. Postulate a p-zombie that duplicates all that yet "has none of it inside," and you’ve stopped describing and started negating the criteria by which our mentalistic words have their use.
  • Harry Hindu
    5.8k
    I don't think a meaning is use theory references understanding.Hanover
    Exactly. It merely "uses" the scribble "understanding" in certain patterns with other scribbles. That is the issue with meaning-is-use - the scribbles don't refer to anything.

    Claiming AI does not actually understand the words it is using is a prime example of what it would be like if meaning were just use. So it would appear that a contradiction exists where the "meaning-is-use" advocates argue that AI does not understand English because it only understands syntax and not semantics. Wouldn't that put them in the same boat as AI?
  • Hanover
    14.6k
    Exactly. It merely "uses" the scribble "understanding" in certain patterns with other scribbles. That is the issue with meaning-is-use - the scribbles don't refer to anything.Harry Hindu

    That might be an overstatement. Words can refer to things. "Apple" can in fact mean the very apple we know, but that's only if that's how it's used. My push back on "understanding" was that I don't think it necessary, for the word to be used in a consistent manner within the game, that it be understood.

    The Wittgensteinian approach (and I could be very wrong here, so please anyone chime in) does not suggest there is not an internally recognized understanding of the word when the user uses it, but it only suggests that whatever that is, is beyond what can be addressed in language. That would mean that whatever "understanding" is amounts to our public criteria for it.
  • Harry Hindu
    5.8k
    I think a snappy way of putting it is that when you turn on your TV, an image appears. But do you believe the TV is seeing anything as a result?

    LLMs are just displays that generate images humans can find meaningful. Nothing more.

    Biosemiosis is a theory of meaningfulness. And it boils down to systems that can exist in their worlds as they become models of that world. This modelling has to start at the level of organising the chemistry that builds some kind of self sustaining metabolic structure. An organism devoted to the business of being alive.
    apokrisis
    What does it mean to be "meaningful" if not having some causal relation to the past or future? When an image does not appear on the screen, doesn't that mean that the screen may be broken? Doesn't that mean that for images to appear on the screen the screen needs to be repaired?

    How does irrelevancy play into your theory of meaning? Is irrelevant information meaningful? Is it meaningful in certain situations and not others? If so, what makes some bit of information meaningful some times and not others? Is meaning intrinsic in these causal relations or are they projected by us?

    I think that information/meaning exists everywhere causes leave effects, and it is our present goal in our mind that makes some information/meaning relevant or not.

    I think that life evolved to use meaning (causal relations) because it provides a survival benefit to represent mental states with other states in the world - as when the observed behavior of deer means that a predator is nearby - that is, other events being the case when some other case is observed (because they are causally connected). This is how some symbiotic relationships between different species evolve, as one species learns the body language of another and uses it as an extension of its own senses to alert it to danger. Language use is just an exaggerated version of this, given our exaggerated brain size.
  • Harry Hindu
    5.8k
    That might be an overstatement. Words can refer to things. "Apple" can in fact mean the very apple we know, but that's only if that's how it's used. My push back on "understanding" was that I don't think it necessary, for the word to be used in a consistent manner within the game, that it be understood.Hanover
    Isn't that the point, though? If the scribble, "apple" were to be used in a way that does not refer to the very apple we know, then what is the speaker/writer, talking/writing about? What would be the point in communicating something that we cannot share in some way? Isn't aboutness an integral part of intentionality? Are you saying that in instances where some scribble is not used to refer to a shared event or object that there is no intentionality? Isn't that what they are saying is missing with AI when it uses words - intentionality (aboutness)?

    The Wittgensteinian approach (and I could be very wrong here, so please anyone chime in) does not suggest there is not an internally recognized understanding of the word when the user uses it, but it only suggests that whatever that is, is beyond what can be addressed in language. That would mean that whatever "understanding" is amounts to our public criteria for it.Hanover
    It would seem to me that in order for one to understand the word, "cat" they must have an internal representation of the relationship between the scribble, "cat" and an image of the animal, cat. If they never used the scribble, "cat" but retained this mental relationship between the scribble and the animal, could it not be said they understand the word, "cat" even if they never used it themselves but have watched others use it to refer to the animal? I don't need to necessarily use the words to understand their use.

    I don't need to have a white tail to use to understand that when a white-tailed deer raises its tail and runs it means that a predator is nearby.
  • Jamal
    11.1k
    so please anyone chime inHanover

    internally recognized understandingHanover

    Understanding is no more internal than eating. It depends on some biological processes that happen under the skin, among other things that don't, but this doesn't license your appeals to the internal that you make with reference to perception and meaning. Synaptic transmission is no more meaningful than peristalsis.

    I came, I chimed, I conquered.
  • Harry Hindu
    5.8k
    So basically, you're a p-zombie.

    I came, I chimed, I conquered.Jamal
    Did you? Because it seems that for you to be able to say that you did (and for it to be true), you must actually have done it, and there must be some internal representation between the scribbles, "I came, I chimed, I conquered." and the act of someone coming, chiming in and conquering the discussion - which is not just more scribbles, unless you are an AI.
  • Hanover
    14.6k
    Understanding is no more internal than eating. It depends on some biological processes that happen under the skin, among other things that don't, but this doesn't license your appeals to the internal that you make with reference to perception and meaning. Synaptic transmission is no more meaningful than peristalsis.

    I came, I chimed, I conquered.
    Jamal

    Perhaps this is just a case of omphaloskepsis, or perhaps I just used that word in response to your use of the word peristalsis, so that I could use a more obscure word than you.

    Do you think my post missed a subtlety or was incorrect in a way that yours clarified? I'm really trying to understand it and Wittgenstein's writing style isn't always helpfully clear.
    It would seem to me that in order for one to understand the word, "cat" they must have an internal representation of the relationship between the scribble, "cat" and an image of the animal, cat. If they never used the scribble, "cat" but retained this mental relationship between the scribble and the animal, could it not be said they understand the word, "cat" even if they never used it themselves but have watched others use it to refer to the animal? I don't need to necessarily use the words to understand their use.Harry Hindu

    I'm not disputing that you learned some words through watching an interaction with its referent. What I am claiming is that you didn't learn the word "freedom," "aboutness," "the [non-existent] present king of France," or "omphaloskepsis" by having had a referent pointed out to you. But, what Wittgenstein is saying (as I don't want to say "I am saying" because I'm not fully adopting anything right now) is that you always have public usage available to determine meaning, and if you don't, you don't have meaning. When you point to the cat, it is not the cat, nor the pointing, that defines the cat, but it is your ability to use that term in a consistent manner within the language you are using. To the extent the pointing is a way to communicate about cats, then that is a move within a practice (meaning it's its use). But understand, this says nothing of the cat in some metaphysical way, not because there isn't such a thing, but because the theory specifically avoids such conversation as impossible.
  • Jamal
    11.1k
    Do you think my post missed a subtlety or was incorrect in a way that yours clarified? I'm really trying to understand it and Wittgenstein's writing style isn't always helpfully clear.Hanover

    Instead of saying...

    The Wittgensteinian approach (and I could be very wrong here, so please anyone chime in) does not suggest there is not an internally recognized understanding of the word when the user uses itHanover

    It would've been better to say that Wittgenstein is not saying you can't understand a word differently from everyone else. Wittgenstein isn't denying that words mean different things to different people. We needn't make this "internal", is all I was saying. And that inspired me to riff on the notion of the internal.

    Perhaps it was a minor criticism.
  • Harry Hindu
    5.8k
    I'm not disputing that you learned some words through watching an interaction with its referent. What I am claiming is that you didn't learn the word "freedom," "aboutness," "the [non-existent] present king of France," or "omphaloskepsis" by having had a referent pointed out to you.Hanover
    Sure you did, or else there is no aboutness (intentionality) to the scribbles.

    The terms you provided are simply more complex than other terms in which pointing is sufficient. Just as a video provides more information than a picture, "freedom" and "aboutness" require more than just pointing to an instance in time. They are acts and relationships over time.

    "the [non-existent] present king of France," is a referent to an idea in your head. What is your visual experience of "the [non-existent] present king of France," - a visual of scribbles or a man wearing a crown?

    Is "omphaloskepsis" a string of scribbles, or does it refer to some kind of mental process that is the manipulation of sensory data of which scribbles are part of, not the entirety of?

    Keep in mind that it logically follows that if there are no semantics to the terms you provided - only syntax - then the output of AI when discussing these terms is just as valid as any human's. So AI knows what it is talking about when it comes to "freedom" and "aboutness", but not when it comes to "cats", "cars" and "mountains"?

    But, what Wittgenstein is saying (as I don't want to say "I am saying" because I'm not fully adopting anything right now) is that you always have public usage available to determine meaning, and if you don't, you don't have meaning. When you point to the cat, it is not the cat, nor the pointing, that defines the cat, but it is your ability to use that term in a consistent manner within the language you are using. To the extent the pointing is a way to communicate about cats, then that is a move within a practice (meaning it's its use).Hanover
    "Public usage" as in using scribbles to point to objects and events in the world. If you are not pointing to anything with your scribbles that do not ultimately resolve down to things that are not scribbles (as in the case of "freedom" and "aboutness"), then it no longer qualifies as "public usage". It is "private usage".

    But understand, this says nothing of the cat in some metaphysical way, not because there isn't such a thing, but because the theory specifically avoids such conversation as impossible.Hanover
    To speak of the cat in a metaphysical way is to confuse the map with the territory. Science updates your map with the relevant information about the cat. Anything else is just conjecture (metaphysics) with no evidence (no referent).