• wonderer1
    2.2k
    Yeah. I mean what can one say? You've reminded me of being back in the lab where we slowed down bird calls so as to discover the structure that is just too rapid for a human ear to decode. And similar demonstrations of human speech slowed down to show why computer speech comprehension stumbled on the syllabic slurring that humans don't even know they are doing.

    Do you know anything about any of this?
    apokrisis

    Sure, I posted something along similar lines in the shoutbox a year ago:

    Psychologists solve mystery of songbird learning by taking into account the higher flicker-fusion rate of birds.wonderer1

    Anyway, I see from all your posturing that your ego is still as fragile as it was before you took your sabbatical from the forum. That's unfortunate.
  • Lionino
    2.7k
    You could start by Googling Everett’s G1/G2/G3 classification of grammar complexity if you are truly interested.apokrisis

    I did, I didn't find the phrase "fully modern" anywhere. Are you referring to G3 by that?

    Or Luuk and Luuk (2014) "The evolution of syntax: signs, concatenation and embedding" which argues like Everett that word chains become recursive.apokrisis

    Or you mean a language that is recursive?

    I did find this though:

    Those kinds of grammars are found commonly in the world’s languages, but you can express anything from a G3 grammar in a G1 grammar; mathematically they’re all of equal power. So, once you have symbols and a G1 grammar you have language, full-blown human language. We find those today. Was Homo erectus capable of that? Yes, they were. Did they show the kinds of communication, correction, cooperation, planning that would have required human language? Yes, they did.https://singjupost.com/how-language-began-dan-everett-full-transcript/?singlepage=1

    It seems kinda contrary to what you are saying.
  • apokrisis
    7.3k
    It seems kinda contrary to what you are saying.Lionino

    Perhaps if you haven't properly delved into what I said.
  • Jaded Scholar
    40
    But in seriousness, apokrisis's arguments kind of rubbed me the wrong way from the outset, because they contained a kind of derision for the notion of homo sapiens not being superior to non-sapiens — Jaded Scholar

    So you project some woke position on to a factual debate? Sounds legit. I shouldn't be offended by your wild presumptions about who I am and what I think should I[?]
    apokrisis

    Good grief. I literally stated - in the very sentence you quoted here! - that my issue was with your naive assertion of the standard dominant narrative. I made no presumptions about who you are, or what you "should" think. Even in the second quote, I put in a very qualified "if you", and did not label you as someone who blindly accepts what he's told to believe, or even necessarily accepts the default narrative of homo sapiens being intrinsically superior because [insert whatever reason fits], and is willing to ignore any and all evidence to the contrary. Though I guess you've very clearly confirmed that that's exactly what you have decided you are.

    It's very telling that you ignored every single piece of evidence I cited, and every actual argument I made, and focussed on dismissing my argument based on how my intro and outro made you feel.

    To indulge you one last time, I'll refocus on the only bits you read:
    If a certain narrative remains dominant within a culture for centuries, despite all of the logic supporting it being demolished over and over, its dominance only supported by whatever new arguments can immediately refurbish its lofty position until they too are empirically disproven within a decade or so, then I think anyone who actually cares what's real and what's an illusion would be happy to have it pointed out that the dominant status of this conclusion is apparently independent of whatever logic currently "supports" it, and therefore is probably complete bullshit.

    I never really thought of this idea as being associated with "woke"ness (maybe because the term post-dates Foucault), but now that you mention it, I suppose that, in a literal sense, the term does kind of apply.

    So yeah, please feel free to disregard my comments, go back to sleep, and I hope you enjoy living the rest of your life in your comfortable, unchallenging, dream world.
  • Jaded Scholar
    40
    Thank you. My ego is appeased.Lionino

    Then I'm glad I accomplished something here, at least. :rofl:
  • apokrisis
    7.3k
    So yeah, please feel free to disregard my comments, go back to sleep, and I hope you enjoy living the rest of your life in your comfortable, unchallenging, dream world.Jaded Scholar

    Again, where do you think you get this right to insult me without making any attempt to engage with me?
  • Jaded Scholar
    40
    Again, where do you think you get this right to insult me without making any attempt to engage with me?apokrisis

    Oh, wow, I'm so sorry, now that you mention it, I really wish that I'd actually started our dialogue with something like a well-researched 1200-word comment where I genuinely tried to engage with you. If only I'd...
    ...
    ...hang on a sec. :chin:
  • apokrisis
    7.3k
    like a well-researched 1200-word commentJaded Scholar

    :lol:
  • Jaded Scholar
    40

    You're right, that "well-researched" bit was silly of me to qualify. It's very clear how negatively you react to actual evidence, and citing published academic research was probably very rude within ... whatever worldview you're trying to conform to.
  • apokrisis
    7.3k
    Getting things back on track, you will remember my semiotic point was how language had to evolve to the point (and I mean culturally evolve more than neuroanatomically) where it could sustain a new kind of social Umwelt – the mind that sees its landscape as its world. Every creek, every hillock, freighted with cultural meaning.

    Here is a good article that touches on this in terms of what you argued about the San and click languages being somehow a sign of unbroken antiquity.

    Human History Written in Stone and Blood – American Scientist

    You can see that there was a sharp transition around 70kya. Blombos cave marks early evidence of symbolic culture.

    But if this was neurological, why was it so swiftly followed by a collapse back to cultural simplicity? Brains didn't devolve. So it had to be a social structure collapse.

    The coastal package produced a population boom and so powered a growth in tribal complexity. You had a widespread trading economy emerge in the manner I described. A complexity of language and thought that organised the landscape into an extended network of human contact.

    But perhaps the climate changed. Social interactions frayed and populations shrank back to isolated gene pools. Southern Africa was pushed into a lower level of hierarchical development. It only came back again when the "out of Africa" mob returned with the level of linguistic and cultural sophistication to fire up things once more.

    So the anthropologist has to speculate on the available evidence. But this is the kind of considered story that emerges. The real Rubicon is the way language transforms the experienced world into a shared fabric of social relations.

    If you don't have the population, you don't have the interactions that produce the structural complexity. And language is going to be correspondingly simplified when it is no longer useful in everyday life.

    So we have an evolutionary account that has to include reaching a critical mass in terms of populations and the intensity of social interactions. The crucial shift from living as a band to living as a tribe as I said.

    Neuroanatomy isn't even under a selective pressure for a tribal mentality given that even Neanderthals struggled to exist as more than very thinly spread bands of about 10. Grammatical speech and the symbolic thought it enables are a precursor step – a good reason for why sub-Saharan Africa started to see this sapiens offshoot gathering some steam from 200kya.

    But then comes the population density to properly spark the human transition to being a socially constructed animal. We mixed in numbers and so formed hierarchical networks across "owned" landscapes. We fought and traded so needed kinship structures and genealogy stories. Chieftains and agreements. Raiding parties. The trading of goods, wives and slaves.

    Becoming political and economic creatures created the population growth that fed back into even more intense political and economic activity.

    Anyway, read the article as it puts the click languages into the larger context of what was going on in the world. Click phonemes may seem all cool and weird, but they don't really tell us anything about archaic language. They are not some primitive linguistic feature as far as I can see.
  • Jaded Scholar
    40

    Nah, mate. I'm finished. You can enjoy your anthropological fan-fiction in peace. :up:
  • Gnomon
    3.8k
    Bits don't really work well as "fundamental building blocks," because they have to be defined in terms of some sort of relation, some potential for measurement to vary. IT does seem to work quite well with a process metaphysics though, e.g. pancomputationalism. But what about with semiotics? I have had a tough time figuring out this one.Count Timothy von Icarus
    Shannon's "bits" were basic to his theory, but can't be absolutely fundamental, because they are composed of two elements (1 & 0) plus the relationship between them. So, I think the metaphysical concept of Relation*1 (relativity) may be more essential, in that it is neither a composite material object nor a member of a mathematical series : the number line, of which 1 & 0 are the end points, the ideal brackets that enclose reality, not real in themselves. The only alternative to a Relation is an Absolute.

    I'm not well-versed in academic Semiotics*1, but I don't think any particular sign, or even the general concept of sign, is fundamental, because signs/symbols always point to something else. You might say that Semiotics is also about Relations. And knowledge of relationships is the essence of Information & Reason & Logic : all metaphysical. As the link below*2 reveals, it's hard to even define Relation without referring to multiple non-fundamental entities, or to self-reference. :smile:


    *1. Information general, Semiotics specific
    Information is a vague and elusive concept, whereas the technological concepts are relatively easy to grasp. Semiotics solves this problem by using the concept of a sign as a starting point for a rigorous treatment for some rather woolly notions surrounding the concept of information.
    https://ris.utwente.nl/ws/portalfiles/portal/5383733/101.pdf

    *2. What is a relation in metaphysics?
    Relations are ways in which several entities stand to each other. They usually connect distinct entities but some associate an entity with itself. The adicity of a relation is the number of entities it connects. The direction of a relation is the order in which the elements are related to each other.
    https://en.wikipedia.org/wiki/Relation_(philosophy)
  • Apustimelogist
    584
    they are composed of two elementsGnomon

    You can use as many "elements" as you want!
  • Gnomon
    3.8k
    they are composed of two elements — Gnomon
    You can use as many "elements" as you want!
    Apustimelogist
    I was referring to electronic computer processing of information. In principle the registers could use any voltage, but in practice the voltage is ideally all or nothing --- 1 or 0, 100% or Zero, On or Off, 3.3V or 0V --- to minimize errors in communication. In any case, it's the logical relationship between elements (1:0 or 1/0) that is interpreted as information. You could say that the rounded-off 1s and 0s are signs, with lots of space in between, that are not likely to be confused with each other, unlike 1.032 and 1.023.

    The "elements" of Shannon information are typically limited to 1s & 0s. That's why it's called Binary Code. In Nature, including the human brain, information processing may not be that precise, hence non-binary --- just like some Hollywood celebrities. :grin:
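    Gnomon's point about all-or-nothing voltage levels can be sketched numerically. A toy Python simulation (my own illustration, not from the thread; the 3.3 V swing and 0.3 V noise figure are assumed for the example) of a receiver thresholding a noisy line at the midpoint:

```python
import random

# Toy model: a transmitter sends 0 V or 3.3 V; the receiver thresholds
# at the midpoint. Widely separated levels leave a large noise margin.
def read_bit(voltage, threshold=1.65):
    return 1 if voltage >= threshold else 0

random.seed(0)
errors = 0
trials = 10_000
for _ in range(trials):
    sent = random.choice([0, 1])
    ideal = 3.3 if sent else 0.0
    noisy = ideal + random.gauss(0, 0.3)  # assumed additive Gaussian noise
    if read_bit(noisy) != sent:
        errors += 1

print(errors)  # with a 1.65 V margin against 0.3 V noise, errors are vanishingly rare
```

    Had the two levels been 1.032 and 1.023 instead, the same noise would flip readings constantly; the all-or-nothing convention is what keeps the 1/0 relation legible.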


  • Metaphysician Undercover
    13.1k
    It is relevant to the OP in that Everett follows Peirce in arguing for an evolution of language where indexes led to icons, and icons moved from signs that looked like the referents, to symbols where the relation was arbitrary.apokrisis

    I think you ought to notice that "signs that looked like the referents" indicates written language. And written language is viewed, while spoken language is heard. The two are very different, and have very different uses, so it is quite reasonable to consider that they evolved independently. At some time, in the relatively recent past, the two began to be united, when spoken words were given written symbols, and vice versa. This unification may have produced the "explosion" you refer to, but it in no way signifies the beginning of language use.

    In an analysis of many different spoken words, Plato shows that this distinction, between spoken words that sound like the referent, and spoken words which have an arbitrary relation, is not a useful distinction. The ones which appear to have an arbitrary relation may be just so old that the word has evolved so as not to reveal its origins in some sort of similarity. So unless the history of the symbol is clearly known, the distinction may be completely misleading.
  • wonderer1
    2.2k
    I think you ought to notice that "signs that looked like the referents" indicates written language. And written language is viewed, while spoken language is heard. The two are very different, and have very different uses, so it is quite reasonable to consider that they evolved independently.Metaphysician Undercover

    I'm not sure what is meant by "evolved independently" when we are talking about things evolving in one species.

    However, having a greater number of neurons available, to associate in more complex ways, things going on in visual cortex and things going on in auditory cortex, might have been rather important.
  • Apustimelogist
    584
    The "elements" of Shannon information are typically limited to 1s & 0sGnomon

    I'm just saying that any number of elements is valid, since it's just the logarithmic base, and you can choose any base you want without affecting the properties of bits (or trits or nats).
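    The point can be shown concretely: the same entropy computed in different logarithmic bases differs only by a constant conversion factor. A quick Python sketch (my own illustration, not from the thread):

```python
import math

def entropy(probs, base=2):
    """Shannon entropy of a distribution, in the given logarithmic base."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

p = [0.5, 0.25, 0.25]

h_bits = entropy(p, base=2)       # bits
h_trits = entropy(p, base=3)      # trits
h_nats = entropy(p, base=math.e)  # nats

# The three values differ only by the constant factor log_b(2):
print(h_bits)                     # ~1.5 bits
print(h_trits * math.log2(3))     # same quantity, converted back to bits
print(h_nats / math.log(2))       # same again
```

    So "two elements" is just the conventional choice of unit, not a property of the information itself.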
  • Metaphysician Undercover
    13.1k
    I'm not sure what is meant by "evolved independently" when we are talking about things evolving in one species.

    However, having a greater number of neurons available, to associate in more complex ways, things going on in visual cortex and things going on in auditory cortex, might have been rather important.
    wonderer1

    The point is that spoken language and written language have fundamentally different purposes. The principal use for spoken language is communication, so it would have evolved in ways to facilitate that end. The principal use for written language is to serve as a memory aid, so it would develop to facilitate that end. Therefore written language is fundamentally personal, rather than communicative. Consider Wittgenstein's private language example. Being personal, there may be aspects of written language which are designed to make it intentionally non-communicative, as secret code is, for example. In any case, you ought to be able to see how the two are fundamentally different, as the principal purpose of one is to communicate with others, and the principal purpose of the other is to communicate with oneself at a later time.

    This means that it is very probable that the essential aspects of one are not essential aspects of the other, so the development of each of the two needs to be considered separately. History and archeology may be misleading to us because what persists from ancient times, and is available to us now as evidence, is only the written material. So anthropology, to the extent that it relies on archeology, cannot provide an accurate history of the development of spoken language.
  • Treatid
    54
    I am not sure where you got that fromCount Timothy von Icarus

    https://youtu.be/DIL37Rkt4m0?si=8veEEon_DGHD7uNb&t=183

    With specific reference to human language.

    It seems to me one can dispense with theism, recognize that More is Different and that humans have more cortical neurons than any other species, and thereby have a basis for recognizing a uniqueness to humans.wonderer1

    At which point the question becomes how much difference qualifies as distinct vs mere degree of difference.

    Would you prefer to believe that Random Evolution "gave" some higher animals the "mechanism" of Reasoning?Gnomon

    I think that "reasoning" is a lot less special than many humans suppose.

    Perhaps more isnt so different after all.Joshs

    Yes - this is a significant part of where I'm coming from.

    Emergent

    Human reasoning isn't special. Recursive thought (thinking about thinking) is the same mechanism as all other thought.

    So - Everything is relationships. Neural networks are, physically and notionally, networks of relationships.

    I'd like to compare a single relationship to a brush stroke on a canvas.

    A single stroke of paint tends not to have much in the way of inherent meaning.

    Many strokes of paint form the portrait or landscape that we find compelling/moving/boring...

    The whole picture forms a shape that we find meaningful.

    A single relationship is largely meaningless. Many relationships with multiple loops of connection form a compelling shape.

    In a universe composed of relationships - there is little value in comparing individual relationships. Whether a relationship is the signifier or signified is irrelevant.

    The Mona Lisa isn't any one brush stroke.

    When we perceive the face of a smiling woman - it doesn't matter that the shape is composed of brush strokes, lit LEDs or atomic particles. What matters is that we recognise the shape.

    Patterns

    One of the neat features of patterns is that they are scale agnostic. Language can describe the patterns we see from sub-atomic up to the visible universe.

    Similarly, patterns are material agnostic. We can see a face in the moon and animals in clouds. Okay - those specific examples aren't terribly useful... but it does mean we can look at a bunch of RGB LEDs and perceive Arnold Schwarzenegger telling the hapless victim "I'll be back!".

    The downside (if you are invested in human thought being unique) is that learning mathematics and learning to walk are the same process.

    We can argue that the "attempt-fail-pain-retry"/"attempt-succeed-reward-reinforce" loop of mathematics is longer (more abstract) and humans are especially good at delayed reward/failure.

    's description of specialness is plausible when a small difference of degree can make the difference between success and failure at a given task.

    Clearly humans are special enough to have accomplished (*gestures broadly*) all this. But when running away from the bear, you just need to be faster than your companions.

    Humans probably are better at learning, retaining and applying patterns - but I see no reason to think that wolves don't have much the same appreciation of rolling hills as apokrisis describes for humans.

    No definitions

    The interpretant need not be an "interpreter." We could consider here how the non-living photoreceptors in a camera might fill the role of interpretant (or the distinction of virtual signs or intentions in the media).Count Timothy von Icarus

    One of the defining features of a relational universe is that perception is unique to each observer. In General Relativity, space appears time-like and time appears space-like under certain conditions.

    It isn't merely that semiotics can't define the signifier - the signifier changes according to the observer. It isn't that there is a fixed thing that we just happen not to be able to define - there are no fixed things.

    Truth is, literally, in the eye of the beholder.

    The Mona Lisa doesn't exist. Each person viewing the Mona Lisa experiences their relationships with the Mona Lisa.

    [accusative]
    You know that the two people looking at the Mona Lisa are seeing different things. And then you assume that there is one single correct perception that all the others are distortions of.
    [/accusative]

    The Mona Lisa is your perception of it. Your experiences are real.

    You, semiotics, mathematics and the majority of philosophers are trying to demote experience to a second class citizen subservient to some special reality that can't be directly observed, can't be described but gives rise to each individual's perception.

    Wrap up

    When you only have a hammer... H'mm wrong aphorism.

    When we can only describe (networks of) relationships; describing relationships is trivial. Describing anything else is impossible.

    Human thought is manipulation of relationships. We don't have anything else to manipulate.

    There is a possibility that there are processes outside the ability of language to describe. In which case - tough titty. Nothing we can do about that.

    We know that we can describe (but not define) relationships.

    Language is an aspect of the universe. Manipulating language is manipulating the universe.

    To understand the nature of language is to understand the nature of the universe.

    We can't define semiotics, or mathematics, or humanity.

    But we can manipulate the bejeezus out of relationships.

    Perception as pattern matching isn't new.

    However, we can go beyond it being a nice theory. A relationship based description of cognition is the only game in town.
  • Bodhy
    26
    To OP,


    Deely is an absolute gift and his philosophy is incredibly rich. I've met and talked with some of his former colleagues and friends at length.

    What Deely hoped for IMO is for a recognition that there were two major paths philosophy could have gone down, and Newtonianism and Cartesianism were only one. There was also the budding path John Poinsot was developing which was the sign-based semiotic philosophy, the one which easily naturalizes mind and doesn't plague philosophy with dualisms everywhere.

    Unfortunately, Poinsot was too late for his thought to take off, so it would have been lost to history if not for this reconstructive work.

    The information revolution is a promising start, although there is now this kind of schism between matter-information that is still too Cartesian/modernist. Shannon's information theory is a far cry from a full-orbed semiotic theory since it operationalizes information and meaning. There is still a way to go.

    I think eventually, the problems of relevance realization and framing will become so pressing that contemporary philosophy/science will need to take on board semiotic philosophy.
  • Lionino
    2.7k
    John PoinsotBodhy

    João Poinsot, Portuguese theologian. Interesting, first time hearing of him.

    What else can you say?
  • Count Timothy von Icarus
    2.8k


    :up:

    I've found Robert Sokolowski to be a great updater/rehabilitator of these sorts of ideas too, although he unfortunately doesn't delve into the semiotic side much, sticking more to phenomenology and an updating of Aristotle and St. Thomas.



    He's a very important figure in the development of semiotics. Nathan Lyons has a pretty interesting book called "Signs in the Dust" on him, sort of an updating. I thought the most interesting part is the final section on the application of semiotics to non-living things, but he has a good intro on him at the start.
  • Patterner
    984
    Only humans have languageJoshs
    I just started reading The Symbolic Species, by Terrence Deacon. Literally only the Preface so far. In it, he tells us about giving a talk about the brain to his son's elementary school.
    I was talking about brains and how they work, and how human brains are different, and how this difference is reflected in our unique and complex mode of communication: language. But when I explained that only humans communicate with language, I struck a dissonant chord.

    “But don’t other animals have their own languages?” one child asked.

    This gave me the opportunity to outline some of the ways that language is special: how speech is far more rapid and precise than any other communication behavior, how the underlying rules for constructing sentences are so complicated and curious that it’s hard to explain how they could ever be learned, and how no other form of animal communication has the logical structure and open-ended possibilities that all languages have. But this wasn’t enough to satisfy a mind raised on Walt Disney animal stories.

    “Do animals just have SIMPLE languages?” my questioner continued.

    “No, apparently not,” I explained. “Although other animals communicate with one another, at least within the same species, this communication resembles language only in a very superficial way—for example, using sounds—but none that I know of has the equivalents of such things as words, much less nouns, verbs, and sentences. Not even simple ones.”
    — Deacon
    I guess the rest of the book extensively expands on this.
  • Joshs
    5.7k
    This gave me the opportunity to outline some of the ways that language is special — Deacon

    Michael Tomasello argues that rather than there being something called language as a special phenomenon in itself differentiating humans from other animals, what separates humans from animals is a cognitive complexity that leads to the sophisticated social interaction making language possible.
  • Patterner
    984

    I believe Deacon would agree:
    ...language is not merely a mode of communication, it is also the outward expression of an unusual mode of thought—symbolic representation. Without symbolization the entire virtual world that I have described is out of reach: inconceivable. My extravagant claim to know what other species cannot know rests on evidence that symbolic thought does not come innately built in, but develops by internalizing the symbolic process that underlies language. So species that have not acquired the ability to communicate symbolically cannot have acquired the ability to think this way either. — Terrence Deacon
  • Joshs
    5.7k


    ↪Joshs
    I believe Deacon would agree:
    ...language is not merely a mode of communication, it is also the outward expression of an unusual mode of thought—symbolic representation. Without symbolization the entire virtual world that I have described is out of reach: inconceivable. My extravagant claim to know what other species cannot know rests on evidence that symbolic thought does not come innately built in, but develops by internalizing the symbolic process that underlies language. So species that have not acquired the ability to communicate symbolically cannot have acquired the ability to think this way either.
    — Terrence Deacon
    Patterner

    I wonder how Deacon would distinguish between human use of word concepts and, for instance, the way a dog responds to a sentence. If a symbol represents a meaning by integrating information from disparate sense modalities, what is a dog doing when it recognizes a word, or an object, on this same basis? And I wonder how he would respond to this recent article in the New York Times?

    “Language was long understood as a human-only affair. New research suggests that isn’t so.

    Can a mouse learn a new song? Such a question might seem whimsical. Though humans have lived alongside mice for at least 15,000 years, few of us have ever heard mice sing, because they do so in frequencies beyond the range detectable by human hearing. As pups, their high-pitched songs alert their mothers to their whereabouts; as adults, they sing in ultrasound to woo one another. For decades, researchers considered mouse songs instinctual, the fixed tunes of a windup music box, rather than the mutable expressions of individual minds.

    But no one had tested whether that was really true. In 2012, a team of neurobiologists at Duke University, led by Erich Jarvis, a neuroscientist who studies vocal learning, designed an experiment to find out. The team surgically deafened five mice and recorded their songs in a mouse-size sound studio, tricked out with infrared cameras and microphones. They then compared sonograms of the songs of deafened mice with those of hearing mice. If the mouse songs were innate, as long presumed, the surgical alteration would make no difference at all.

    Jarvis and his researchers slowed down the tempo and shifted the pitch of the recordings, so that they could hear the songs with their own ears. Those of the intact mice sounded “remarkably similar to some bird songs,” Jarvis wrote in a 2013 paper that described the experiment, with whistlelike syllables similar to those in the songs of canaries and the trills of dolphins. Not so the songs of the deafened mice: Deprived of auditory feedback, their songs became degraded, rendering them nearly unrecognizable. They sounded, the scientists noted, like “squawks and screams.” Not only did the tunes of a mouse depend on its ability to hear itself and others, but also, as the team found in another experiment, a male mouse could alter the pitch of its song to compete with other male mice for female attention.

    Inside these murine skills lay clues to a puzzle many have called “the hardest problem in science”: the origins of language. In humans, “vocal learning” is understood as a skill critical to spoken language. Researchers had already discovered the capacity for vocal learning in species other than humans, including in songbirds, hummingbirds, parrots, cetaceans such as dolphins and whales, pinnipeds such as seals, elephants and bats. But given the centuries-old idea that a deep chasm separated human language from animal communications, most scientists understood the vocal learning abilities of other species as unrelated to our own — as evolutionarily divergent as the wing of a bat is to that of a bee. The apparent absence of intermediate forms of language — say, a talking animal — left the question of how language evolved resistant to empirical inquiry.

    When the Duke researchers dissected the brains of the hearing and deafened mice, they found a rudimentary version of the neural circuitry that allows the forebrains of vocal learners such as humans and songbirds to directly control their vocal organs. Mice don’t seem to have the vocal flexibility of elephants; they cannot, like the 10-year-old female African elephant in Tsavo, Kenya, mimic the sound of trucks on the nearby Nairobi-Mombasa highway. Or the gift for mimicry of seals; an orphaned harbor seal at the New England Aquarium could utter English phrases in a perfect Maine accent (“Hoover, get over here,” he said. “Come on, come on!”).

    But the rudimentary skills of mice suggested that the language-critical capacity might exist on a continuum, much like a submerged land bridge might indicate that two now-isolated continents were once connected. In recent years, an array of findings have also revealed an expansive nonhuman soundscape, including: turtles that produce and respond to sounds to coordinate the timing of their birth from inside their eggs; coral larvae that can hear the sounds of healthy reefs; and plants that can detect the sound of running water and the munching of insect predators. Researchers have found intention and meaning in this cacophony, such as the purposeful use of different sounds to convey information. They’ve theorized that one of the most confounding aspects of language, its rules-based internal structure, emerged from social drives common across a range of species.

    With each discovery, the cognitive and moral divide between humanity and the rest of the animal world has eroded. For centuries, the linguistic utterances of Homo sapiens have been positioned as unique in nature, justifying our dominion over other species and shrouding the evolution of language in mystery. Now, experts in linguistics, biology and cognitive science suspect that components of language might be shared across species, illuminating the inner lives of animals in ways that could help stitch language into their evolutionary history — and our own.

    For hundreds of years, language marked “the true difference between man and beast,” as the philosopher René Descartes wrote in 1649. As recently as the end of the last century, archaeologists and anthropologists speculated that 40,000 to 50,000 years ago a “human revolution” fractured evolutionary history, creating an unbridgeable gap separating humanity’s cognitive and linguistic abilities from those of the rest of the animal world.

    Linguists and other experts reinforced this idea. In 1959, the M.I.T. linguist Noam Chomsky, then 30, wrote a blistering 33-page takedown of a book by the celebrated behaviorist B.F. Skinner, which argued that language was just a form of “verbal behavior,” as Skinner titled the book, accessible to any species given sufficient conditioning. One observer called it “perhaps the most devastating review ever written.” Between 1972 and 1990, there were more citations of Chomsky’s critique than of Skinner’s book, which bombed.

    The view of language as a uniquely human superpower, one that enabled Homo sapiens to write epic poetry and send astronauts to the moon, presumed some uniquely human biology to match. But attempts to find those special biological mechanisms — whether physiological, neurological, genetic — that make language possible have all come up short.

    One high-profile example came in 2001, when a team led by the geneticists Cecilia Lai and Simon Fisher discovered a gene — called FoxP2 — in a London family riddled with childhood apraxia of speech, a disorder that impairs the ability of otherwise cognitively capable individuals to coordinate their muscles to produce sounds, syllables and words in an intelligible sequence. Commentators hailed FoxP2 as the long sought-after gene that enabled humans to talk — until the gene turned up in the genomes of rodents, birds, reptiles, fish and ancient hominins such as Neanderthals, whose version of FoxP2 is much like ours. (Fisher so often encountered the public expectation that FoxP2 was the “language gene” that he resolved to acquire a T-shirt that read, “It’s more complicated than that.”)

    The search for an exclusively human vocal anatomy has failed, too. For a 2001 study, the cognitive scientist Tecumseh Fitch cajoled goats, dogs, deer and other species to vocalize while inside a cineradiograph machine that filmed the way their larynxes moved under X-ray. Fitch discovered that species with larynxes different from ours — ours is “descended” and located in our throats rather than our mouths — could nevertheless move them in similar ways. One of them, the red deer, even had the same descended larynx we do.

    Fitch and his then-colleague at Harvard, the evolutionary biologist Marc Hauser, began to wonder if they’d been thinking about language all wrong. Linguists described language as a singular skill, like being able to swim or bake a soufflé: You either had it or you didn’t. But perhaps language was more like a multicomponent system that included psychological traits, such as the ability to share intentions; physiological ones, such as motor control over vocalizations and gestures; and cognitive capacities, such as the ability to combine signals according to rules, many of which might appear in other animals as well.

    Fitch, whom I spoke to by Zoom in his office at the University of Vienna, drafted a paper with Hauser as a “kind of an argument against Chomsky,” he told me. As a courtesy, he sent the M.I.T. linguist a draft. One evening, he and Hauser were sitting in their respective offices along the same hall at Harvard when an email from Chomsky dinged their inboxes. “We both read it and we walked out of our rooms going, ‘What?’” Chomsky indicated that not only did he agree, but that he’d be willing to sign on to their next paper on the subject as a co-author. That paper, which has since racked up more than 7,000 citations, appeared in the journal Science in 2002.

    Squabbles continued over which components of language were shared with other species and which, if any, were exclusive to humans. Those included, among others, language’s intentionality, its system of combining signals, its ability to refer to external concepts and things separated by time and space and its power to generate an infinite number of expressions from a finite number of signals. But reflexive belief in language as an evolutionary anomaly started to dissolve. “For the biologists,” recalled Fitch, “it was like, ‘Oh, good, finally the linguists are being reasonable.’”

    Evidence of continuities between animal communication and human language continued to mount. The sequencing of the Neanderthal genome in 2010 suggested that we hadn’t significantly diverged from that lineage, as the theory of a “human revolution” posited. On the contrary, Neanderthal genes and those of other ancient hominins persisted in the modern human genome, evidence of how intimately we were entangled. In 2014, Jarvis found that the neural circuits that allowed songbirds to learn and produce novel sounds matched those in humans, and that the genes that regulated those circuits evolved in similar ways. The accumulating evidence left “little room for doubt,” Cedric Boeckx, a theoretical linguist at the University of Barcelona, noted in the journal Frontiers in Neuroscience. “There was no ‘great leap forward.’”

    As our understanding of the nature and origin of language shifted, a host of fruitful cross-disciplinary collaborations arose. Colleagues of Chomsky’s, such as the M.I.T. linguist Shigeru Miyagawa, whose early career was shaped by the precept that “we’re smart, they’re not,” applied for grants with primatologists and neuroscientists to study how human language might be related to birdsong and primate calls. Interdisciplinary centers sprang up devoted specifically to the evolution of language, including at the University of Zurich and the University of Edinburgh. Lectures at a biannual conference on language evolution once dominated by “armchair theorizing,” as the cognitive scientist and founder of the University of Edinburgh’s Centre for Language Evolution, Simon Kirby, put it, morphed into presentations “completely packed with empirical data.”

    One of the thorniest problems researchers sought to address was the link between thought and language. Philosophers and linguists long held that language must have evolved not for the purpose of communication but to facilitate abstract thought. The grammatical rules that structure language, a feature of languages from Algonquin to American Sign Language, are more complex than necessary for communication. Language, the argument went, must have evolved to help us think, in much the same way that mathematical notations allow us to make complex calculations.

    Ev Fedorenko, a cognitive neuroscientist at M.I.T., thought this was “a cool idea,” so, about a decade ago, she set out to test it. If language is the medium of thought, she reasoned, then thinking a thought and absorbing the meaning of spoken or written words should activate the same neural circuits in the brain, like two streams fed by the same underground spring. Earlier brain-imaging studies showed that patients with severe aphasia could still solve mathematical problems, despite their difficulty in deciphering or producing language, but failed to pinpoint distinctions between brain regions dedicated to thought and those dedicated to language. Fedorenko suspected that might be because the precise location of these regions varied from individual to individual. In a 2011 study, she asked healthy subjects to make computations and decipher snatches of spoken and written language while she watched how blood flowed to aroused parts of their brains using an M.R.I. machine, taking their unique neural circuitry into account in her subsequent analysis. Her fM.R.I. studies showed that thinking thoughts and decoding words mobilized distinct brain pathways. Language and thought, Fedorenko says, “really are separate in an adult human brain.”

    At the University of Edinburgh, Kirby hit upon a process that might explain how language’s internal structure evolved. That structure, in which simple elements such as sounds and words are arranged into phrases and nested hierarchically within one another, gives language the power to generate an infinite number of meanings; it is a key feature of language as well as of mathematics and music. But its origins were hazy. Because children intuit the rules that govern linguistic structure with little if any explicit instruction, philosophers and linguists argued that it must be a product of some uniquely human cognitive process. But researchers who scrutinized the fossil record to determine when and how that process evolved were stumped: The first sentences uttered left no trace behind.

    Kirby designed an experiment to simulate the evolution of language inside his lab. First, he developed made-up codes to serve as proxies for the disordered collections of words widely believed to have preceded the emergence of structured language, such as random sequences of colored lights or a series of pantomimes. Then he recruited subjects to use the code under a variety of conditions and studied how the code changed. He asked subjects to use the code to solve communication tasks, for example, or to pass the code on to one another as in a game of telephone. He ran the experiment hundreds of times using different parameters on a variety of subjects, including on a colony of baboons living in a seminaturalistic enclosure equipped with a bank of computers on which they could choose to play his experimental games.

    What he found was striking: Regardless of the native tongue of the subjects, or whether they were baboons, college students or robots, the results were the same. When individuals passed the code on to one another, the code became simpler but also less precise. But when they passed it on to one another and also used it to communicate, the code developed a distinct architecture. Random sequences of colored lights turned into richly patterned ones; convoluted, pantomimic gestures for words such as “church” or “police officer” became abstract, efficient signs. “We just saw, spontaneously emerging out of this experiment, the language structures we were waiting for,” Kirby says. His findings suggest that language’s mystical power — its ability to turn the noise of random signals into intelligible formulations — may have emerged from a humble trade-off: between simplicity, for ease of learning, and what Kirby called “expressiveness,” for unambiguous communication.
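    Kirby’s result — that combining transmission (learning from others) with use (communicating) yields codes that are both simple and expressive — can be caricatured in a toy simulation. The sketch below is only an illustration of the general idea, not Kirby’s actual experimental design: the meaning space, the feature-sharing generalisation rule, and the suffix “repair” are all invented for this example.

    ```python
    import random
    from collections import Counter

    # 9 composite meanings: a shape feature ("A"/"B"/"C") and a color feature ("x"/"y"/"z")
    MEANINGS = [(shape, color) for shape in "ABC" for color in "xyz"]

    def random_language(rng):
        """Holistic starting point: an arbitrary, unstructured signal per meaning."""
        return {m: "".join(rng.choice("ptk") for _ in range(4)) for m in MEANINGS}

    def learn(parent, rng, n_seen=5, communicate=False):
        """One generation: a learner observes a few meaning-signal pairs,
        generalises to the unseen meanings and, optionally, repairs
        ambiguity under a pressure to communicate."""
        seen = dict(rng.sample(sorted(parent.items()), n_seen))
        child = {}
        for meaning in MEANINGS:
            if meaning in seen:
                child[meaning] = seen[meaning]
            else:
                # Generalise: reuse a signal whose meaning shares a feature
                similar = [sig for m, sig in seen.items()
                           if m[0] == meaning[0] or m[1] == meaning[1]]
                child[meaning] = rng.choice(similar or list(seen.values()))
        if communicate:
            # Expressiveness pressure: ambiguous signals acquire feature-marking
            # suffixes, so stems plus systematic markers accumulate over generations
            counts = Counter(child.values())
            for meaning, sig in list(child.items()):
                if counts[sig] > 1:
                    child[meaning] = sig.split("-")[0] + "-" + meaning[0] + meaning[1]
        return child

    def expressiveness(lang):
        """How many meanings can be told apart by signal alone."""
        return len(set(lang.values()))

    rng = random.Random(0)
    start = random_language(rng)
    drift, used = dict(start), dict(start)
    for _ in range(10):
        drift = learn(drift, rng)                   # transmission only
        used = learn(used, rng, communicate=True)   # transmission + communication

    print(expressiveness(start), expressiveness(drift), expressiveness(used))
    ```

    Under transmission alone, each learner can pass on at most the handful of signals it saw, so the code collapses toward simplicity and ambiguity; adding the communicative pressure keeps distinct meanings expressible, echoing the simplicity/expressiveness trade-off Kirby describes.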

    For Descartes, the equation of language with thought meant animals had no mental life at all: “The brutes,” he opined, “don’t have any thought.” Breaking the link between language and human biology didn’t just demystify language; it restored the possibility of mind to the animal world and repositioned linguistic capacities as theoretically accessible to any social species.

    This summer, I met with Marcelo Magnasco, a biophysicist, and Diana Reiss, a psychologist at Hunter College who studies dolphin cognition, in Magnasco’s lab at Rockefeller University. Overlooking the East River, it was a warmly lit room, with rows of burbling tanks inhabited by octopuses, whose mysterious signals they hoped to decode. Magnasco became curious about the cognitive and communicative abilities of cephalopods while diving recreationally, he told me. Numerous times, he said, he encountered cephalopods and had “the overpowering impression that they were trying to communicate with me.” During the Covid-19 shutdown, when his work studying dolphin communication with Reiss was derailed, Magnasco found himself driving to a Petco in Staten Island to buy tanks for octopuses to live in his lab.

    During my visit, the grayish pink tentacles of the octopus clinging to the side of the glass wall of her tank started to flash bright white. Was she angry? Was she trying to tell us something? Was she even aware of our presence? There was no way to know, Magnasco said. Earlier efforts to find linguistic capacities in other species failed, in part, he explained, because we assumed they would look like our own. But the communication systems of other species might, in fact, be “truly exotic to us,” Magnasco said. A species that can recognize objects by echolocation, as cetaceans and bats can, might communicate using acoustic pictographs, for example, which might sound to us like meaningless chirps or clicks. To disambiguate the meaning of animal signals, such as a string of dolphin clicks or whalesong, scientists needed some inkling of where meaning-encoding units began and ended, Reiss explained. “We, in fact, have no idea what the smallest unit is,” she said. If scientists analyze animal calls using the wrong segmentation, meaningful expressions turn into meaningless drivel: “ad ogra naway” instead of “a dog ran away.”

    An international initiative called Project CETI, founded by David Gruber, a biologist at the City University of New York, hopes to get around this problem by feeding recordings of sperm-whale clicks, known as codas, into computer models, which might be able to discern patterns in them, in the same way that ChatGPT was able to grasp vocabulary and grammar in human language by analyzing publicly available text. Another method, Reiss says, is to provide animal subjects with artificial codes and observe how they use them.

    Reiss’s research on dolphin cognition is one of a handful of projects on animal communication that dates back to the 1980s, when there were widespread funding cuts in the field, after a top researcher retracted his much-hyped claim that a chimpanzee could be trained to use sign language to converse with humans. In a study published in 1993, Reiss offered bottlenose dolphins at a facility in Northern California an underwater keypad that allowed them to choose specific toys, which it delivered while emitting computer-generated whistles, like a kind of vending machine. The dolphins spontaneously began mimicking the computer-generated whistles when they played independently with the corresponding toy, like kids tossing a ball and naming it “ball, ball, ball,” Reiss told me. “The behavior,” Reiss said, “was strikingly similar to the early stages of language acquisition in children.”

    The researchers hoped to replicate the method by outfitting an octopus tank with an interactive platform of some kind and observing how the octopus engaged with it. But it was unclear whether such a device might interest the lone cephalopod. An earlier episode of displeasure led her to discharge enough ink to turn her tank water so black that she couldn’t be seen. Unlocking her communicative abilities might require that she consider the scientists as fascinating as they did her.

    While experimenting with animals trapped in cages and tanks can reveal their latent faculties, figuring out the range of what animals are communicating to one another requires spying on them in the wild. Past studies often conflated general communication, in which individuals extract meaning from signals sent by other individuals, with language’s more specific, flexible and open-ended system. In a seminal 1980 study, for example, the primatologists Robert Seyfarth and Dorothy Cheney used the “playback” technique to decode the meaning of alarm calls issued by vervet monkeys at Amboseli National Park in Kenya. When a recording of the barklike calls emitted by a vervet encountering a leopard was played back to other vervets, it sent them scampering into the trees. Recordings of the low grunts of a vervet who spotted an eagle led other vervets to look up into the sky; recordings of the high-pitched chutters emitted by a vervet upon noticing a python caused them to scan the ground.

    At the time, The New York Times ran a front-page story heralding the discovery of a “rudimentary ‘language’” in vervet monkeys. But critics objected that the calls might not have any properties of language at all. Instead of being intentional messages to communicate meaning to others, the calls might be involuntary, emotion-driven sounds, like the cry of a hungry baby. Such involuntary expressions can transmit rich information to listeners, but unlike words and sentences, they don’t allow for discussion of things separated by time and space. The barks of a vervet in the throes of leopard-induced terror could alert other vervets to the presence of a leopard — but couldn’t provide any way to talk about, say, “the really smelly leopard who showed up at the ravine yesterday morning.”

    Toshitaka Suzuki, an ethologist at the University of Tokyo who describes himself as an animal linguist, struck upon a method to disambiguate intentional calls from involuntary ones while soaking in a bath one day. When we spoke over Zoom, he showed me an image of a fluffy cloud. “If you hear the word ‘dog,’ you might see a dog,” he pointed out, as I gazed at the white mass. “If you hear the word ‘cat,’ you might see a cat.” That, he said, marks the difference between a word and a sound. “Words influence how we see objects,” he said. “Sounds do not.” Using playback studies, Suzuki determined that Japanese tits, songbirds that live in East Asian forests and that he has studied for more than 15 years, emit a special vocalization when they encounter snakes. When other Japanese tits heard a recording of the vocalization, which Suzuki dubbed the “jar jar” call, they searched the ground, as if looking for a snake. To determine whether “jar jar” meant “snake” in Japanese tit, he added another element to his experiments: an eight-inch stick, which he dragged along the surface of a tree using hidden strings. Usually, Suzuki found, the birds ignored the stick. It was, by his analogy, a passing cloud. But then he played a recording of the “jar jar” call. In that case, the stick seemed to take on new significance: The birds approached the stick, as if examining whether it was, in fact, a snake. Like a word, the “jar jar” call had changed their perception.

    Cat Hobaiter, a primatologist at the University of St. Andrews who works with great apes, developed a similarly nuanced method. Because great apes appear to have a relatively limited repertoire of vocalizations, Hobaiter studies their gestures. For years, she and her collaborators have followed chimps in the Budongo forest and gorillas in Bwindi in Uganda, recording their gestures and how others respond to them. “Basically, my job is to get up in the morning to get the chimps when they’re coming down out of the tree, or the gorillas when they’re coming out of the nest, and just to spend the day with them,” she told me. So far, she says, she has recorded about 15,600 instances of gestured exchanges between apes.

    To determine whether the gestures are involuntary or intentional, she uses a method adapted from research on human babies. Hobaiter looks for signals that evoke what she calls an “Apparently Satisfactory Outcome.” The method draws on the theory that involuntary signals continue even after listeners have understood their meaning, while intentional ones stop once the signaler realizes her listener has comprehended the signal. It’s the difference between the continued wailing of a hungry baby after her parents have gone to fetch a bottle, Hobaiter explains, and my entreaties to you to pour me some coffee, which cease once you start reaching for the coffeepot. To search for a pattern, she says she and her researchers have looked “across hundreds of cases and dozens of gestures and different individuals using the same gesture across different days.” So far, her team’s analysis of 15 years’ worth of video-recorded exchanges has pinpointed dozens of ape gestures that trigger “apparently satisfactory outcomes.”

    These gestures may also be legible to us, albeit beneath our conscious awareness. Hobaiter applied her technique on pre-verbal 1- and 2-year-old children, following them around recording their gestures and how they affected attentive others, “like they’re tiny apes, which they basically are,” she says. She also posted short video clips of ape gestures online and asked adult visitors who’d never spent any time with great apes to guess what they thought they meant. She found that pre-verbal human children use at least 40 or 50 gestures from the ape repertoire, and adults correctly guessed the meaning of video-recorded ape gestures at a rate “significantly higher than expected by chance,” as Hobaiter and Kirsty E. Graham, a postdoctoral research fellow in Hobaiter’s lab, reported in a 2023 paper for PLOS Biology.

    The emerging research might seem to suggest that there’s nothing very special about human language. Other species use intentional wordlike signals just as we do. Some, such as Japanese tits and pied babblers, have been known to combine different signals to make new meanings. Many species are social and practice cultural transmission, satisfying what might be prerequisite for a structured communication system like language. And yet a stubborn fact remains. The species that use features of language in their communications have few obvious geographical or phylogenetic similarities. And despite years of searching, no one has discovered a communication system with all the properties of language in any species other than our own.

    For some scientists, the mounting evidence of cognitive and linguistic continuities between humans and animals outweighs evidence of any gaps. “There really isn’t such a sharp distinction,” Jarvis, now at Rockefeller University, said in a podcast. Fedorenko agrees. The idea of a chasm separating man from beast is a product of “language elitism,” she says, as well as a myopic focus on “how different language is from everything else.”

    But for others, the absence of clear evidence of all the components of language in other species is, in fact, evidence of their absence. In a 2016 book on language evolution titled “Why Only Us,” written with the computer scientist and computational linguist Robert C. Berwick, Chomsky describes animal communications as “radically different” from human language. Seyfarth and Cheney, in a 2018 book, note the “striking discontinuities” between human and nonhuman loquacity. Animal calls may be modifiable; they may be voluntary and intentional. But they’re rarely combined according to rules in the way that human words are and “appear to convey only limited information,” they write. If animals had anything like the full suite of linguistic components we do, Kirby says, we would know by now. Animals with similar cognitive and social capacities to ours rarely express themselves systematically the way we do, with systemwide cues to distinguish different categories of meaning. “We just don’t see that kind of level of systematicity in the communication systems of other species,” Kirby said in a 2021 talk.

    This evolutionary anomaly may seem strange if you consider language an unalloyed benefit. But what if it isn’t? Even the most wondrous abilities can have drawbacks. According to the popular “self-domestication” hypothesis of language’s origins, proposed by Kirby and James Thomas in a 2018 paper published in Biology & Philosophy, variable tones and inventive locutions might prevent members of a species from recognizing others of their kind. Or, as others have pointed out, they might draw the attention of predators. Such perils could help explain why domesticated species such as Bengalese finches have more complex and syntactically rich songs than their wild kin, the white-rumped munia, as discovered by the biopsychologist Kazuo Okanoya in 2012; why tamed foxes and domesticated canines exhibit heightened abilities to communicate, at least with humans, compared with wolves and wild foxes; and why humans, described by some experts as a domesticated species of their ape and hominin ancestors, might be the most talkative of all. A lingering gap between our abilities and those of other species, in other words, does not necessarily leave language stranded outside evolution. Perhaps, Fitch says, language is unique to Homo sapiens, but not in any unique way: special to humans in the same way the trunk is to the elephant and echolocation is to the bat.

    The quest for language’s origins has yet to deliver King Solomon’s seal, a ring that magically bestows upon its wearer the power to speak to animals, or the future imagined in a short story by Ursula K. Le Guin, in which therolinguists pore over the manuscripts of ants, the “kinetic sea writings” of penguins and the “delicate, transient lyrics of the lichen.” Perhaps it never will. But what we know so far tethers us to our animal kin regardless. No longer marooned among mindless objects, we have emerged into a remade world, abuzz with the conversations of fellow thinking beings, however inscrutable.

    (Sonia Shah is a science journalist and the author, most recently, of “The Next Great Migration: The Beauty and Terror of Life on the Move.”)
  • Patterner
    984

    Well I'll get to your post later. Yard work today. But I don't think there's any possibility that any other animal has any language that approaches human language. Because they can't think in the kinds of ways we do. If they were talking, we'd be able to learn each other's languages, and have conversations. We would have been doing this since the time we and any species capable of it found ourselves in the same place. Our cultures and societies would be much different if we had been coexisting with animals that could communicate like us for the last several thousand years, if not hundreds of thousands.

    There are many people who have put great effort into communicating with various other species. Apes and dolphins are big ones. The octopus is supposed to be an intelligent animal, also. But we cannot have a conversation with any of them. They just don't have the ability.

    Also, I suspect they'd wipe us out if they could think in those ways.
  • Bodhy
    26


    Thanks for the reminder to read Signs in the Dust! I found out about that book some time ago and meant to read it, but it fell by the wayside.



    Personally, I prefer semiotics, or a semiotic phenomenology, over classical phenomenology. I think the reason people came to regard phenomenology as a research programme without merit is that phenomenologists themselves have had a tendency to devalue any phenomenological perspective that wasn't derivative of Husserl.

    Classical phenomenology, I believe, is too anthropocentric and arguably, at least in the Husserlian vein, steers too close to a transcendental idealism. Husserl's bracketing leaves speculation about the external world out, which may leave it vulnerable to accusations of idealism - but I don't think that's a fair thing to accuse phenomenology as an enterprise of.

    I'd want phenomenology to yield metaphysical insight too, and I think Deely is correct that we want semiosis to be foregrounded over classical phenomenology. Idealism isn't the only worry: I also don't see the classical perspective working out a mechanism for how creatures actually make sense of their world. The semiotic perspective Deely outlines illuminates the "how" of sense-making and also the "how" of the phenomenon qua appearance. Conscious phenomena are a scaled-up and special case of semiosis.

    Some clear benefits there - we don't need to put the metaphysical reality of the appearance into abeyance, nor do we have a consciousness-centric method that limits inquiry into the forms of meaning-making.