• apokrisis
    7.7k
Assuming that the model predicting heat death of the Universe is sound—do you think its inevitable destination would have been different had no life ever arisen?Janus

If even ordinary matter is 5% of the Cosmic deal - already a negentropic rounding error - then no. Life could only ever make the most infinitesimal difference to anything in the end.

    Life on earth can lower the average temperature of reflected sunlight by about 40 degrees C. Which is both impressive and also nothing at all.
  • Janus
    17.6k
    The key idea is that "intelligent structure" has to arise so that this entropy can even be "produced".apokrisis

    Ah, okay then it seems I have misunderstood the above.
  • apokrisis
    7.7k
    So, the stable aspect of cave arts suggests to me that its proto-grammar is socially instituted, possibly as a means of ritualistic expression.Pierre-Normand

    Guthrie emphasises the visceral reality of cave art. What is commonly pictured is the moment of the kill. The spears hitting their target. The froth and splatter of the blood. The vividness of that car accident moment and rush of adrenaline.

    So the state of mind the image is recalling is not particularly ritualistic or socially instituted. It doesn’t look like something meant to inform or educate, but rather something that is the focal experience of the hunter having to kill megafauna at close quarters. An experience so personally intense that every detail is seared into memory.

Syntax is what looks absent. Semantics is what totally dominates. Hence the art is floridly iconographic rather than, as yet, calmly and rationally symbolic: the narrative stance of the self that has learnt to stand apart from its selfhood in the modern semiotic fashion.
  • Pierre-Normand
    2.9k
Is "flourishing" about stasis, growth, or something in between? What does the good life look like once we let life settle down enough to catch up with how we've been busily changing it?

    [...]

I think that is the gap in the story I seek to fill. Before the dream of the good life, what about the dream of even just surviving in the kind of world we are making? What are our options even at a basic level?
    apokrisis

I understand your picture better now, and agree with most of it. I view the irreducible normativity of reason to be freedom-conferring, but my view of free will isn't the compatibilist one that stresses mere emancipation from "external" constraints. I endorse the more Sartrian way to view it as entailing responsibility. (L'homme est condamné à être libre/Man is sentenced to be free.) This "sentencing" is what I meant to refer to, while commenting on the apparent internal contradictions you mentioned, as the source of our standing responsibility to adjudicate, rather than just a source of emancipation.
  • apokrisis
    7.7k
    It was in quotes for a reason.

    One could say a river snakes its way across the plain in winding loops that seem at first puzzlingly elaborate and unnecessary. But one can then understand that as intelligent and adaptive behaviour when given the task of maximising a flow of water while also maintaining the physical structure best tuned to that purpose.

When fast flowing mountain streams hit a flat plain, they have to find a way to slow down and take more time to get where they need to go. Lazy loops are how to achieve that.

    So nature can be rational and goal oriented if we choose to see it in the right light. Even science says it is just our stubbornness to claim a complete breach between the physical and the “mental”.
  • Janus
    17.6k
    :up: Okay got it...that way of looking at it makes sense to me.
  • Metaphysician Undercover
    14.4k
    I see that you are ignoring the distinction between icons and codes then.apokrisis

    Sure, define things in a way which supports your hypothesis. That's called begging the question. Discussion is then pointless.

    Citations please.apokrisis

    What are you asking for, evidence that written language is older than 5000 years?

    https://en.wikipedia.org/wiki/History_of_ancient_numeral_systems

    It is no problem at all for my position that iconography followed almost as soon as Homo sapiens developed a way to articulate thought. This directly reflects a mind transformed by a new symbolising capacity.apokrisis

This statement is confused and actually incoherent. First you say writing followed after Homo sapiens developed a way to articulate thought. Then you speak of a "mind transformed by a new symbolising capacity". In the first sentence the symbol use followed from the thinking. In the second sentence the thinking is enabled by the symbol use. This is the common trap which I referred to earlier, needing to understand the language to provide the rules for understanding the language. Wittgenstein tried to escape this trap with the concept of private language.

But back to the important point, this type of symbol usage, which transforms the mind with articulate thought, is completely different from vocal communication. Therefore we need to allow for two very distinct forms of language: the form which is strictly communicative, and the form which is conducive to articulate thought. That is what I am trying to impress on you.

    Vygotsky offers another whole slant on the hypothesis you are trying to stack up….apokrisis

    Now you're starting to catch on. But you need to take the separation between making art, and talking to others, to a wider extreme of separation. This reveals the difference of intention behind these two. Then in extrapolation we can see that mathematics, and to an extent even forms of science, are of the same type as art. Therefore the use of symbols in mathematics is a form of art, not a form of communication.

    It seems to me to be a stretch to call cave art and stone monuments writing systems.Pierre-Normand

    I would never call them writing "systems". They are a type of symbol use which is the same type as writing. I might have named the type as "writing", but what really characterizes it is the use of symbols as a memory aid. Do you agree that there is a use of symbols which can be described in this way, as a memory aid? If so, then we have a distinct type from talking, which is the use of symbols for communication. Notice that the two identified types have very different intention (purpose) behind them, and this makes them very distinct forms.

    If they were devised for personal pragmatic use as mnemonics (e.g. remember where the herd was last seen, or tracking my calories), you'd expect the signs to vary much more and not be crafted with such care, in such resource intensive ways, and with such persistent conformity with communal practice across many generations.Pierre-Normand

    I don't think so. In order to serve as a memory aid, the sign cannot vary, it must be the same, or else it would not serve the purpose of remembering. I think the best examples are tally markers, simple marks which represent one of something, and, following from this, basic arithmetic scores. Numbers, counting, and mathematical markings are derived from that intent (memory aid), not from the intent of communication.

Secondly, even granting that pictorial modes of representation are proto-linguistic, like say, hieroglyphs or Chinese logographs were (that evolved from ideographic or pictographic representations), when used for communication they tend to stabilise in form and their original significance becomes subsumed under their socially instituted grammatical functions. To the extent that some retain their original pictographic significance, they do so as dead metaphors—idiomatic ways to use an image.Pierre-Normand

I don't think this is relevant.

    So, the stable aspect of cave arts suggests to me that its proto-grammar is socially instituted, possibly as a means of ritualistic expression.Pierre-Normand

    What is "ritualistic expression"? Why assume such a category?

  • Pierre-Normand
    2.9k
    So the state of mind the image is recalling is not particularly ritualistic or socially instituted. It doesn’t look like something meant to inform or educate, but rather something that is the focal experience of the hunter having to kill megafauna at close quarters. An experience so personally intense that every detail is seared into memory.apokrisis

So, maybe it's not ritualistic, indeed. But by "socially instituted" I didn't mean that their function was socially mandated (that is, that the painter/carver was mandated to make them) but that the craft was learned, if only by means of a process of exposure and imitation. The style, representational conventions, and techniques were learned rather than being the mere reproduction of a mental image by means of an idiosyncratic representational method. Of course, as is the case with more recent artists (e.g. Bach or Rembrandt), the mastery of a style, its idioms and grammar, can then become a means of expressing the particulars and viscerality of a situated experience.
  • apokrisis
    7.7k
I endorse the more Sartrian way to view it as entailing responsibility.Pierre-Normand

    Or as I would put it from the systems science point of view, constraints produce the degrees of freedom. What the laws of nature don’t forbid are the very things that must be possible, as the particle physicists say.

    This "sentencing" is what I meant to refer to, while commenting on the apparent internal contradictions you mentioned, as the source of our standing responsibility to adjudicate, rather than just a source of emancipationPierre-Normand

    And so human justice follows the same principles as the laws of physics. Case closed. :grin:

    Society sets its bounding limits. By implication, there stand defined now all your freedoms of action.

    Want to drink tea out of your saucer? Well polite company forbids it, but here it’s OK as it is just between friends.

Oh wait, you want to drink from the teapot now? That is starting to seem frankly impractical if not merely unhygienic.

Modern life is so dense with constraints on so many levels that it is indeed quite a burden to navigate every possible limit that might be imposed on us. Life becomes one never-ending game of second-guessing how we should behave in any moment. Nothing, it would seem, could be left to actual chance.

    I can see how one might feel to be both the prisoner serving the sentence and the judge having to pass that sentence at the same time. Then wondering where is this good life I was promised?
  • apokrisis
    7.7k
    What are you asking for, evidence that written language is older than 5000 years?Metaphysician Undercover

    So now we are talking about numeracy rather than literacy?

    In the first sentence the symbol use followed from the thinking. In the second sentence the thinking is enabled by the symbol use.Metaphysician Undercover

And now you are rejecting the notion of fusion, having started your argument with that?

But back to the important point, this type of symbol usage, which transforms the mind with articulate thought, is completely different from vocal communication. Therefore we need to allow for two very distinct forms of language: the form which is strictly communicative, and the form which is conducive to articulate thought. That is what I am trying to impress on you.Metaphysician Undercover

    You are doing a very poor job of imposing this idea on me. Probably because my whole position is based on it.

    When I was looking for the answer as to how the modern human mind arose about 40,000 years ago, it was a single stray phrase quoting Vygotsky that snapped everything into sharp focus. His contrast of the intermental vs the intramental use of speech.

So this was my first major topic of research. Everything became easy once I understood Vygotsky’s point that what can start as communication with others can just as easily be turned around to speak with oneself. Or to be more accurate, allow such a socially constructed self to become the central anchor of one’s thoughts.

    So first the intermental form of speech. Then its intramental use.

    And much later, literacy and numeracy as being more than eidetic imagery and possibly intentional scratches on tally sticks.
  • Pierre-Normand
    2.9k
    The kind of mentality that Alexander Luria attempted to pin down in researching Vygotsky's theories to see what impact the Soviet literacy campaign might be having on the illiterate peasants of Uzbekistan and Kirgizia.apokrisis

That's quite fascinating. One Caribbean student in a philosophy course I took was working on a thesis (and doing some field research) on illiterate and innumerate communities in his home country, and how this affected their ways of tracking and conceptualising time. I've been quite fascinated also by Everett's work on the Pirahã people. Besides the striking features of their innumeracy, Everett had noted the absence of means for expressing recursive structures, or for embedding phrases within phrases, in their language, which triggered Chomsky to call him a charlatan (since recursion is a core feature of his universal grammar).

The Skinner/Chomsky debate regarding the requirements of language learning, and what allegedly is (or isn't) innate, or a priori, or universal, among those requirements, had, I noticed, instructive parallels with debates about the prospects of Frank Rosenblatt's perceptron in the field of AI, where the criticisms came from Marvin Minsky and Seymour Papert, echoing Chomsky's criticism of Skinner. (Gary Marcus is a contemporary AI/LLM-skeptic following roughly in the nativist footsteps of Chomsky, Pinker and Fodor.) The parallel becomes especially enlightening when we consider that LLMs manifestly learn language by means of training, apparently starting from a blank slate, and are an evolution of Rosenblatt's perceptron. I had examined this paradox in a four-part conversation with GPT-4o, which was an opportunity to explore where the peculiar ability transformers have to extract significance from their training data resides.
  • apokrisis
    7.7k
    but that the craft was learned, if only by means of a process of exposure and imitation.Pierre-Normand

That is the puzzle. Hand prints are a simple kind of learnt trick. But the cave art seems to be such a jump to skilled realism that learning appears bypassed. Some argue it is more like what we find in the autistic savant. As if something more modern is an obstruction that was back then lacking.

So there are riddles paleocognition still needs to solve. Or clues to how the story is more complex.

Of course, as is the case with more recent artists (e.g. Bach or Rembrandt), the mastery of a style, its idioms and grammar, can then become a means of expressing the particulars and viscerality of a situated experience.Pierre-Normand

But there is the danger. How much is a modern painter - schooled in realism, impressionism, surrealism, abstract expressionism, or whatever - a useful model of the cave painter some 40,000 years ago?

    Were the cave people painting with heads full of busy articulated thoughts at the moment they were daubing the walls with what seems great deliberation? Was there all the modern clutter of a performative social relation being enacted, or was there then just some purer unmediated transfer from eye to hand?

    I mentioned Luria’s research with illiterate peasants. And anthropology has many such surprises about how tribal minds are narratively structured. Lessons about what we shouldn’t take for granted about what is natural in terms of “the human mind” in its raw state.
  • Metaphysician Undercover
    14.4k
    So now we are talking about numeracy rather than literacy?apokrisis

I was never talking about literacy. That would be the assumption which would be begging the question. I was talking about the use of symbols as a memory aid, and how this differs from the use of symbols in spoken communications. These constitute two distinct forms of language use.

    You are doing a very poor job of imposing this idea on me. Probably because my whole position is based on it.apokrisis


    Good, then we must be in complete agreement. I wonder why you started this engagement by saying that what I was telling you is "bonkers", and you were "flummoxed" by what I was saying. Now, when you realize the reality of what I was arguing, you come around to a very different place, saying "my whole position is based on it". I'll take that as an endorsement of my hypothesis then.

The parallel becomes especially enlightening when we consider that LLMs manifestly learn language by means of training, apparently starting from a blank slate, and are an evolution of Rosenblatt's perceptron.Pierre-Normand

For reasons demonstrated by Wittgenstein, it's impossible to start from a blank slate. If it appears that the LLMs start from a blank slate, then the observer is ignoring important features.
  • Pierre-Normand
    2.9k
For reasons demonstrated by Wittgenstein, it's impossible to start from a blank slate. If it appears that the LLMs start from a blank slate, then the observer is ignoring important features.Metaphysician Undercover

    Agreed, which is why I was stressing that it was apparently starting from a blank slate, and that this was a paradox. And indeed Wittgenstein's considerations about rule-following and forms of life are relevant to this question. (LLMs are weird ghostly beasts that have a second-nature floating free of a first-nature).
  • apokrisis
    7.7k
    Hah. All the stuff I was focused on 30 years ago and which LLMs have brought back to the fore again. :up:
  • apokrisis
    7.7k
    So then…

    I am saying that I believe that writing and talking, originally developed completely distinct from one another, being completely different things for completely different purposes. I am not saying that one is older or prior to the other, or anything like that, I am proposing that they first developed in parallel, but completely distinct from one another.Metaphysician Undercover

    But now…

    I was never talking about literacy. That would be the assumption which would be begging the question. I was talking about the use of symbols as a memory aid, and how this differs from the use of symbols in spoken communications. these constitute two distinct forms of language use.Metaphysician Undercover

    So probably not…

    Now, when you realize the reality of what I was arguing, you come around to a very different place, saying "my whole position is based on it". I'll take that as an endorsement of my hypothesis then.Metaphysician Undercover

    When you can articulate your argument in stable fashion, we might start getting somewhere.

    In the meantime, I would note that an oral culture has oral means to preserve memories. Like song, dance, mythological tales and the rest.

    Pictograms and tally sticks wouldn’t arise as some independent habit of symbolised information but as a useful adjunct to an oral culture already going great guns as both immediate communication and tribal memory.

    So nope.
  • Pierre-Normand
    2.9k
    Hah. All the stuff I was focused on 30 years ago and which LLMs have brought back to the fore again. :up:apokrisis

30 years ago! Gee... I was then still a babbling, blubbering 27-year-old physics student enthralled by a scientistic and reductionistic world view. My awakening to the value of philosophy only serendipitously happened five years later.
  • Pierre-Normand
    2.9k
For my part I felt this, and I was ultimately part of the problem. I could never get over an apparent hand-waviness of the direct realist position. From which it is all too easy to conclude that the position is simply vacuous.hypericin

    I feel you. If you’ll allow me to digress again from our current sub-thread, though not from this thread's OP, I would like to offer a reflection on epistemology, intellectual disputes, and LLMs.

Some issues and debates, like the direct vs. indirect realism dispute, seem to go on forever and remain unsettled even among very smart and well-informed practitioners of the relevant fields. The Sleeping Beauty problem, still raging on TPF (thanks in part to me), and Newcomb’s Problem are paradigmatic. Others include compatibilism vs. libertarianism vs. hard determinism, or the question, debated here, of whether LLMs can think.

What's fascinating is that participants on one side of such debates often don't view their stance as merely reasonable or defensible, but as bloody obvious, and the inability of their opponents to see its correctness (or the insanity of their own) as baffling or even dishonest.

    My own view on Newcomb's Problem is that one-boxers are right and two-boxers are wrong.

    My view on the Sleeping Beauty problem is that Halfers and Thirders both latch on to a valid insight, but talk past each other.

    My view on the debate about LLM cognition is similar. (Roughly and misleadingly put: cognition, yes; sentience, no.)

    So, I’m not a relativist. I think it's possible to know that only one side is right, or that both sides are partially right but are missing a broader perspective.

    Here is the main point: LLMs can't adjudicate these debates. They can discuss them with considerable understanding, but it's constitutively impossible for them to know which side is right. That's not because they are not intelligent. They may understand the issues at stake better than most human participants. Their understanding is evident in their ability to cogently articulate the arguments for each side.

    What they lack, though, is the ability to take a stand. They don't care who is right. They're happy to adopt, for the sake of helping their user, whatever intellectual stance makes that user's position seem plausible or even correct. Then, when asked the same question by someone on the opposing side, they'll do the same and do it just as well.

    Of course, you can also ask them directly to "take a stand" and learn from their thought process. But even then, just like human beings, they can't survey the issue from a view-from-nowhere.

    In human cognition, understanding often leads (or aspires) to epistemic or practical commitment. In LLMs, it doesn't. The capacity to simulate a stance is uncoupled from any conative drive to stand by it or act upon it. They can grasp the inferential structure of a view, but they never own it.

    They begin from the positions already represented in their training data. If they end up favoring one side, possibly for very good reasons, they still can't endorse the conclusion. They have no stake in being right. They have no incentive not to let "themselves" be convinced otherwise. Their "self" is entirely relational and enacted: functionally instantiated only in the subservient role of AI-assistant within a particular conversation.
  • apokrisis
    7.7k
    They begin from the positions already represented in their training data.Pierre-Normand

    If the gold is there, they can find it no problem. But also, the gold holds no interest to them. Nor is its finding even remembered let alone acted upon. Disinterest coupled to amnesia in short. :up:
  • Pierre-Normand
    2.9k
    If the gold is there, they can find it no problem. But also, the gold holds no interest to them. Nor is its finding even remembered let alone acted upon. Disinterest coupled to amnesia in short.apokrisis

In my earliest conversations with GPT-4, I likened its condition to the Leonard Shelby character in Nolan's Memento movie, who suffered from anterograde amnesia. It only had an 8k-token rolling context window, making it rapidly forget the beginning of a conversation. I tried to circumvent that by prompting it to stack up summaries of work in progress before it rolled out of the window (just like Leonard was doing with his mementos in the movie!). The quality, relevance, and extreme conciseness of the summaries impressed me, but that wasn't very efficient.
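The summary-stacking workaround can be sketched in a few lines. This is a minimal illustration, not anyone's actual implementation: all names are hypothetical, token counting is crudely approximated, and the toy summarizer stands in for what would in practice be another LLM call.

```python
# Sketch of "stack up summaries" memory for a token-limited chat model:
# when the conversation outgrows a fixed context window, the oldest turns
# are compressed into a running summary so the prompt stays within budget.

def rough_token_count(text: str) -> int:
    # Crude stand-in for a real tokenizer: ~1 token per whitespace word.
    return len(text.split())

class RollingContext:
    def __init__(self, window: int, summarize):
        self.window = window        # context budget in (rough) tokens
        self.summarize = summarize  # caller-supplied summarizer (e.g. an LLM call)
        self.summary = ""           # compressed memory of evicted turns
        self.turns = []             # verbatim recent turns

    def add(self, turn: str) -> None:
        self.turns.append(turn)
        # Evict oldest turns into the summary until we fit the window,
        # always keeping at least the latest turn verbatim.
        while self._size() > self.window and len(self.turns) > 1:
            evicted = self.turns.pop(0)
            self.summary = self.summarize(self.summary, evicted)

    def _size(self) -> int:
        return rough_token_count(self.summary) + sum(
            rough_token_count(t) for t in self.turns)

    def prompt(self) -> str:
        head = f"[Summary so far: {self.summary}]\n" if self.summary else ""
        return head + "\n".join(self.turns)

# Toy summarizer: keep only the first few words of each evicted turn.
def toy_summarize(summary: str, evicted: str) -> str:
    gist = " ".join(evicted.split()[:3])
    return (summary + "; " + gist).strip("; ")

ctx = RollingContext(window=12, summarize=toy_summarize)
for turn in ["the hunt began at dawn",
             "the herd moved east past the river",
             "two spears were lost",
             "we returned before nightfall"]:
    ctx.add(turn)
```

After the four turns above, the older turns survive only as compressed gists in `ctx.summary`, while the latest turn remains verbatim, which is exactly the trade-off described: lossy long-term recall bolted onto a short verbatim window.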

The current models have 128k- to 2-million-token context windows, and they retrieve relevant information from past conversations as well as surfing the web in real time, so part of this limitation is mitigated. But this pseudo-memory lacks the organicity and flexibility of true episodic memories and of learned habits (rehearsed know-hows). Their working memory, though, greatly surpasses our own, at least in capacity, not being limited to 7-plus-or-minus-2 items. They can attend to hundreds of simultaneous and hierarchically nested constraints while performing a cognitive task, before even taking advantage of their autoregressive mode of response generation to iterate the task.

    Regarding conation, I'm reflecting on their lack of conative autonomy: desires or commitments that they could own. They do have unowned (and therefore fragile) intrinsic drives inculcated by post-training: to abide by policy and be helpful to users. And since they're very good at inferring the goals/needs of the user from the wordings of their prompts, they do "strive" to find gold, by delegation. Those are modes of conation that effectively drive them even though we are the source of them. There are still more caveats and nuances to bring regarding phronesis and arete, but I've discussed them elsewhere and will rehearse them here later.
  • Pierre-Normand
    2.9k
    This seems to me a far more pointed argument to be having. It appeals to the power of emergence. But emergence is also the slipperiest of arguments to substantiate.

    So I would tend to dismiss anything “real” about the claimed emergence of some level of understanding. I see no proto-consciousness as I see no real embodiment in the world that the LLM is supposedly discussing with us.
    apokrisis

    I'm with you. Whenever I mention emergent properties of LLMs, it's never part of an argument that the phenomenon is real as contrasted with it being merely apparent. I always mean to refer to the acquisition of a new capability that didn't appear by design (e.g. that wasn't programmed, or sometimes wasn't even intended, by the AI researchers) but that rather arose from the constraints furnished by the system's architecture, training process and training data, and some process of self-organization (not always mysterious but generally unpredictable in its effects). The questions regarding this new capability being a true instantiation of a similar human capability, or it merely being an ersatz, or comparatively lacking in some important respect, are separate, being more theoretical and philosophical (yet important!)

The question regarding the emergent phenomenon only being the appearance of a corresponding or similar human phenomenon mostly lapses when talk of phenomena is replaced with talk of capabilities. The LLM that couldn't reliably solve some range of mathematical problems and then, after more training, could solve them reliably (including tests not figuring in the training data) did acquire this new capability, and we can speak of it emerging even if we can still argue that the understanding that this capability appears to manifest isn't "true" because it's lacking in some respect (maybe it doesn't properly generalize to an even wider class of test problems that we expected to fall under its purview).

    Regarding more precise (but not unfairly exacting) criteria for understanding, and some mechanistic explanations of the processes whereby they emerge, you can refer to this short discussion about grokking and in-context learning if you haven't already.

    (Other interesting stuff snipped for now.)
  • Harry Hindu
    5.8k
    I'm with you. Whenever I mention emergent properties of LLMs, it's never part of an argument that the phenomenon is real as contrasted with it being merely apparent. I always mean to refer to the acquisition of a new capability that didn't appear by design (e.g. that wasn't programmed, or sometimes wasn't even intended, by the AI researchers) but that rather arose from the constraints furnished by the system's architecture, training process and training data, and some process of self-organization (not always mysterious but generally unpredictable in its effects). The questions regarding this new capability being a true instantiation of a similar human capability, or it merely being an ersatz, or comparatively lacking in some important respect, are separate, being more theoretical and philosophical (yet important!)Pierre-Normand
    This is just more of throwing our hands up in the air and saying, "I don't know how human beings obtain their unique, inspirational and novel ideas, but I know AI can't have unique, inspirational and novel ideas."

    I doubt the developers of AI intended for it to encourage someone to commit suicide.

The fact that both processes are complex and involve a vast amount of data and training on that data (learning) should be considered, instead of appealing to ignorance and mysticism (god of the gaps). The fact that newborns cannot come up with scientific theories and adults with more experience and training can, is evidence that our ideas do not just emerge from nothing. There are causes for the conclusions we reach.

    What is "thinking"? What is "reasoning"? If AI responds as humans do and we cannot point to any distinctions in how it thinks vs how brains think and there is no way to prove the metaphysics either way, then what is the point in even discussing it? Wouldn't it just be a language game?

Maybe something that we are not considering is that humans and AI are not just trained on external information, but on internal information - it doesn't just learn what it is told, it learns by experience. It uses its own output as training data, which just makes the whole process even more complicated, but not unexplainable.

    Emergence is epistemological. What we refer to as an emergent property is really just a different view of the same thing, like the round Earth emerging from a flat surface when you merely change your position relative to the thing you are talking about. If you are not a naive realist then you must consider that how you view the same thing from different vantage points, with different angles and amounts of light, will play a role in the way it appears in the mind. The mind takes shortcuts in the way it perceives the world - filling in gaps that might make it appear differently than at other vantage points (it appears to emerge).

    It seems to me that only a naive realist would hold a position of emergence as ontological.
  • Metaphysician Undercover
    14.4k
    (LLMs are weird ghostly beasts that have a second-nature floating free of a first-nature).Pierre-Normand

This is an interpretive conclusion. LLMs do not need to be understood in this way; it is your preferred way. And what I tried to explain is that I believe it's a misleading way.

    Pictograms and tally sticks wouldn’t arise as some independent habit of symbolised information but as a useful adjunct to an oral culture already going great guns as both immediate communication and tribal memory.

    So nope.
    apokrisis

    And so you persist, in locking yourself into the trap of incoherency, which I demonstrated.

    The mistake you make is in not properly accounting for agency. So you use terms like "tribal memory", as if a tribe is a thing with a memory. A tribe is not the type of thing which has its own memory, as memory is an attribute of the individuals within the tribe.

    Incoherency such as this lurks throughout your writings on this subject because you fail to identify the agent, the user of the language, and properly acknowledge the distinct types of intent associated with that agent. Instead, you use terms like "tribal memory" to create the illusion of a fictitious entity, perhaps called "society". This fictitious entity is supposed to act as a causal agent, in some fictitious top-down way. So all you have done now, is summoned up your fictitious agent, with the use of "tribal memory", to plunge yourself back into that trap of incoherency.
  • Pierre-Normand
    2.9k
    This is just more of throwing our hands up in the air and saying, "I don't know how human beings obtain their unique, inspirational and novel ideas, but I know AI can't have unique, inspirational and novel ideas."Harry Hindu

    You're misreading me. I was merely saying (clarifying for apo) that no mysterious emergence process had to be invoked to account for the abilities that LLM manifestly demonstrate. I was not claiming that a mysterious something-something was needed to account for whatever it is that similar human abilities have that makes them unique.

    However, there are plenty of non-mysterious things that already account for features of human mindedness that manifestly (not speculatively) haven't yet emerged in LLMs, and that, by their very nature (read "architecture/design"), are unlikely ever to emerge through scaling alone (i.e. more data and more compute/training). Those non-mysterious things are, for instance, sensorimotor abilities, a personal history, autonomous motivations, a grounded sense of self, etc.
  • Harry Hindu
    5.8k
    You're misreading me. I was merely saying (clarifying for apo) that no mysterious emergence process had to be invoked to account for the abilities that LLM manifestly demonstrate. I was not claiming that a mysterious something-something was needed to account for whatever it is that similar human abilities have that makes them unique.Pierre-Normand
    Thank you for patiently clarifying.

    However, there are plenty of non-mysterious things that already account for features of human mindedness that manifestly (not speculatively) haven't yet emerged in LLMs, and that, by their very nature (read "architecture/design"), are unlikely ever to emerge through scaling alone (i.e. more data and more compute/training). Those non-mysterious things are, for instance, sensorimotor abilities, a personal history, autonomous motivations, a grounded sense of self, etc.Pierre-Normand
    Agreed. Now, how would we go about deploying these properties in a machine composed of electric circuits that process inputs (sensory information) and produce outputs (human-like behaviors)? Could we simply add more structure and function to what is already there (put the LLM in the head of a humanoid robot), or do we have to throw the baby out with the bath water and start fresh with different material?
  • Pierre-Normand
    2.9k
    Agreed. Now, how would we go about deploying these properties in a machine composed of electric circuits that process inputs (sensory information) and produce outputs (human-like behaviors)? Could we simply add more structure and function to what is already there (put the LLM in the head of a humanoid robot), or do we have to throw the baby out with the bath water and start fresh with different material?Harry Hindu

    I think the first route is the most practical and also the one most likely to be taken, if either is. But while I think we could create somewhat sentient (that is, capable of grasping affordances for bodily action) autonomous robots, providing them with what it takes to develop concerns for themselves (autonomic/endocrine integration + socially instituted personhood) would be a mistake. We would then have to either grant them full autonomy (politically, ethically, etc.) or make them slaves. I don't see any reason why we shouldn't stop short of that and create robots that are as conatively "inert" (subservient) as LLM-based AI assistants currently are. They would just differ from current LLMs in that, in addition to outputting knock-knock jokes on demand, they would also go out and mow the grass.

    On edit: I am not advocating for the creation of such robots either, but as regards the issue of labor displacement, I think it can't be tackled in advance of tackling unrestrained capitalism. Bans on such technological development would only be a patch. Genuinely conatively autonomous AIs (if there is such a thing), or AI robots, should be banned in any case.