So now we are talking about numeracy rather than literacy? — apokrisis
You are doing a very poor job of imposing this idea on me. Probably because my whole position is based on it. — apokrisis
The parallel becomes especially enlightening when we consider that LLMs manifestly learn language by means of training, apparently starting from a blank slate, and are an evolution of Rosenblatt's perceptron. — Pierre-Normand
Among the highest achievements of the Kantian deduction was that he preserved the memory, the trace of what was historical in the pure form of cognition, in the unity of the thinking I, at the stage of the reproduction of the power of imagination.
I see that you are ignoring the distinction between icons and codes then. — apokrisis
Citations please. — apokrisis
It is no problem at all for my position that iconography followed almost as soon as Homo sapiens developed a way to articulate thought. This directly reflects a mind transformed by a new symbolising capacity. — apokrisis
Vygotsky offers another whole slant on the hypothesis you are trying to stack up…. — apokrisis
It seems to me to be a stretch to call cave art and stone monuments writing systems. — Pierre-Normand
If they were devised for personal pragmatic use as mnemonics (e.g. remember where the herd was last seen, or tracking my calories), you'd expect the signs to vary much more and not be crafted with such care, in such resource intensive ways, and with such persistent conformity with communal practice across many generations. — Pierre-Normand
Secondly, even granting that pictorial modes of representation are proto-linguistic, like, say, hieroglyphs or Chinese logographs were (having evolved from ideographic or pictographic representations), when used for communication they tend to stabilise in form and their original significance becomes subsumed under their socially instituted grammatical functions. To the extent that some retain their original pictographic significance, they do so as dead metaphors—idiomatic ways to use an image. — Pierre-Normand
So, the stable aspect of cave art suggests to me that its proto-grammar is socially instituted, possibly as a means of ritualistic expression. — Pierre-Normand
The standard obvious view is that speech came first by at least 40,000 years and then writing split off from that about 5000 years ago in association with the new "way of life" that replaced foraging and pastoralism with the organised agriculture of the first Middle East river delta city states. Sumer and Babylon. — apokrisis
But you instead want to argue some exactly opposite case to this normal wisdom. — apokrisis
What reason is there to doubt the obvious here? — apokrisis
During pre-training (learning to predict next tokens on vast amounts of text), these models develop latent capabilities: internal representations of concepts, reasoning patterns, world knowledge, and linguistic structures. — Pierre-Normand
So when you ask whether LLMs have "crossed into a new category" or merely "gotten better at the same old thing," the answer is: the architectural shift to transformers enabled the emergence of new kinds of capabilities during pre-training, and post-training then makes these capabilities reliably accessible and properly directed. — Pierre-Normand
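Since a few of the quoted posts turn on what "predicting next tokens" actually involves, here is a minimal sketch of that mechanism at inference time. It assumes the Hugging Face transformers library and the small, publicly available "gpt2" checkpoint (both my choices for illustration, not anything the posts themselves mention): given a prompt, the model assigns a probability to every token in its vocabulary as the possible continuation, and pre-training is what shapes those probabilities.

```python
# Minimal sketch of next-token prediction (illustrative only; assumes the
# Hugging Face `transformers` library and the public "gpt2" checkpoint).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The detective weighed the clues and finally named the murderer:"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# Probability distribution over the whole vocabulary for the *next* token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{tokenizer.decode([token_id])!r}  p={prob:.3f}")
```

Whether sensitivity to plot structure, clues, and characters' states of mind can emerge from optimising exactly this objective is, of course, the question the quoted posts are debating.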
Had you made this issue bear on the topic of the present thread? — Pierre-Normand
Both the rules for speech and those for writing are rules of a norm-governed public practice that is taught and learned (while the rules for using the words/signs are taught primarily through speaking them). — Pierre-Normand
I remain flummoxed by your crazy logic. — apokrisis
Citations? — apokrisis
In what way are writing thoughts and speaking thoughts any different in kind? — apokrisis
I am honestly flummoxed. — apokrisis
What are you talking about? Writing came before speech, or something? Hands evolved before tongues? What's your hypothesis? — apokrisis
This interpretation is made in the right spirit, but I think it's too reductive. Let's not make the mistake of replacing one reification (the existent) with another (the thing's becoming, or its sedimented history, or its temporal dimension including its future). We don't need to pin down the non-identical as its temporal dimension or its never-ending becoming, and we should not, because there are other dimensions to it: there is a synchronic remainder too, comprised of the thing's unique configuration of characteristics that are never fully captured by concepts, i.e., the thing's thisness. Also, the thing's mediations and relations are not merely understood as temporal. I admit that the temporal cannot be left out of the picture---we cannot analyze the thing as if frozen in time, separating the dimensions in the mode of science---but it's not everything. The hope of the name is that we can fully comprehend the thing, including its temporal dimension. — Jamal
What I always react to in your posts is your apparent wish to pin down the essence, as if you've discovered the secret, the true definition. But this might not be a big disagreement, because except for the reductiveness your understanding here is very Adornian. — Jamal
In my last post I forgot to mention that I think Adorno in this section solves one of our disputes. He admits that the existent as we conceptualize and describe it, e.g., as worker, commodity, society, is a false things-are-so-and-not-otherwise---and yet at the same time the word and concept are indispensable: — Jamal
This would be the upgrade in semiosis that resulted from literacy and numeracy. The shift from an oral culture to one based on the permanence of inscriptions and the abstractions of counting and measuring. The new idea of ownership and property. — apokrisis
This becoming disappears and dwells in the thing, and is no more to be brought to a halt in its concept than to be split off from its result and forgotten. Temporal experience resembles it. In the reading of the existent as a text of its becoming, idealistic and materialistic dialectics touch. However, while idealism justifies the inner history of immediacy as a stage of the concept, it becomes materialistically the measure not only of the untruth of concepts, but also that of the existing immediacy.
What negative dialectics drives through its hardened objects is the possibility which their reality has betrayed, and yet which gleams from each one of these.
Even the insistence on the specific word and concept, as the iron gate to be unlocked, is solely a moment of such, though an indispensable one. In order to be cognized, that which is internalized, which the cognition clings to in the expression, always needs something external to it.
...the hippie movement was clearly stopped and commodified... — Pussycat
Dialectical discipline, Hegel's (positive) and Adorno's (negative), both sacrifice and reduce the polyvalence of experience to contradiction. — Pussycat
But it comes at a cost, the reduction of everything unto contradiction means the loss of the richness of lived experience, its immediacy, living in the moment. There is already a contradiction here: the polyvalence of lived experience in a monovalent dominative world. — Pussycat
Changing his mind from day to day with an ideology based around a misunderstanding of the market effect of tariffs. The instability is off the charts and if it does all go off the rails there is a real risk that Trump will impose emergency, or plenary powers to postpone the midterm elections. — Punshhh
Disagree, for reasons and examples I've already posted. There are times when risk is high, but would likely get higher with time, and so confidence is likely to drop if you wait.
Take saving people from a burning building. You can risk your life and charge in there and grab the baby, or you can wait until the fire trucks get the fire more under control so your safety is more assured. That's a hard decision, and there are cases where each option is the best one. — noAxioms
Yes, you can do that, but the result of doing it is qualitatively (and measurably) different from what it is that LLMs do when they are prompted to impersonate a novelist or a physicist, say. An analogy that I like to employ is an actor who plays the role of J. Robert Oppenheimer in a stage adaptation of the eponymous movie (that I haven't yet seen, by the way!) If the actor has prepared for the role by reading lots of source material about Oppenheimer's life and circumstances, including his intellectual trajectory, but never studied physics at a level higher than middle school, say, and has to improvise when facing an unscripted question about physics asked by another actor who portrays a PhD student, he might be able to improvise a sciency-sounding soundbite that will convince those in the audience who don't know any better. Many earlier LLMs, up to GPT-3.5, were often improvising/hallucinating such "plausible" sounding answers to questions that they manifestly didn't understand (or misunderstood in funny ways). In order to reliably produce answers to unscripted questions that would be judged to be correct by PhD physicists in the audience, the actor would need to actually understand the question (and understand physics). That's the stage current LLMs are at (or very close to). — Pierre-Normand
It's relevant to displaying an LLM's successful deployment, with intelligent understanding, of its "System 2" thinking mode: one that is entirely reliant, at a finer grain of analysis, on its ability to generate not just the more "likely" but also the more appropriate next-tokens one at a time. — Pierre-Normand
If the chatbot tells you who the murderer might be, and explains to you what the clues are that led it to this conclusion, and the clues are being explicitly tied together by the chatbot through rational chains of entailment that are sensitive to the significance of the clues in the specific narrative context, can that be explained as a mere reproduction of the habits of the author? What might such habits be? The habit to construct rationally consistent narratives? You need to understand a story in order to construct a rationally consistent continuation to it, I assume. — Pierre-Normand
Look at this Einstein riddle. Shortly after GPT-4 came out, I submitted it to the model and asked it to solve it step by step. It was thinking about it quite systematically and rationally but was also struggling quite a bit, making occasional small inattention mistakes that were compounding and leading it into incoherence. Repeating the experiment led it to approach the problem differently each time. If any habits of thought manifested by the chatbot were mere reproductions of the habits of thought of the people who wrote its training texts, they'd be general habits of rational deliberation. Periodically, I assessed the ability of newer models to solve this problem and they were still struggling. The last two I tried (OpenAI o3 and Gemini 2.5 Pro, I think) solved the problem on the first try. — Pierre-Normand
In order for the model to produce this name as the most probable next word, it has to be sensitive to relevant elements in the plot structure, distinguish apparent from real clues, infer the states of mind of the depicted characters, etc. Sutskever's example is hypothetical but can be adapted to any case where LLMs successfully produce a response that can't be accounted for by mere reliance on superficial and/or short range linguistic patterns. — Pierre-Normand
Getting married is like pulling the trigger. One can put off that choice indefinitely, but once done, it's done.
I used it as a counter for your assertion of 'certainty of success', and 'minimize risk'. Getting married is a risk (something you assert to never be the best option), even ones that seem a very good match. Not getting married is usually not the best option. Sure, it is for some people. I have 3 kids, and only one marriage is expected, thus countering my 'usually' assertion. — noAxioms
One never had freedom to select multiple options. Sure, you can have both vanilla and chocolate, but that's just a single third option. There's no having cake and eating it, so to speak. You have choice because you can select any valid option, but you can't choose X and also not X. — noAxioms
OK, but I don't know how this became a discussion about ignorance of what is food. The comment was in response to your assertion of "the first principle is that nonaction maintains freedom", and my example of nonaction (and not ignorance) will cause among other things starvation, which will likely curtail freedom. — noAxioms
The realization was missed because the hippie movement failed to transform the world in its image, but was commodified and commercialized, liquidated even. What could have been a revolutionary movement, capable of subverting entrenched power and liberating consciousness, was instead absorbed into institutional authority, tamed by it. Much like, as Adorno says, what happened with Hegel's dialectic. — Pussycat
I don't understand, who or what requires the polyvalence of experience? Why then would Adorno say that (negative) dialectics demands the bitter sacrifice of the qualitative polyvalence of experience? — Pussycat
In my view, information is everywhere you care to look — Harry Hindu
Obviously, it's not "the same thing" then. AI can do the same thing ... when prompted — Harry Hindu
An AI is a source of knowledge. — Harry Hindu
Yes, but you can't have a dialogue with language or with a book. You can't ask questions to a book, expect the book to understand your query and provide a relevant response tailored to your needs and expectations. The AI can do all of that, like a human being might, but it can't do philosophy or commit itself to theses. That's the puzzle. — Pierre-Normand
Why was the moment of realization missed? — Pussycat
Do you agree that AI does not do philosophy, yet we might do philosophy with AI? That seems to be the growing consensus. The puzzle is how to explain this. — Banno
That works in some situations, but not in a fair percentage of them. Such uncertainty prevents some people from ever getting married. Sometimes this is a good thing, but often not. Don't choose poorly, but also don't reject good choices for fear of lack of 'success'.
War is another example where that psychology is a losing one. Risk taking is part of how things are best done. — noAxioms
He was 1, with no concept of embarrassment yet. — noAxioms
Not always, and not even particularly often. Not looking for food definitely curtails eventual freedom. — noAxioms
I'm aware that all appearance of agreement on your part is accidental. — frank
It seems like this site will have to perform a balancing act between encouraging well written posts and limiting the use of tools that allow writers to do just that. — Harry Hindu
There is also the possibility of comparing to past posts... — Baden
He's talking about the forced separation between direct experience (which contains no form, no names, no recognition of ideation) and form itself, which is a key component of knowledge (scientia, science). And it just occurred to me that no one is reading this or likely to respond to what I just said, so if I want to discuss it, I need to go to reddit. I don't know which subreddit, though. I don't think they have an Adorno subreddit. I could start one. — frank
The intention of existentialism at least in its radical French form would not be realizable at a distance from substantive content, but in its threatening nearness to this.
This is surely a rod for your own back, and the backs of the other mods. Apart from the most obvious cases, you can't tell. “AI-written” stops being a meaningful category as AI is blended into the way we operate online; the way we search, research, browse and read is permeated and augmented by AI. — Banno
Eventually one must act on the choice, irrevocably. You debate committing murder, but once the trigger is pulled, there's no doing otherwise. I suppose if you choose not to do it, the option remains open for quite some time. — noAxioms
The vanilla/chocolate debate at the ice cream shop. Exactly how late can one change one's mind before it's too late? We played a game like that with my 1 year old at a restaurant. He was feeding himself stuff from his plate with a spoon. He really liked crab leg bits thrown on his plate and would eat those first. The game was to see how far we could get him to choose to eat something else, and still bail out because a crab bit was presented. I won the game when I caused an abort with the spoon already fully in his mouth. Imagine the cheering at the table, causing weird looks from others.
We're easily entertained over here. — noAxioms
Yes, that's what it means for there to be a choice. I'd argue that such choice is not always possible. Sometimes only one path is open. Sometimes not even that. Vanilla or chocolate? Well, there's a power outage at the softserve shop, so as Gene Wilder put it: You get Nothing. — noAxioms
The schools which take derivatives of the Latin existere [Latin: to exist] as their device, would like to summon up the reality of corporeal experience against the alienated particular science. Out of fear of reification they shrink back from what has substantive content. It turns unwittingly into an example. What they subsume under epochê [Greek: suspension] revenges itself by exerting its power behind the back of philosophy, in what this latter would consider irrational decisions. The non-conceptual particular science is not superior to thinking purged of its substantive content; all its versions end up, a second time, in precisely the formalism which it wished to combat for the sake of the essential interest of philosophy. It is retroactively filled up with contingent borrowings, especially from psychology. The intention of existentialism at least in its radical French form would not be realizable at a distance from substantive content, but in its threatening nearness to this. The separation of subject and object is not to be sublated through the reduction to human nature, were it even the absolute particularization. The currently popular question of humanity, all the way into the Marxism of Lukacsian provenance, is ideological because it dictates the pure form of the invariant as the only possible answer, and were this latter historicity itself.
I elicited your response, thus doing more than arranging words. — Banno
No, it isn't. Wittgenstein said nothing of the sort. — Banno
Seems to me, at the fundament, that what we who pretend to the title “philosopher” are looking for is some semblance of truth, whatever that is; at writing that is thought provoking; at nuanced and sound argument. Whether such an argument comes from a person or an AI is secondary. — Banno
I’ve used AI to quickly and succinctly summarise accepted fact. — Banno
The idea is that the AI isn't really saying anything, but is arranging words as if it were saying something. — Banno
