It is at once comic and tragic, as Borges might have noted, that so much money and attention should be concentrated on so little a thing — something so trivial when contrasted with the human mind, which by dint of language, in the words of Wilhelm von Humboldt, can make “infinite use of finite means,” creating ideas and theories with universal reach.
The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.
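For readers unfamiliar with what "a lumbering statistical engine for pattern matching" means in practice, here is a deliberately tiny sketch of the idea: a bigram model that predicts the most probable next word purely from observed word pairs. This is not ChatGPT's actual architecture (which uses transformer networks over token sequences), just a minimal illustration of next-word prediction from frequency statistics.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model ingests hundreds of terabytes, not one line.
corpus = "the mind creates explanations the mind seeks explanations".split()

# Count how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    # Return the most frequent successor of `word` in the corpus.
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "mind" follows "the" twice, so it wins
```

The model "knows" only co-occurrence frequencies; it posits no explanation of why one word follows another, which is the contrast the passage above is drawing.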
I don't think it's a stochastic parrot, but I may be anthropomorphizing it. — RogueAI
The crux of machine learning is description and prediction; it does not posit any causal mechanisms or physical laws. Of course, any human-style explanation is not necessarily correct; we are fallible. But this is part of what it means to think: To be right, it must be possible to be wrong. Intelligence consists not only of creative conjectures but also of creative criticism. Human-style thought is based on possible explanations and error correction, a process that gradually limits what possibilities can be rationally considered.
I would say stochastic parrot is too narrow. It seems clear there are emergent behaviors from the more complex models like 3.5 and 4, where it's building some internal models to output the correct text. It doesn't understand like we do, but it does seem to have an increasingly emergent understanding of the semantic meanings embedded in our languages. — Marchesk
And now on version 4 of ChatGPT they charge the gullible punter $$ to use. A bastardisation of OpenAI indeed — invicta
(Bing already does, and Google is working on its own thing called LaMDA) — Heracloitus
Reasoning is a problem, as seen in the question, "If 5 machines produce 5 products in 5 minutes, how long will it take 100 machines to produce 100 products?" I'm not sure what version was asked the question, but even with coaxing and additional info it could not give the correct answer. — jgill
As an AI language model, I don't have personal wants or desires since I am a machine programmed to perform specific tasks such as answering questions, generating text, or performing language-related tasks. My main goal is to provide helpful and accurate responses to the best of my abilities based on the input I receive. Is there anything specific you would like to ask or discuss?
The goal it provides is a piece of PR spin, programmed in to it. This is demonstrated by the ease with which one can generate wrong responses and hallucinations. It has no goals. — Banno
It presents arguments that are invalid, it hallucinates; it does this because it can have no intent that is not foisted upon it by the users. The liar cares about the truth and attempts to hide it; the bullshitter doesn't care if what they say is true or false. It generates bullshit. — Banno
In short, ChatGPT and its brethren are constitutionally unable to balance creativity with constraint. They either overgenerate (producing both truths and falsehoods, endorsing ethical and unethical decisions alike) or undergenerate (exhibiting noncommitment to any decisions and indifference to consequences). Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity.
Rather, the origin of those criticisms of LLMs is in Searle's Chinese Room and subsequent writings, the guts of which are that LLMs cannot have intentionality except by proxy. ChatGPT is a Chinese Room. — Banno
Well, I think that framing of internal and external approaches is problematic, along the lines of the private language argument. The most direct problem with LLMs is that because they are statistical algorithms, they cannot be truthful. — Banno
If 5 machines can make 5 devices in 5 minutes, that means each machine can make one device in 5 minutes. — Pierre-Normand
There is a subtlety here that GPT-4 fails to address. But that's better than the other GPT. — jgill
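For what it's worth, the arithmetic behind the puzzle discussed above can be spelled out in a few lines. This is just a sketch of Pierre-Normand's reasoning, not anything the models themselves computed:

```python
# 5 machines make 5 products in 5 minutes, so each machine
# makes 1 product in 5 minutes.
machines, products, minutes = 5, 5, 5
minutes_per_product_per_machine = minutes * machines / products  # 5.0

# 100 machines making 100 products work in parallel:
# each machine still makes just 1 product.
products_per_machine = 100 / 100
time_for_100 = minutes_per_product_per_machine * products_per_machine
print(time_for_100)  # 5.0 minutes, not 100
```

The trap in the question is that the naive pattern "5, 5, 5 therefore 100, 100, 100" ignores that the machines run concurrently, which is exactly the kind of shortcut a pattern-matching system is prone to take.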