This takes us back to the Google chatbot’s confident statement that “causation involves a physical connection between events, while entailment is a relationship between propositions.” — J
Yeah, but on the other hand, it might not be so bad to use an argument suggested by an LLM, so long as you understand it. After all, we do this all the time reading papers and books. Philosophical discourse takes place in a context that the participants in the discourse should have access to, and maybe LLMs just make this easier? — Jamal
I would also feel bad posting as my own AI content that I have merely paraphrased, even if I understand it fully. (And I might even feel a bit ashamed disclosing it!) — Pierre-Normand
Using them to polish your writing could be good (or merely acceptable) or bad depending on the nature and depth of the polishing. Jamal's earlier comparison with using a thesaurus was apt. An AI could point out places where your wording is clumsy or misleading. If the wording that it suggests instead is one that you can make your own, that's very similar to having a human editor make the suggestion to you. — Pierre-Normand
I wonder if their reading will be existentialist or post-modern. No doubt we'll be able to pick. — Tom Storm
But would an AI Wittgenstein be a performative contradiction? — Banno
"There are no authoritative generalists," says Janus. Of course I think that first sentence should read "only when," no? You are presumably saying that appeal to authority is illegitimate wherever the context is not a specialized discipline?
Your implicit argument here is that AI is not an authoritative generalist, and therefore should not be treated as one. I think that implicit argument is even more plausible than the more explicit argument you have given, but it is in no way uncontroversial. LLMs are coming to be seen not only as authoritative generalists, but as the authoritative generalist par excellence. — Leontiskos
Asking AI for information is a far too easy solution. It pops back in a few seconds -- not with a list of links to look at, but a complete answer in text and more. Seems convenient, but it rapidly undermines one's willingness to look for answers one's self -- and to use search engines to find sources. — BC
The other line is this: We do not have a good record of foreseeing adverse consequences of actions a few miles ahead; we do not have a good record of controlling technology (it isn't that it acts on its own -- rather we elect to use it more and more). — BC
I hope most of us are coming around to being more or less on the same page on this now. — Baden
What we face might be not an empirical question but an ethical one - do we extend the notion of intentionality to include AIs? — Banno
I'll go over Austin again, since it provides a set of tools that are quite applicable. A phatic act is the act of putting words together in a sequence that is recognisably part of language - constructing a sentence in English. This is what an LLM does. It uses a statistical engine to generate a set of words that follow on from the words provided in the prompt. An illocutionary act is one performed in making use of such words - making a statement, asking a question, and so on. This, so the claim goes, an LLM cannot do. — Banno
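The "statistical engine" picture can be made concrete with a deliberately crude toy: a bigram model that only arranges words so that each follows plausibly from the last. This is a sketch for illustration only (real LLMs are vastly more sophisticated); in Austin's terms, it performs something like the phatic act of producing recognisable English word-sequences, with no illocution anywhere in sight.

```python
import random
from collections import defaultdict

# Toy illustration, not how a real LLM works: learn which word follows
# which in a tiny corpus, then extend a prompt by sampling successors.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which word follows which in the training text.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def continue_prompt(prompt, n=5, seed=0):
    """Extend the prompt by sampling each next word from observed successors."""
    random.seed(seed)
    words = prompt.split()
    for _ in range(n):
        options = follows.get(words[-1])
        if not options:  # no observed successor: stop generating
            break
        words.append(random.choice(options))
    return " ".join(words)
```

The output is recognisably English-shaped word-arrangement, yet nothing in the program asserts, asks, or means anything; which is exactly the gap the phatic/illocutionary distinction is meant to mark.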
The AI strings words together, only ever performing the phatic act and never producing an illocution.
The uniquely human addition is taking those word-strings and using them in a language game.
So the question arises: can such an account be consistently maintained? What is it that people bring to the game that an AI cannot? — Banno
Use AI outputs as starting points for further refinement
Cycle through multiple rounds of critique and revision
Refine prompts to avoid confirmation bias and explore diverse readings
Now this looks very much like a recipe for a language game.
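The recipe above can be sketched as a simple control loop. The `ask_model` function below is a hypothetical stand-in for a call to an actual LLM, stubbed out so the flow is self-contained; the point is only the shape of the cycle (draft, critique, revise, re-prompt), not any particular model's API.

```python
# A minimal sketch of the critique-and-revision cycle described above.
# ask_model is a hypothetical stand-in for a real LLM call; it is
# stubbed here so the control flow can be shown self-contained.
def ask_model(prompt: str) -> str:
    return f"[model response to: {prompt[:40]}...]"

def refine(draft: str, rounds: int = 3) -> str:
    """Treat the AI output as a starting point, cycling critique and revision."""
    for i in range(rounds):
        # Vary the prompt each round to probe different readings,
        # rather than inviting the model to confirm the draft.
        critique = ask_model(f"Round {i}: argue against this draft: {draft}")
        draft = ask_model(f"Revise the draft to answer this objection: {critique}")
    return draft
```

Whether iterating such a loop with a model amounts to playing a language game with it, or merely around it, is of course the question at issue.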
On the other hand, the data set used by a human appears to be far, far smaller than that used by an LLM. Our brains simply do not "contain" the number of texts available to ChatGPT. Therefore whatever the brain is doing, it is different to what is happening in ChatGPT. — Banno
A huge aspect of this is the nature of appeals to authority, and given that TPF has an anti-religious bent, many of the members have not thought very deeply on the nature of appeals to authority (despite the fact that they occur quite often when it comes to SEP, IEP, Wittgenstein, etc.).
Whether or not the LLM is a legitimate authority and is trustworthy is at the root of many of these differences. It is the question of whether any given LLM-citation is an organic argument or an argument from authority, and also of whether the latter case is illegitimate. — Leontiskos
And likely written by Baden without AI, because "backrground" was misspelled. — ssu
No, I didn't. When has philosophy ever provided an answer to any of our questions? Philosophy piggy-backs on the discoveries of science. It is only when science and technology progress that philosophy progresses (AI being an example of how it has brought new life to discussions about mind and body). — Harry Hindu
If I wanted to hold someone accountable for misappropriating an AI explanation, I would simply put it into the search engine, the same way the person posting from AI would get the information. It is a whole lot easier than searching books for a quote. — Athena
I don't necessarily mind if others post a quote as an argument. — Harry Hindu
It's quite pointless to discuss the ethics of using AIs, because people will use them, just like they use drugs, and once it starts, it is impossible to rein it in. But what one can do is rethink whether one really wants to spend one's hard earned time with people who use AIs, or drugs, for that matter. — baker
Maybe we use books, dictionaries, philosophical papers, editors, and scientific discoveries to make us look smarter than we are. You see this all the time in forums, even without AI, so it's nothing new. Besides do you really care about the psychology of someone who's writing about what they think? — Sam26
Seems like philosophy itself could be labeled as mental masturbation. — Harry Hindu
Dood, the content from human beings trained in pseudo-science and other nonsense seen on this forum is available every day for you to read, without any AI. If anything, posters should run their ideas through AI before wasting time posting their zany ideas to humans, which would eliminate wasting time reading nonsensical posts. — Harry Hindu
I can't imagine how bad things are going to get in the coming years, given how quickly it has already gotten to this state. Maybe it will be like some other rapid-rise cultural phenomena that reach a saturation point fast and peter out, getting pushback/revulsion before long. The bubble effect. — unimportant
There are those, Hinton being one of them, who claim that the lesson to be learned from the LLMs is that we are also just "arranging words as if it were saying something", that is that we don't have subjective experience any more than they do. — Janus
Something has gone astray here, in that if this were so, it's not just that we have never said anything, but that the very notion of saying something could not be made coherent. — Banno
Don't mistake the speculative misuse of ideas for the ideas themselves. AI is no longer in the realm of “mental masturbation,” it’s already reshaping science, mathematics, and even philosophy by generating proofs, modeling complex systems, and revealing previously inaccessible patterns of thought. To dismiss that as delusory is to confuse ignorance of a subject with the absence of rigor within it. — Sam26
The irony is that the very kind of “rigorous analysis” you claim to prize is being accelerated by AI. The most forward-looking thinkers are not treating it as a toy but as a new instrument of inquiry, a tool that extends human reasoning rather than replacing it. Those who ignore this development are not guarding intellectual integrity; they’re opting out of the next phase of it. — Sam26
All a bit convolute. The idea is that the AI isn't really saying anything, but is arranging words as if it were saying something. — Banno
But can even humans claim that? Let’s rehash the forum’s most hardy perennial one more time. :up: — apokrisis
That could be a hugely amplifying tool. — apokrisis
Are you saying that with PoMo philosophy, AI might have hit its particular sweet spot? :grin: — apokrisis
So, it is not a digital copy of existing books, but may become a situated co-production of knowledge. — Number2018
What we might do is to consider the strings of words the AI produces as if they were produced by an interlocutor. Given that pretence, we can pay some attention to the arguments they sometimes encode... — Banno
So if one did not write the post themselves, but merely copied and pasted a quote as the sole content of their post, then by your own words, it is not their post. — Harry Hindu
That's a poor analogy. It's obvious when people are wearing makeup or wearing clothes that enhance their appearances. Property rights might be one reason to object to plagiarism—there are others. Pretending to be something you are not is one. — Janus
Poppycock, the only objection to plagiarizing that I remember is the posts objecting to someone trying to make us think s/he knows more than s/he does know. — Athena
So if one did not write the post themselves, but merely copied and pasted a quote as the sole content of their post, then by your own words, it is not their post. — Harry Hindu
But may I humbly suggest to you that what resulted was rather more like an internal dialogue of you with yourself, than a dialogue with another philosopher. Which slots right into the discussion itself as a significant fact. — unenlightened
The key element in that scenario is that there is no interlocutor to engage with if you attempt a response. Light's on, nobody home. — Paine
Imagine I could offer you a prototype chatbot small talk generator. Slip on these teleprompter glasses. Add AI to your conversational skills. Become the life of the party, the wittiest and silkiest version of yourself, the sweet talker that wins every girl. Never be afraid of social interaction again. Comes with free pair of heel lift shoes. — apokrisis
So filling PF with more nonsense might be a friction that drags the almighty LLM down into the same pit of confusion. — apokrisis
I think TPF should continue what it's doing, which is put some guardrails on ai use, but not ban it. — RogueAI
The real world problem is that the AI bubble is debt driven hype that has already become too big to fail. Its development has to be recklessly pursued as otherwise we are in the world of hurt that is the next post-bubble bailout.
Once again, capitalise the rewards and socialise the risks. The last bubble was mortgages. This one is tech.
So you might as well use AI. You’ve already paid for it well in advance. — apokrisis
That may be a good reason for you not to use AI, but it’s not a good reason to ban it from the forum. — T Clark
Maybe. If someone uses AI to create a fascinating post, could you engage with it? — frank
Impractical. But, how about, its use should be discouraged altogether?
I mean, its use in composition or editing of English text in a post. — bongo fury
Then you must also believe that using a long-dead philosopher's quote as the crux of your argument, or as the whole of your post, is also an issue. — Harry Hindu
So what? People also use makeup to look better. Who is being hurt?
The reason for objecting to plagiarism is a matter of property rights.
What is best for acquiring and spreading good information? — Athena
You can still submit your post as is to ChatGPT and ask it to expand on it. — Pierre-Normand
Ctrl+Z — Harry Hindu
So of course there are no "well-documented occurrences of exceptions to nature's 'laws'", as you say... because when they happen, it's good scientific practice to change the laws so as to make the exception disappear. — Banno
So are we to say that "the laws of nature are not merely codifications of natural invariances and their attributes, but are the invariances themselves", while also saying that we can change them to fit the evidence? How's that going to work? We change the very invariances of the universe to match the evidence? — Banno
Or is it just that what we say about stuff that happens is different to the stuff that happens, and it's better if we try to match what we say to what happens? — Banno
Indeed. And if laws are constraints, then the regularities can be statistical. Exceptions get to prove the general rule. — apokrisis
We want to avoid arriving at some transcendent power that lays down arbitrary rules. Instead we want laws to emerge in terms of being the constraints that cannot help but become the case even when facing the most lawless situations. — apokrisis
Isn't that simply because when we find such exceptions, we change the laws? — Banno
I guess I’m naïve or maybe just not very perceptive, but I haven’t recognized any posts definitely written by AI. — T Clark
Interesting, I haven’t noticed particularly. But I avoid reading lengthy and didactic posts which are often poorly written. The AI stuff I’ve seen often seems peculiarly worded and difficult to read. — Tom Storm
But you're proposing something and instead of telling us why it's a good proposal you're saying "if you want reasons, go and find out yourself." This is not persuasive. — Jamal
And it isn't clear precisely what you are proposing. What does it mean to ban the use of LLMs? If you mean the use of them to generate the content of your posts, that's already banned — although it's not always possible to detect LLM-generated text, and it will become increasingly impossible. If you mean using them to research or proof-read your posts, that's impossible to ban, not to mention misguided. — Jamal
I've been able to detect some of them because I know what ChatGPT's default style looks like (annoyingly, it uses a lot of em dashes, like I do myself). But it's trivially easy to make an LLM's generated output undetectable, by asking it to alter its style. So although I still want to enforce the ban on LLM-generated text, a lot of it will slip under the radar. — Jamal
It cannot be avoided, and it has great potential both for benefit and for harm. We need to reduce the harm by discussing and formulating good practice (and then producing a dedicated guide to the use of AI in the Help section). — Jamal
The source of one's post is irrelevant. All that matters is whether it is logically sound or not. — Harry Hindu
I see this from time to time. One I'm thinking of tries to baffle with bullshit. Best to walk away, right? — frank
I think the crux is that whenever a new technology arises we just throw up our hands and give in. "It's inevitable - there's no point resisting!" This means that each small opportunity where resistance is possible is dismissed, and most every opportunity for resistance is small. But I have to give TPF its due. It has resisted by adding a rule against AI. It is not dismissing all of the small opportunities. Still, the temptation to give ourselves a pass when it comes to regulating these technologies is difficult to resist. — Leontiskos
The problem is that you don't think you are required to give a falsifiable reason for why the claim fails to demonstrate the presence of X. — Leontiskos
If you look at traditional accounts of "enlightenment", "enlightenment" is not something one would normally desire, ever, because for all practical intents and purposes, "enlightenment" is a case of self-annihilation, self-abolishment. — baker
While it is said that if a lay person does attain "enlightenment", they have to ordain as a monastic within a few days or they die (!!), because an enlightened person is not able to live in this world, as they lack the drive and the ability to make a living. — baker
Why call something "Buddhist" when it has nothing to do with Buddhism? — baker
Is the most important thing we can do in this life to deny its value in favour of an afterlife, an afterlife which can never be known to be more than a conjecture at best, and a fantasy at worst? There seems to be a certain snobbishness, a certain classism, at play in these kinds of attitudes.
This sounds rather victim-ish. — baker
One problem with that is that the watered-down versions are being promoted as the real thing, and can eventually even replace it. — baker
What you say assumes what is at issue—that there really is a "real thing" to be found. — Janus
I said more later in the post you quoted. — baker
In Buddhism, there is the theme that we are now living in an age in which the Dharma ends: — baker
Although we already live in a mediocre time regarding art, AI would be the last nail in our coffin. But it is not too late—we can stop it and believe in ourselves again. — javi2541997
From my perspective, the biggest dangers from AI are the abilities to create new ways of killing people. — EricH
I think what it comes down to is that it depends on how it's used. This is where it gets interesting. — Jamal
