The key element in that scenario is that there is no interlocutor to engage with if you attempt a response. Light's on, nobody home. — Paine
Imagine I could offer you a prototype chatbot small talk generator. Slip on these teleprompter glasses. Add AI to your conversational skills. Become the life of the party, the wittiest and silkiest version of yourself, the sweet talker that wins every girl. Never be afraid of social interaction again. Comes with free pair of heel lift shoes. — apokrisis
So filling PF with more nonsense might be a friction that drags the almighty LLM down into the same pit of confusion. — apokrisis
I think TPF should continue what it's doing, which is to put some guardrails on AI use, but not ban it. — RogueAI
The real-world problem is that the AI bubble is debt-driven hype that has already become too big to fail. Its development has to be recklessly pursued, as otherwise we are in for the world of hurt that is the next post-bubble bailout.
Once again, capitalise the rewards and socialise the risks. The last bubble was mortgages. This one is tech.
So you might as well use AI. You’ve already paid for it well in advance. — apokrisis
That may be a good reason for you not to use AI, but it’s not a good reason to ban it from the forum. — T Clark
Maybe. If someone uses AI to create a fascinating post, could you engage with it? — frank
Impractical. But how about this: its use should be discouraged altogether?
I mean, its use in composition or editing of English text in a post. — bongo fury
Then you must also believe that using a long-dead philosopher's quote as the crux of your argument, or as the whole of your post, is also an issue. — Harry Hindu
So what? People also use makeup to look better. Who is being hurt?
The reason for objecting to plagiarism is a matter of property rights.
What is best for acquiring and spreading good information? — Athena
You can still submit your post as "s" to ChatGPT and ask it to expand on it. — Pierre-Normand
Ctrl+Z — Harry Hindu
So of course there are no 'well-documented occurrences of exceptions to nature's "laws"', as you say... because when they happen, it's good scientific practice to change the laws so as to make the exception disappear. — Banno
So are we to say that "the laws of nature are not merely codifications of natural invariances and their attributes, but are the invariances themselves", while also saying that we can change them to fit the evidence? How's that going to work? We change the very invariances of the universe to match the evidence? — Banno
Or is it just that what we say about stuff that happens is different to the stuff that happens, and it's better if we try to match what we say to what happens? — Banno
Indeed. And if laws are constraints, then the regularities can be statistical. Exceptions get to prove the general rule. — apokrisis
We want to avoid arriving at some transcendent power that lays down arbitrary rules. Instead we want laws to emerge in terms of being the constraints that cannot help but become the case even when facing the most lawless situations. — apokrisis
Isn't that simply because when we find such exceptions, we change the laws? — Banno
I guess I’m naïve or maybe just not very perceptive, but I haven’t recognized any posts definitely written by AI. — T Clark
Interesting, I haven’t noticed particularly. But I avoid reading lengthy and didactic posts which are often poorly written. The AI stuff I’ve seen often seems peculiarly worded and difficult to read. — Tom Storm
But you're proposing something and instead of telling us why it's a good proposal you're saying "if you want reasons, go and find out yourself." This is not persuasive. — Jamal
And it isn't clear precisely what you are proposing. What does it mean to ban the use of LLMs? If you mean the use of them to generate the content of your posts, that's already banned — although it's not always possible to detect LLM-generated text, and it will become increasingly impossible. If you mean using them to research or proof-read your posts, that's impossible to ban, not to mention misguided. — Jamal
I've been able to detect some of them because I know what ChatGPT's default style looks like (annoyingly, it uses a lot of em dashes, like I do myself). But it's trivially easy to make an LLM's generated output undetectable, by asking it to alter its style. So although I still want to enforce the ban on LLM-generated text, a lot of it will slip under the radar. — Jamal
It cannot be avoided, and it has great potential both for benefit and for harm. We need to reduce the harm by discussing and formulating good practice (and then producing a dedicated guide to the use of AI in the Help section). — Jamal
The source of one's post is irrelevant. All that matters is whether it is logically sound or not. — Harry Hindu
I see this from time to time. One I'm thinking of tries to baffle with bullshit. Best to walk away, right? — frank
I think the crux is that whenever a new technology arises we just throw up our hands and give in. "It's inevitable - there's no point resisting!" This means that each small opportunity where resistance is possible is dismissed, and most every opportunity for resistance is small. But I have to give TPF its due. It has resisted by adding a rule against AI. It is not dismissing all of the small opportunities. Still, the temptation to give ourselves a pass when it comes to regulating these technologies is difficult to resist. — Leontiskos
The problem is that you don't think you are required to give a falsifiable reason for why the claim fails to demonstrate the presence of X. — Leontiskos
If you look at traditional accounts of "enlightenment", "enlightenment" is not something one would normally desire, ever, because for all practical intents and purposes, "enlightenment" is a case of self-annihilation, self-abolishment. — baker
While it is said that if a lay person does attain "enlightenment", they have to ordain as a monastic within a few days or they die (!!), because an enlightened person is not able to live in this world, as they lack the drive and the ability to make a living. — baker
Why call something "Buddhist" when it has nothing to do with Buddhism? — baker
Is the most important thing we can do in this life to deny its value in favour of an afterlife, an afterlife which can never be known to be more than a conjecture at best, and a fantasy at worst? There seems to be a certain snobbishness, a certain classism, at play in these kinds of attitudes.
This sounds rather victim-ish. — baker
One problem with that is that the watered down versions are being promoted as the real thing, and can eventually even replace it.
— baker
What you say assumes what is at issue—that there really is a "real thing" to be found.
— Janus
I said more later in the post you quoted. — baker
In Buddhism, there is the theme that we are now living in an age in which the Dharma ends: — baker
Although we already live in a mediocre time regarding art, AI would be the last nail in our coffin. But it is not too late—we can stop it and believe in ourselves again. — javi2541997
From my perspective, the biggest danger from AI is its ability to create new ways of killing people. — EricH
I think what it comes down to is that it depends on how it's used. This is where it gets interesting. — Jamal
So someone can't objectively identify when X is present because to do so is impossible, but you are able to objectively identify when X is absent? Again, this makes no sense. Is it the unfalsifiable sophistry coming up again? — Leontiskos
If you are making a claim that says, "no, not tout court inferior," and the racist is making a claim that says, "yes, tout court inferior," and you say that "tout court inferior" is as subjective as the color claim, then both of you are making merely subjective claims, and neither one of you has any rational basis for enforcing your claim. — Leontiskos
On your reasoning if we found an alien species, how would we know how to treat it? — Leontiskos
I'm trying to make sense of this as a two-step process, because it avoids leaping directly from a body of evidence about past things to claims about the future without any clear reason. Laws of nature provide the reason. — Relativist
Wet is not the same as liquid, yet they are physically inseparable. Likewise, existents (i.e. things, facts) are discrete properties (i.e. events, fluctuations) of existence. — 180 Proof
Why is it unsupportable? You simply ask the claimant what they mean by "superior" and go from there. — Leontiskos
Thus if there is some race which is equivalent to a beast, such as an ox, then that race can be permissibly enslaved. We would be able to provide the racist with a falsifiable case, "Okay racist, so if you can demonstrate that this race has no greater dignity than an ox, then you will have proved that it is permissible to enslave them." — Leontiskos
So consider two charges:
"Your position is unverifiable."
"Your position is unsupportable." — Leontiskos
That is an anti-racist claim, and we are asking whether it is falsifiable. It seems that you and baker have missed the whole point. I am asking whether @Janus' anti-racist claim is falsifiable, given that Janus has said that falsifiability is the key to rationality and claim-making. — Leontiskos
They are separate in the same sense that a true fact, 2+2=4, is "separate from" truth. — Colo Millz
In this sense, JTB+U performs a Wittgensteinian clarification: it dissolves the illusion that justification alone guarantees comprehension. “U” distinguishes genuine justification from parroting, algorithmic correctness, or social conformity. Philosophically, that difference is now urgent—especially in an age where machines can simulate justification without understanding.
This is important, because it's easy to suppose your point is correct. — Sam26
I suspect we are emphasising different aspects of the same issues, and that we do not have an actual disagreement. What do you think? — Banno
I agree. However, we could draw inferences about the nature of reality by examining the past, and apply that analysis (that model of reality) to making predictions. This is, of course, the nature of physics. — Relativist
Note that your examples concern our beliefs. There's a difference between the past constraining the future, and the past constraining our beliefs about the future. Bayesian calculus only allows the latter.
The other is Gillian Russell's recent work on logic, just mentioned. That is about the world rather than about our beliefs. — Banno
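A minimal sketch of the distinction, assuming only the standard statement of Bayes' theorem (the formula itself is not quoted from any post above): conditionalisation is a rule for updating a degree of belief in a hypothesis H on evidence E,

$$P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)},$$

so the past observations summarised in E constrain our credence about the future, not the future itself, which is the distinction being drawn.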
the idea that the past constrains the future relies on the idea that the "laws of nature" may evolve over long time periods, but will not suddenly alter. — Janus
I still do not understand this. "We can by inductive reasoning" just is "the future will resemble the past". It's re-stating, not explaining. — Banno
To put it another way, it is rational in a practical sense to assume that the future will resemble the past, because to our knowledge it always has.
— Janus
That says that the future resembles the past, because the future resembles the past...?
Valid, I suppose, but I find it unsatisfactory. — Banno
