The AI is not being appealed to as an authority — Banno
It's noticeable that you have not presented any evidence, one way or the other.
If you think that what the AI said is wrong, then what you ought to do is to present evidence, perhaps in the form of peer-reviewed articles that say that humans can reliably recognise AI generated text.
But that is not what you have chosen to do. Instead, you cast aspersions. — Banno
No, I presented what the AI had to say, for critique. Go ahead and look at the papers it cites... — Banno
So you cannot see the difference between "A rule against AI use will not be heeded" and "A rule against AI use cannot be enforced". Ok. — Banno
Baden? Tell us what you think. Is my reply to you against the rules? — Banno
With intended irony...
Prompt: find peer-reviewed academic studies that show the effectiveness of any capacity to recognise AI generated text.
The result.
"...there is peer-reviewed evidence that both humans... and automated tools can sometimes detect AI-generated text above chance. Effectiveness is highly conditional. Measured accuracy is often only modest."
So yes, I overstated my case. You may be able to recognise posts as AI generated at a slightly better rate than choosing at random. — Banno
I no more pay attention to the fact that I am using a machine when I consult a.i. than when I use the word-processing features of my iphone to type this. — Joshs
If you have ever been prompted to seek out relevant literature to aid in the composing of an OP, or your response to an OP, then your telos in consulting that textual material is the same as that of the many here who consult a.i while engaging in TPF discussions. — Joshs
A pissing contest, combined with quasi-efforts at healing existential anxiety. — baker
Here is a case in point. I have not made the argument he here attributes to me. I have, amongst other things, pointed out that a rule against AI cannot be reliably enforced, which is quite different. — Banno
For those who think philosophy consists in a series of appeals to authority, AI must be quite confounding. — Banno
Anyway, here is one example: ask it to critique your argument. This is an exercise in humility and takes your thoughts out of the pugilistic mode and into the thoughtful, properly philosophical mode. It's a form of Socratization: stripping away the bullshit, re-orientating yourself towards the truth. Often it will find problems with your argument that can only be found when interpreting it charitably; on TPF this often doesn't happen, because people will score easy points and avoid applying the principle of charity at all costs, such that their criticisms amount to time-wasting pedantry. — Jamal
Another example: before LLMs it used to be a tedious job to put together an idea that required research, since the required sources might be diverse and difficult to find. The task of searching and cross-referencing was, I believe, not valuable in itself except from some misguided Protestant point of view. Now, an LLM can find and connect these sources, allowing you to get on with the task of developing the idea to see if it works. — Jamal
The arguments are similar to what I see here. "AI is inevitable and therefore" etc. Some teachers---the good ones---are appalled. — Baden
I think it would be helpful to continue to reflect on what is disagreeable about AI use and why. For example, if we don't know why we want to engage in human communication rather than non-human communication, then prohibitions based on that axiom will become opaque. — Leontiskos
I've mentioned this in the mod forum, so I'll mention it here too. I disagree with diluting the guidelines. I think we have an opportunity to be exceptional on the web in keeping this place as clean of AI-written content as possible. And given that the culture is veering more and more towards letting AI do everything, we are likely over time to be drowned in this stuff unless we assertively and straightforwardly set enforceable limitations. That is, I don't see any reward from being less strict that balances the risk of throwing away what makes us special and what, in the future, will be even rarer than it is now, i.e. a purely human online community.
The idea that we should keep up with the times to keep up with the times isn't convincing. Technocapitalism is definitive of the times we're in now, and it's a system that is not particularly friendly to human creativity and freedom. But you don't even have to agree with that to agree with me, only recognize that if we don't draw a clear line, there will effectively be no line. — Baden
It seems to me difficult to argue against the point, made in the OP, that since LLMs are going to be used, we have to work out how to use them well... — Jamal
that will lead people to hide their use of it generally. — Jamal
This is the sort of appeal-to-LLM-authority that I find disconcerting, where one usually does not recognize that they have appealed to the AI's authority at all. — Leontiskos
All of which makes using AI for philosophy, on one level, like using any one else’s words besides your own to do philosophy. — Fire Ologist
This becomes rather subtle, but what I find is that people who tell themselves that they are merely using AI to generate candidate theories which they then assess the validity of in a posterior manner, are failing to understand their own interaction with AI. They are failing to appreciate the trust they place in AI to generate viable candidate theories, for example. But they also tend to ignore the fact that they are very often taking AI at its word. — Leontiskos
Further, it makes no sense to give AI the type of authority that would settle a dispute, such as: “you say X is true - I say Y is true; but because AI says Y is true, I am right and you are wrong.” Just because AI spits out a useful turn of phrase and says something you happen to agree is true, that doesn’t add any authority to your position. — Fire Ologist
The presence and influence of AI in a particular writing needs to never be hidden from the reader. — Fire Ologist
You need to be able to make AI-generated knowledge your own, just as you make anything you know your own. — Fire Ologist
Unlike handing it to a human editor, which is what authors have been doing for yonks? — SophistiCat
Nah. You are engaging in the same basic equivocation between a human and an AI. The whole point is that interacting with humans is different from interacting with AI, and the two should not be conflated. You've begged the question in a pretty basic manner, namely by implying that interacting with a human duo is the same as interacting with a human and AI duo. — Leontiskos
So in this case the LLM carried out the tedious part of the task; — Jamal
It didn't occur to me that anyone would interpret those guidelines as suggesting that posts written by people who are using AI tools are generally superior to those written by people who don't use AI... — Jamal
We encourage using LLMs as assistants for research, brainstorming, and editing. — Deepseek
I like this. I asked Deepseek to incorporate it into a set of guidelines based on the existing AI discussions on TPF. Below is the output. I think it's a useful starting point, and I encourage people here to suggest additions and amendments. — Jamal
3. Prohibited Uses: What We Consider "Cheating"
The following uses undermine the community and are prohibited:
[*] Ghostwriting: Posting content that is entirely or mostly generated by an LLM without significant human input and without disclosure.
[*] Bypassing Engagement: Using an LLM to formulate responses in a debate that you do not genuinely understand. This turns a dialogue between people into a dialogue between AIs and destroys the "cut-and-thrust" of argument.
[*] Sock-Puppeting: Using an LLM to fabricate multiple perspectives or fake expertise to support your own position. — Deepseek
Inspired by Kierkegaard's ideas — Astorre
From a Socratic perspective, every temporal point of departure is eo ipso contingent, something vanishing, an occasion; the teacher is no more significant, and if he presents himself or his teachings in any other way, then he gives nothing... — Kierkegaard, Philosophical Crumbs, tr. M. G. Piety
But is the problem preaching, or is it a particular kind of preaching? — Leontiskos
I encountered the preacher's paradox in my everyday life. It concerns my children. Should I tell them what I know about religion myself, take them to church, convince them, or leave it up to them, or perhaps avoid religious topics altogether? — Astorre
I was drawn to this topic by conversations with so-called preachers (not necessarily Christian ones, but any kind). They say, "You must do this, because I'm a wise man and have learned the truth." When you ask, "What if I do this and it doesn't work?" Silence ensues, or something like, "That means you didn't do what I told you to do/you didn't believe/you weren't chosen." — Astorre
Question: Which of these judgments conveys the speaker's belief that the Sistine Chapel ceiling is beautiful, or proves it? — Astorre
"What cannot be spoken of, one must remain silent about." — Astorre
Language is incapable of exhaustively expressing subjective experience — Astorre
And here a paradox arises: infecting another person with an idea you don't fully understand yourself... — Astorre
It would be unethical, for instance, for me to ask a perfect stranger for their view about some sensitive material I've been asked to review - and so similarly unethical for me to feed it into AI. Whereas if I asked a perfect stranger to check an article for typos and spelling, then it doesn't seem necessary for me to credit them... — Clarendon
Yes, it would only be a heuristic and so would not assume AI is actually a person. — Clarendon
Preaching faith means either not having it or betraying it. — Astorre
The preacher supposedly doesn't teach, but testifies. — Astorre
But love doesn't guarantee the right to interfere in someone else's destiny. — Astorre
As soon as you try to convey faith, you rationalize it... — Astorre
It can point me to an interpretation that I hadn’t thought of, and I can then verify the credibility of that interpretation. — Joshs
The USSR collapsed not because it was too Marxist but because the vigour and paranoia of the liberal west out-competed it. The USSR functioned reasonably well and at least achieved the main aim of clambering aboard the rapidly industrialising world. But it was fundamentally inefficient rather than fundamentally a lie. — apokrisis
I would argue that one cannot believe something and not believe something at the same time. Or that it will at least lead to problems. — Leontiskos
That is why we have ambiguity. Logic demands that we don't. But then that is why Peirce had to add vagueness to logic. That to which the PNC does not apply.
Between absolute belief and absolute disbelief. I would say in practice that is where we all should sit. Even if the counterfactual grammar of logic doesn't like it. — apokrisis
Dominance~submission may be the natural dynamic. But it plays out with all the variety of its many different settings.
So the dynamic has the simplicity of a dichotomy. And then also the variety of the one principle that can emerge as the balancing act that suits every occasion. — apokrisis
Liberal democracy clearly promotes discussion about the socially constructed nature of society. That is the liberating thought. Hey guys, we invented this system. And if it seems shit, we can therefore invent something better. — apokrisis
By neutral, I mean in the dynamical systems sense of being critically poised. Ready to go vigorously in opposing directions as the need demands. So we have to have some central state from which to depart in counterfactual directions.
Neutrality is not a state of passivity. It is the most extreme form of potency as you can swing either way with equal vigour. Which is what makes your choice of direction always something with significance and meaning.
A passively neutral person is a very dull fellow. An actively neutral person is centred and yet always ready to act strongly in either direction. Be your friend, be your enemy. Act as the occasion appears to demand and then switch positions just as fast if something changes.
So neutrality at the level of an egalitarian social democracy is about promoting equal opportunity for all, but then also allowing everyone to suffer or enjoy the consequences of their own actions. Make their own mistakes and learn from them.
Within, then, socially agreed limits. A social safety net below and a tax and justice system above. A liberal society would aim to mobilise its citizens as active participants of that society, yet still impose a constraining balance on the overall outcomes. Winning and losing is fine. Just so long as it is kept within pragmatically useful bounds. — apokrisis
Well my argument is that "liberalism" is the promise of that kind of world. Or rather pragmatism. — apokrisis
We are socially constructed. — apokrisis
Well you seem to be calling social constructions fictions. So I can go along with that. — apokrisis
You can have political parties divided by left and right. Liberal and conservative. Working class and managerial class. But then the system as a whole is free to pick and choose how it acts from this range of options. Identities aren't tied to particular solutions. Everyone can see that pragmatism is what is winning in the general long run. Life doesn't feel broken at the social level, and thus at the individual level. — apokrisis
I find that a.i. is good at homing in on the expert opinions within these camps — Joshs
Isn't the best policy simply to treat AI as if it were a stranger? — Clarendon
We may be witnessing, in real time, the birth of a snowball of bullshit.
Are our conversations improving as a result? Or are they decaying? Let's wait and see. — unenlightened
That is, whenever we trust ChatGPT we have taken our thumb off the line that tests whether the response is true or false, and ChatGPT was created to be trusted. What could happen, and what very likely will happen, is that the accuracy of human literature will be polluted at a very fundamental level. We may find ourselves "at sea," supported by layers and layers of artificially generated truth-claims, none of which can any longer be sufficiently disentangled and verified. Verification requires the ability to trace and backtrack, and my guess is that this ability will be lost due to three things: the speed and power of the technology, a tendency towards uncritical use of the technology, and the absence of a verification paper-trail within the technology itself. — Leontiskos
What are we supposed to do about it? — RogueAI
so would you [...] cede the ai race to China? — RogueAI
In using a.i. for a field like philosophy, I think one is interacting with extremely intelligent fragments of the ideas of multiple knowledgeable persons, and one must consult one’s own understanding to incorporate, or disassemble and reassemble those fragments in useful ways. — Joshs
It's not the first time I've heard people combine progressive historical sentiments with Christianity. — ProtagoranSocratist
One of my goals is to read Copleston's entire works on the history of philosophy — ProtagoranSocratist
(B) swallowing the insulting fantasy of interaction with an intelligent oracle. — bongo fury
Unlike handing it to a human editor, which is what authors have been doing for yonks? — SophistiCat
I don't have to show X is absent. — Janus
"Not tout court inferior" is not a subjective claim but a refutation of the masquerade. — Janus
I don't agree with enslaving any species. — Janus
Any support they come up with will necessarily be merely subjective, while it purports to be a universally valid claim. — Janus
Such a race would obviously not be human. — Janus
Think of the claim that red is a superior colour to green. I reject that because it is unsupportable. If I say there are no sound criteria for considering red to be superior to green, is that claim falsifiable? — Janus
There simply are no sound criteria for considering one race to be, tout court, inferior to another. — Janus
"No race is, tout court, inferior to another." — Leontiskos
It's not that anti-racist claims are falsifiable. — Janus
The anti-racist claim is made on the basis of the unverifiability, and further, the complete unsupportability, of the racist claim. — Janus
The world does not work via baseball-bat falsification. — Leontiskos
It does. — AmadeusD
People do use violence as a 'valid retort' to various positions. — AmadeusD
What's being suggested is you are being sanguine to the point of irrelevancy. — AmadeusD
They think it's logical. — AmadeusD
Ignorance of how the world actually works (i.e how people actually reason) isn't fixed by inserting a (totally reasonable, and valid) position on the logic of those impulses. — AmadeusD
There simply are no sound criteria for considering one race to be, tout court, inferior to another. — Janus
And shame on you for suggesting I was a racist. — baker
As it is often put, a valid deductive argument extracts the implications of its premises. That's its function. I assume that it is no vice in an argument that it does this, but the point of such arguments... — Clarendon