guard against confabulation by asking for sources and checking them. — Banno
So you cannot see the difference between "A rule against AI use will not be heeded" and "A rule against AI use cannot be enforced". Ok. — Banno
Baden? Tell us what you think. Is my reply to you against the rules? — Banno
It is also worth noting how the pro-AI Banno simply takes the AI at its word, — Leontiskos
No, I presented what the AI had to say, for critique. Go ahead and look at the papers it cites... — Banno
But you didn't read the papers it cited, and you concluded, "So yes, I overstated my case. You may be able to recognise posts as AI generated at a slightly better rate than choosing at random." — Leontiskos
It's noticeable that you have not presented any evidence, one way or the other.
If you think that what the AI said is wrong, then what you ought to do is present evidence, perhaps in the form of peer-reviewed articles that say that humans can reliably recognise AI-generated text.
But that is not what you have chosen to do. Instead, you cast aspersions. — Banno
I am pointing out that all you have done is appealed to the authority of AI, — Leontiskos
That's simply not so. I am not saying that because it is AI generated, it is authoritative. The material is offered here for critique. Baden asked who said that the detection of AI text was unreliable. I used an AI to provide examples in answer to his question.
The AI is not being appealed to as an authority — Banno
Again, you have not even attempted to show that the AI's summation was in any way inaccurate. — Banno
There are primary sources, there are secondary sources, there are search engines, and then there is the LLM. Consulting a secondary source and consulting an LLM are not the same thing.
It is worth noting that those who keep arguing in favor of LLMs seem to need to make use of falsehoods, and especially false equivalences. — Leontiskos
The difference between consulting a secondary source and consulting an LLM is the following:
After locating a secondary source one merely jots down the reference and that’s the end of it. — Joshs
When one locates an argument from an LLM that one finds valuable, one decides if the argument is something one can either defend in one's own terms, or should be backed up with a direct quote from either a secondary or primary source. This can then be achieved with a quick prompt to the LLM, and finding a reference for the quote. — Joshs
The fact that proper use of AI leaves one with the choice between incorporating arguments one can easily defend and elaborate on based on one's own understanding of the subject, and connecting those arguments back to verifiable quotes, means that the danger of falsehood doesn't come up at all. — Joshs
If you have a group of people argue over a topic and then you appoint a person to summarize the arguments and produce a working document that will be the basis for further discussion, you haven't given them a "calculator" job. You have given them the most important job of all. You have asked them to draft the committee document, which is almost certainly the most crucial point in the process. Yet you have re-construed this as "a calculator job to avoid tedium." — Leontiskos
To say, "We encourage X," is to encourage X. It is not to say, "If you are doing Y, then we would encourage you to do Y in X manner." To say "allow" or "permit" instead of "encourage" would make a large difference. — Leontiskos
What is the end/telos? Of a university? Of a philosophy forum?
Universities have in some ways become engines for economic and technological progress. If that is the end of the university, and if AI is conducive to that end, then there is no reason to prevent students from using AI. In that case a large part of what it means to be "a good student" will be "a student who knows how to use AI well," and perhaps the economically-driven university is satisfied with that.
But liberal education in the traditional sense is not a servant to the economy. It is liberal; free from such servility. It is meant to educate the human being qua human being, and philosophy has always been a central part of that. — Leontiskos
The line for me is certainly crossed when posters begin to use them to directly write posts and particularly OPs — Baden
Baden? Tell us what you think. Is my reply to you against the rules? And should it be? — Banno
"We encourage X," the X stands for "using LLMs as assistants for research, brainstorming, and editing," with the obvious emphasis on the "as". — Jamal
@Jamal @Baden
Regarding the new policy, sometimes when I’ve written something that comes out clunky I run it through an AI for “clarity and flow” and it subtly rearranges what I’ve written. Is that a no-no now? — praxis
My take on this---which I think is fairly consistent with Jamal's, as we've just had an exchange in the mod forum---is, as I said there:
"We allow proofreading in the guidelines. But we also more or less say if the proofreading moves too far into editing and then rewriting and therefore makes your text look AI generated, that's a risk you run. I would agree it's similar to Grammarly in a way, but AI can sometimes take it too far. So, yes, it's not against the rules in itself, but I don't know why people can't just live with a bit of clunky writing. It will save us wondering about whether or not it's AI gen'd and maintain their quirky individuality." — Baden
Arguably the most important part of the job is very often the "calculator" task, the most tedious task. — Jamal
But I'll keep "encourage", since the point is to encourage some uses of LLMs over others. In "We encourage X," the X stands for "using LLMs as assistants for research, brainstorming, and editing," with the obvious emphasis on the "as". — Jamal
If you had asked the AI to write your reply in full or in part and had not disclosed that, we would be in the area I want to immediately address. — Baden
Reflectivity and expressivity, along with intuition and imagination are at the heart of what we do here, and at least my notion of what it means to be human. — Baden
And, while AIs can be a useful tool (like all technology, they are both a toxin and a cure), there is a point at which they become inimical to what TPF is and should be. The line for me is certainly crossed when posters begin to use them to directly write posts and particularly OPs, in full or in part. And this is something it is still currently possible to detect. The fact that it is more work for us mods is unfortunate. But I'm not for throwing in the towel. — Baden
This is a thread about ways to make use of AI in doing philosophy. Make a list of the suggestions in that thread that would supplement or improve the quality of the philosophy presented in the posts on the forum. — prompt
I'll fetch and analyze this philosophy forum thread to identify suggestions for using AI effectively in philosophical work.

Based on my analysis of this philosophy forum thread, here are the suggestions for using AI that would supplement or improve the quality of philosophical work:
Productive AI Uses for Philosophy
**Research and Knowledge Acquisition:**
- Use AI to quickly summarize philosophical concepts, theories, or historical contexts to establish common ground for discussion
- Summarize the thought of specific philosophers as a starting point
- Guard against AI confabulation by always asking for sources and verifying them
- Use AI for extensive research conversations (as opposed to writing)
**Critical Engagement and Self-Improvement:**
- Use AI as a Socratic sparring partner to challenge your own arguments
- Feed your draft work to AI (pretending it's from someone else) and ask for critique to identify weak points before posting
- Have AI generate counter-arguments to strengthen your position
- Use AI to explore multiple interpretations and readings of philosophical texts by crafting prompts that cover a wide variety of perspectives
**Role-Playing and Dialectical Exploration:**
- Get AI to role-play as a specific philosopher or someone holding a particular view
- Have AI engage from different perspectives (as itself, as you, as various other people)
- Use AI to argue positions you don't hold, allowing you to hear what you don't want to hear
**Writing Support:**
- Use AI for clarity and editing—rephrasing clunky sentences or improving flow
- Use AI as a stylistic editor for grammar, syntax, and clarity after writing your own ideas
- Use AI for proofreading (though be aware this can sometimes edge too far into rewriting)
**Synthesis and Organization:**
- Use AI to synthesize multiple perspectives or arguments (e.g., synthesizing different philosophical approaches in a particular style)
- Use AI to help structure ideas and brainstorm, but not to compose full arguments
- Use AI for the "tedious tasks" of organizing and structuring complex material
**Critical Best Practices:**
- **Always verify AI output** - treat AI as an "over-confident assistant" that requires rigorous evaluation
- **Maintain intellectual ownership** - you must be able to defend any AI-generated argument in your own terms
- **Iterate and refine** - use AI outputs as starting points for your own philosophical refinement
- **Check sources** - when AI provides references, verify them (as they can be fabricated)
- **Use transparency** - disclose substantial AI assistance to maintain trust and intellectual honesty
- **Avoid tunnel vision** - craft prompts that explore diverse readings rather than confirming your biases
- **Make it your own** - AI-generated knowledge must be internalized just as you would internalize knowledge from any source
The thread emphasizes that AI should augment human thinking rather than replace it, serving as a tool for expanding your philosophical capacities while you remain the ultimate director and arbiter of the ideas. — Claude