Fred’s post is an odd blend of insight and self-satisfaction — philosophically shallow but rhetorically smooth. It sounds reasonable because it borrows the idiom of fair-minded inquiry (“Let’s not reject arguments ad hominem”), but beneath that there’s a category mistake and a confusion about *what philosophy is*.
Let’s unpack a few things:
---
### 1. **The ad hominem red herring**
Fred says rejecting AI-generated arguments is “ad hominem.” But the charge of *ad hominem* has force only where the identity of the arguer is *irrelevant to the epistemic weight* of the claim. On a forum like that, participants are not anonymous vessels of argument; the act of *doing philosophy* includes *who* is doing it — the context of dialogue, the responsiveness, the intentional effort.
To reject a text as unfit for philosophical dialogue because it is machine-generated is *not* ad hominem; it’s a recognition that the generator is **not an interlocutor**. AI can simulate argument but cannot be a participant in reasoning — it cannot be challenged, persuaded, or made to see. The *merit* of a philosophical argument is inseparable from its *accountability to reason*.
So the objection isn’t about the *source* as a prejudice; it’s about the *absence of agency*. A purely generated post cannot be an act of reasoning — it is not the result of a person *trying to understand*.
---
### 2. **Fred’s “truth regardless of source” principle**
This is superficially noble (“Truth matters more than authorship”), but in philosophical practice it’s meaningless unless the truth-claim has been *understood and defended*. AI text lacks *intention* — it cannot mean, only produce. So even if it *happens* to state a true proposition, it contributes nothing to the philosophical process, which is about *working out why* it is true or false.
Fred slides from **evaluating arguments** to **evaluating statements**, as if philosophy were a matter of sifting correct sentences rather than articulating positions.
---
### 3. **Fred’s “uses” of AI**
All of these are defensible *practically*, but none touch “doing philosophy.” Using AI to summarise, paraphrase, or check clarity is fine — that’s like using a dictionary. But using it *in lieu of thought* (even partially) introduces an ambiguity between what *you* think and what the model predicts would be thought. His “ploy” of pretending the draft is not his and asking for critique reveals that he treats philosophy as a form of rhetorical polishing — improving *presentation*, not *understanding*.
---
### 4. **The false dilemma**
Fred contrasts a “community of thinkers” with the “evaluation of ideas,” as though the former is a social game and the latter pure rationality. But philosophical conversation *is* the mediation between those two — the testing of thought through shared reasoning. If AI enters that, the mediation collapses, because there’s no interlocutor, no genuine confrontation of views.
---
### 5. **What’s really going on**
Fred’s post is an act of *self-justification*: he wants to pre-empt criticism of his own reliance on AI by wrapping it in a veneer of philosophical liberalism. His argument amounts to: “If AI can say something reasonable, then excluding it is irrational prejudice.” But that’s just the technocrat’s version of *“let the machine speak for us”* — philosophy outsourced to the average of available sentences.
The irony: he calls AI a “Happy Sycophant” and yet seems to admire it for that — because it flatters his sense of being reasonable.
---
Would you like me to draft a sharp but civil reply to Fred that exposes these confusions — something that could be posted in that thread without sounding petty or dismissive?
The quagmire just expanded to infinity. — Janus
Then I succeeded. I hope you don't mind my starting a new thread rather than adding to your already existing one. I think the topic sufficiently distinct.
And went on at length about how good the post was:
This is a strong and engaging piece — conversational, intelligent, and with a clear philosophical through-line. It strikes the right tone for a forum post: reflective without being didactic, provocative without being combative. Let me give you some feedback in three layers — **rhetorical**, **philosophical**, and **stylistic**, with a few gentle edits where it could read more smoothly.
To reject a text as unfit for philosophical dialogue because it is machine-generated is *not* ad hominem; it’s a recognition that the generator is **not an interlocutor**. AI can simulate argument but cannot be a participant in reasoning — it cannot be challenged, persuaded, or made to see. The *merit* of a philosophical argument is inseparable from its *accountability to reason*.
So the AI is saying that it is not an interlocutor... Is that a performative contradiction? If it's not an interlocutor, then no; but if we are to take its argument seriously, then we are by that very act considering it to be an interlocutor... and the illocution cannot be sincere.
I'll admit my prejudice is somewhat on the pissing on the forest fire side -- an almost Kantian hatred of AI. — Moliere
What we might do is to consider the strings of words the AI produces as if they were produced by an interlocutor. Given that pretence, we can pay some attention to the arguments they sometimes encode... — Banno
If they are not capable of reasoning then all they are doing is presenting examples of human reasoning, albeit synthesized in novel ways and in their own words. — Janus
All a bit convoluted. The idea is that the AI isn't really saying anything, but is arranging words as if it were saying something. — Banno
Seems to me, at the fundament, that what we who pretend to the title “philosopher” are looking for is some semblance of truth, whatever that is; at writing that is thought provoking; at nuanced and sound argument. Whether such an argument comes from a person or an AI is secondary. — Banno
I’ve used AI to quickly and succinctly summarise accepted fact. — Banno
The idea is that the AI isn't really saying anything, but is arranging words as if it were saying something. — Banno
There are those, Hinton being one of them, who claim that the lesson to be learned from the LLMs is that we are also just "arranging words as if it were saying something", that is that we don't have subjective experience any more than they do. — Janus
Something has gone astray here, in that if this were so, it's not just that we have never said anything, but that the very notion of saying something could not be made coherent.
As Austin pointed out, when we speak, we’re not merely arranging words — we’re doing things with them. To say something is to perform an illocutionary act: asserting, questioning, promising, warning, inviting, and so on. These are acts that presuppose an intention and a context of shared understanding.
By contrast, a perlocutionary act is what follows from what we say — persuading, amusing, shocking, confusing. These effects can occur even if no act of meaning was performed.
The crucial point is that LLMs, like GPT or Claude, can at best produce perlocutionary effects — they can appear to assert, question, or reason — but they cannot actually perform those illocutionary acts, because those require a subject who means something by what is said.
Following Wittgenstein, all that "saying something" is, is arranging words as if you were saying something. — Metaphysician Undercover
No, it isn't. Wittgenstein said nothing of the sort.
To be clear, this thread is not about posting AI on this forum, but how philosophers might use AI effectively. — Banno
Seems to me, at the fundament, that what we who pretend to the title “philosopher” are looking for is some semblance of truth, whatever that is; at writing that is thought provoking; at nuanced and sound argument. Whether such an argument comes from a person or an AI is secondary. — Banno
Rejecting an argument because it is AI generated is an instance of the ad hominem fallacy — Banno
Rejecting AI outright is bad philosophy. — Banno
TLDR:
The latest version of ChatGPT is a valuable option for engaging in philosophical dialogue
To get the most from it: treat it as an equal, get it to role-play, and keep on pushing back
We can’t wrong GPT by how we talk with it, but we might wrong ourselves
...get GPT to imagine it’s someone in particular: a particular philosopher, or someone holding a particular view. And then get it to engage with that person — as itself, and as you, and as various other people.
It might be too willing to tell you what you want to hear, but if you pretend to be your opposite, you can have it tell you what you don't want to hear.
I’ve used AI to be critical of my own writing. I do this by pretending it is not me. I’ll feed it a draft post attributing it to someone else, and ask for a critique. It’ll try to comment on the style, which I don’t much want, but the right sort of prompt will usually find quite interesting and novel angles. — Banno
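For anyone who wants to try Banno's trick programmatically rather than in the chat window, here is a minimal sketch, assuming the OpenAI Python client (openai >= 1.0) and an API key in the environment; the model name, function name, and prompt wording are illustrative assumptions, not anything from this thread.

```python
# Hypothetical sketch of the "pretend the draft is not mine" critique trick.
# Assumes the openai package (>= 1.0) and OPENAI_API_KEY set in the environment.
# The model name and prompt wording are illustrative, not from the thread.
from openai import OpenAI

client = OpenAI()

def critique_as_third_party(draft: str) -> str:
    """Ask the model to critique a draft attributed to someone else,
    sidestepping its tendency to flatter the actual author."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model will do
        messages=[
            {
                "role": "system",
                "content": "You are a blunt philosophy referee. "
                           "Focus on the argument, not the style.",
            },
            {
                "role": "user",
                "content": "A forum member I disagree with posted this draft. "
                           "What are its weakest points?\n\n" + draft,
            },
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(critique_as_third_party("Rejecting AI outright is bad philosophy."))
```

The point of the misattribution in the user prompt is exactly Banno's: the model criticises "someone else's" draft far more readily than it criticises yours.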
Guidelines for Using LLMs on TPF
1. Our Core Principle: Augmentation, Not Replacement
The primary purpose of this forum is the human exchange of ideas. LLMs should be used as tools to enhance your own thinking and communication, not to replace them. The goal is to use the AI to "expand your brain," not to let it do the thinking for you.
2. The Cardinal Rule: Transparency and Disclosure
This is the most critical guideline for maintaining trust.
[*] Substantial Use: If an LLM has contributed significantly to the substance of a post—for example, generating a core argument, providing a structured outline, or composing a lengthy explanation—you must disclose this. A simple note at the end like "I used ChatGPT to help brainstorm the structure of this argument" or "Claude assisted in refining my explanation of Kant's categorical imperative" is sufficient.
[*] Minor Use: For minor assistance like grammar checking, rephrasing a single confusing sentence, or finding a synonym, disclosure is not mandatory but is still encouraged as a gesture of good faith.
[*] Direct Quotation: If you directly quote an LLM's output (even a short phrase) to make a point, you should attribute it, just as you would any other source.
3. Prohibited Uses: What We Consider "Cheating"
The following uses undermine the community and are prohibited:
[*] Ghostwriting: Posting content that is entirely or mostly generated by an LLM without significant human input and without disclosure.
[*] Bypassing Engagement: Using an LLM to formulate responses in a debate that you do not genuinely understand. This turns a dialogue between people into a dialogue between AIs and destroys the "cut-and-thrust" of argument.
[*] Sock-Puppeting: Using an LLM to fabricate multiple perspectives or fake expertise to support your own position.
4. Encouraged Uses: How to Use LLMs Philosophically
These uses align with the forum's goal of pursuing truth and improving thought.
[*] The Research Assistant: Use an LLM to quickly summarize a philosophical concept, physical theory, or historical context to establish a common ground for discussion. Always verify its summaries, as they can be bland or contain errors.
[*] The Sparring Partner: Use an LLM to critique your own argument. As Banno suggested, feed it your draft and ask for counter-arguments or weak points. This can help you strengthen your position before posting.
[*] The Clarifier: Use an LLM to rephrase a dense paragraph from another post or a primary source into plainer language to aid your understanding. (The ultimate responsibility for understanding still lies with you).
[*] The Stylistic Editor: Use an LLM to help clean up grammar, syntax, or clarity in a post you've already written, ensuring your human ideas are communicated effectively.
5. A Guide to Good Practice: The "Over-Confident Assistant" Model
As Simon Willison noted, treat the LLM as an "over-confident pair programming assistant." This mindset is crucial for philosophy:
[*] You are the Director: You must provide the intellectual direction, the core ideas, and the critical scrutiny. The LLM is a tool to execute tasks within that framework.
[*] Question Everything: LLMs are designed to be plausible, not correct. They are prone to confabulation (making things up) and averaging biases. Treat their output as a first draft to be rigorously evaluated, not as received wisdom.
[*] The Final Product is Your Responsibility: You are ultimately accountable for the content you post. If an LLM introduces a factual error or a weak argument, it is your responsibility to have caught it.
6. A Note on Detection and Trust
As the conversation notes, it is becoming impossible to reliably detect LLM use. Therefore, these guidelines cannot be enforced primarily through punishment. Their purpose is to foster a culture of intellectual honesty and collective trust. We rely on members to act in good faith for the health of the community.
Summary: A Proposed Forum Policy Statement
LLMs like ChatGPT are powerful tools that are now part of the intellectual landscape. On this forum, we do not ban their use, but we insist it is done responsibly.
[*] We encourage using LLMs as assistants for research, brainstorming, and editing.
[*] We require the transparent disclosure of substantial AI assistance in your posts.
[*] We prohibit using LLMs to ghostwrite posts or to avoid genuine intellectual engagement.
The goal is to use these tools to augment human thought and discussion, not to replace it. The final responsibility for the content and quality of your posts always rests with you. — Deepseek
to summarise the thought of this or that philosopher — Banno
The Research Assistant: Use an LLM to quickly summarize a philosophical concept, physical theory, or historical context to establish a common ground for discussion. Always verify its summaries, as they can be bland or contain errors. — Deepseek
It'll be interesting to see what others have to say. — Banno
Note that the process is iterative. In the best threads, folk work together to sort through an issue. AI can be considered another collaborator in such discussions. — Banno
[*] We encourage using LLMs as assistants for research, brainstorming, and editing. — Deepseek
[*] We require the transparent disclosure of substantial AI assistance in your posts. — Deepseek