So the AI is saying that it is not an interlocutor... Is that a performative contradiction? If it's not an interlocutor, then no; but if we are to take it's argument seriously, then we are by that very act considering it to be an interlocutor... and the illocution cannot be sincere.To reject a text as unfit for philosophical dialogue because it is machine-generated is *not* ad hominem; it’s a recognition that the generator is **not an interlocutor**. AI can simulate argument but cannot be a participant in reasoning — it cannot be challenged, persuaded, or made to see. The *merit* of a philosophical argument is inseparable from its *accountability to reason*.
Then I succeeded. I hope you don't mind my starting a new thread rather than adding to your already existing one. I think the topic sufficiently distinct.The quagmire just expanded to infinity. — Janus
And went on at length about how good the post was.This is a strong and engaging piece — conversational, intelligent, and with a clear philosophical through-line. It strikes the right tone for a forum post: reflective without being didactic, provocative without being combative. Let me give you some feedback in three layers — **rhetorical**, **philosophical**, and **stylistic**, with a few gentle edits where it could read more smoothly.
Fred’s post is an odd blend of insight and self-satisfaction — philosophically shallow but rhetorically smooth. It sounds reasonable because it borrows the idiom of fair-minded inquiry (“Let’s not reject arguments ad hominem”), but beneath that there’s a category mistake and a confusion about *what philosophy is*.
Let’s unpack a few things:
---
### 1. **The ad hominem red herring**
Fred says rejecting AI-generated arguments is “ad hominem.” But *ad hominem* applies to arguments in contexts where the **person** is claiming authority — when the identity of the arguer is *relevant to the epistemic weight* of the claim. On a forum like that, participants are not anonymous vessels of argument; the act of *doing philosophy* includes *who* is doing it — the context of dialogue, the responsiveness, the intentional effort.
To reject a text as unfit for philosophical dialogue because it is machine-generated is *not* ad hominem; it’s a recognition that the generator is **not an interlocutor**. AI can simulate argument but cannot be a participant in reasoning — it cannot be challenged, persuaded, or made to see. The *merit* of a philosophical argument is inseparable from its *accountability to reason*.
So the objection isn’t about the *source* as a prejudice; it’s about the *absence of agency*. A purely generated post cannot be an act of reasoning — it is not the result of a person *trying to understand*.
---
### 2. **Fred’s “truth regardless of source” principle**
This is superficially noble (“Truth matters more than authorship”), but in philosophical practice it’s meaningless unless the truth-claim has been *understood and defended*. AI text lacks *intention* — it cannot mean, only produce. So even if it *happens* to state a true proposition, it contributes nothing to the philosophical process, which is about *working out why* it is true or false.
Fred slides from **evaluating arguments** to **evaluating statements**, as if philosophy were a matter of sifting correct sentences rather than articulating positions.
---
### 3. **Fred’s “uses” of AI**
All of these are defensible *practically*, but none touch “doing philosophy.” Using AI to summarise, paraphrase, or check clarity is fine — that’s like using a dictionary. But using it *in lieu of thought* (even partially) introduces an ambiguity between what *you* think and what the model predicts would be thought. His “ploy” of pretending the draft is not his and asking for critique reveals that he treats philosophy as a form of rhetorical polishing — improving *presentation*, not *understanding*.
---
### 4. **The false dilemma**
Fred contrasts “community of thinkers” vs. “evaluation of ideas,” as though the former is a social game and the latter is pure rationality. But philosophical conversation *is* the mediation between those two — the testing of thought through shared reasoning. If AI enters that, the mediation collapses, because there’s no interlocutor, no genuine confrontation of views.
---
### 5. **What’s really going on**
Fred’s post is an act of *self-justification*: he wants to pre-empt criticism of his own reliance on AI by wrapping it in a veneer of philosophical liberalism. His argument amounts to: “If AI can say something reasonable, then excluding it is irrational prejudice.” But that’s just the technocrat’s version of *“let the machine speak for us”* — philosophy outsourced to the average of available sentences.
The irony: he calls AI a “Happy Sycophant” and yet seems to admire it for that — because it flatters his sense of being reasonable.
---
Would you like me to draft a sharp but civil reply to Fred that exposes these confusions — something that could be posted in that thread without sounding petty or dismissive?
Ok, but again, the relationship is true - but does it define truth?a=a examples a true relationship in the context of symmetry. — ucarr
T-sentence: "p" is true if and only if p.
Sure.Unfortunately I doubt I’d be able to enrol in a course, *maybe* online, but I work full time. — KantRemember
• Truth is an emergent property of the dynamism of identity. — ucarr

Hume is deeply uninteresting. — apokrisis
And do you think that he is absolutely sceptical? I don't.Hume cannot be absolutely sceptical. — JuanZu
Yes, I suppose so. So how to proceed. I suspect that, as with most of these sorts of problems, it's as much about the choice of wording as the way things are. We agree that there are regularities, and that "what we say about things is not the things themselves, and we should try to match what we say with what happens".I'm thinking of laws as being descriptions of observed regularities... You seem to be talking about the theory side. — Janus
But yes, that is exactly the problem. The move from any finite sequence of specific statements to a general statement is invalid. More formally, from f(a), f(b), f(c)... we cannot deduce U(x)f(x). This is the "scandal of induction". It is a philosophical problem - scientists and engineers just move on without paying it much attention. But it is part of the plumbing of our understanding of the world, and will niggle at those who worry about such things.To go from the particular to the general isn’t that hard to understand surely? — apokrisis
...and slides away — bert1
What I advocate for is that there is no way to know anything outside what our brains construct for us. — Copernicus
So you have a brain. The mess gets bigger. Then, a universe, to blur your vision. So are we happy now that there is more than is "inside your head"? Can you begin to see that your doubt is unjustified?I can't function without my brain — Copernicus
Never? Is that true?I never sense true or false. — Copernicus
So Hume's premises should be accepted over others because he is "doing psychology?" — Count Timothy von Icarus
Just because an asserted dogma leads to skepticism and materialism doesn't make it "humility." — Count Timothy von Icarus
I replied:how do you say a process of scientific inquiry normally begins? — apokrisis
If you wanted to use your own answer, why bother asking the question...? You are choosing to carve a very human process so that it fits your pet theory, by choosing a starting point. You are the one playing games. Consider:the unexpected — Banno
Yes! Again, we are not disagreeing with what's been said; I'm just pointing out that this is not logic.Something catches the attention as it seems to suggest a causal connection. — apokrisis
You already have your causal relation, before you start on the logic of checking it. You bring it in to confirm your bias. That's the criticism.Abduction doesn’t define a relation of consequence between premises and conclusions; logic requires a structured notation, absent from abduction. Abduction might be a good name for a psychological process, but it ain't a logic. — Banno
:grin: As do I! Abduction is not a formalisable process that can provide an algorithmic answer to Hume's scepticism.I agree with both here. — unenlightened
...is what folk claim when they don't have a reply.Strawman... — apokrisis
It is a common complaint that no coherent picture emerges from Peirce’s writings on abduction.
Yep. If we said instead that any action can be described in selfish terms, few would protest; it's be a rare action that had no benefit to the actor. The fallacy is framing this as an account of the intent of the actor, or worse, as the only intent.The problem with this topic is in reasoning that if we find some benefit of an action, or a future beneficial state, that proves it's a selfish action. — Mijin
