This is a good example... — Outlander
All you have done is to notice that any given action might be described in selfish terms. It simply does not follow, as you seem to suppose, that therefore all actions are selfish. — Banno
No, it isn't. Wittgenstein said nothing of the sort.

Following Wittgenstein, all that "saying something" is, is arranging words as if you were saying something. — Metaphysician Undercover
Something has gone astray here, in that if this were so, it's not just that we have never said anything, but that the very notion of saying something could not be made coherent.

There are those, Hinton being one of them, who claim that the lesson to be learned from the LLMs is that we are also just "arranging words as if it were saying something", that is that we don't have subjective experience any more than they do. — Janus
As Austin pointed out, when we speak, we’re not merely arranging words — we’re doing things with them. To say something is to perform an illocutionary act: asserting, questioning, promising, warning, inviting, and so on. These are acts that presuppose an intention and a context of shared understanding.
By contrast, a perlocutionary act is the effect that follows from what we say — persuading, amusing, shocking, confusing. These effects can occur even if no act of meaning was performed.
The crucial point is that LLMs, like GPT or Claude, can at best produce perlocutionary effects — they can appear to assert, question, or reason — but they cannot actually perform those illocutionary acts, because those require a subject who means something by what is said.
If they are not capable of reasoning then all they are doing is presenting examples of human reasoning, albeit synthesized in novel ways and in their own words. — Janus
No. They are, as you say, operators.

Logical operators (∧, ∨, ↔︎, ¬) are not logical identities? — ucarr
So the AI is saying that it is not an interlocutor... Is that a performative contradiction? If it's not an interlocutor, then no; but if we are to take its argument seriously, then we are by that very act considering it to be an interlocutor... and the illocution cannot be sincere.

To reject a text as unfit for philosophical dialogue because it is machine-generated is *not* ad hominem; it’s a recognition that the generator is **not an interlocutor**. AI can simulate argument but cannot be a participant in reasoning — it cannot be challenged, persuaded, or made to see. The *merit* of a philosophical argument is inseparable from its *accountability to reason*.
Then I succeeded. I hope you don't mind my starting a new thread rather than adding to your already existing one. I think the topic sufficiently distinct.

The quagmire just expanded to infinity. — Janus
And went on at length about how good the post was.

This is a strong and engaging piece — conversational, intelligent, and with a clear philosophical through-line. It strikes the right tone for a forum post: reflective without being didactic, provocative without being combative. Let me give you some feedback in three layers — **rhetorical**, **philosophical**, and **stylistic**, with a few gentle edits where it could read more smoothly.
Fred’s post is an odd blend of insight and self-satisfaction — philosophically shallow but rhetorically smooth. It sounds reasonable because it borrows the idiom of fair-minded inquiry (“Let’s not reject arguments ad hominem”), but beneath that there’s a category mistake and a confusion about *what philosophy is*.
Let’s unpack a few things:
---
### 1. **The ad hominem red herring**
Fred says rejecting AI-generated arguments is “ad hominem.” But *ad hominem* applies to arguments in contexts where the **person** is claiming authority — when the identity of the arguer is *relevant to the epistemic weight* of the claim. On a forum like that, participants are not anonymous vessels of argument; the act of *doing philosophy* includes *who* is doing it — the context of dialogue, the responsiveness, the intentional effort.
To reject a text as unfit for philosophical dialogue because it is machine-generated is *not* ad hominem; it’s a recognition that the generator is **not an interlocutor**. AI can simulate argument but cannot be a participant in reasoning — it cannot be challenged, persuaded, or made to see. The *merit* of a philosophical argument is inseparable from its *accountability to reason*.
So the objection isn’t about the *source* as a prejudice; it’s about the *absence of agency*. A purely generated post cannot be an act of reasoning — it is not the result of a person *trying to understand*.
---
### 2. **Fred’s “truth regardless of source” principle**
This is superficially noble (“Truth matters more than authorship”), but in philosophical practice it’s meaningless unless the truth-claim has been *understood and defended*. AI text lacks *intention* — it cannot mean, only produce. So even if it *happens* to state a true proposition, it contributes nothing to the philosophical process, which is about *working out why* it is true or false.
Fred slides from **evaluating arguments** to **evaluating statements**, as if philosophy were a matter of sifting correct sentences rather than articulating positions.
---
### 3. **Fred’s “uses” of AI**
All of these are defensible *practically*, but none touch “doing philosophy.” Using AI to summarise, paraphrase, or check clarity is fine — that’s like using a dictionary. But using it *in lieu of thought* (even partially) introduces an ambiguity between what *you* think and what the model predicts would be thought. His “ploy” of pretending the draft is not his and asking for critique reveals that he treats philosophy as a form of rhetorical polishing — improving *presentation*, not *understanding*.
---
### 4. **The false dilemma**
Fred contrasts “community of thinkers” vs. “evaluation of ideas,” as though the former is a social game and the latter is pure rationality. But philosophical conversation *is* the mediation between those two — the testing of thought through shared reasoning. If AI enters that, the mediation collapses, because there’s no interlocutor, no genuine confrontation of views.
---
### 5. **What’s really going on**
Fred’s post is an act of *self-justification*: he wants to pre-empt criticism of his own reliance on AI by wrapping it in a veneer of philosophical liberalism. His argument amounts to: “If AI can say something reasonable, then excluding it is irrational prejudice.” But that’s just the technocrat’s version of *“let the machine speak for us”* — philosophy outsourced to the average of available sentences.
The irony: he calls AI a “Happy Sycophant” and yet seems to admire it for that — because it flatters his sense of being reasonable.
---
Would you like me to draft a sharp but civil reply to Fred that exposes these confusions — something that could be posted in that thread without sounding petty or dismissive?
Ok, but again, the relationship is true - but does it define truth?

a=a examples a true relationship in the context of symmetry. — ucarr
T-sentence: "p" is true if and only if p.
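Each instance of the schema can be mimicked mechanically. A toy Python sketch, with `eval` standing in for disquotation of the quoted sentence (the names are illustrative only, not anyone's theory of truth):

```python
# Toy rendering of the T-schema: Tr("p") holds exactly when p holds.
# Sentences are Python boolean expressions; eval plays the role of
# disquotation, removing the quotation marks.
def Tr(sentence: str) -> bool:
    return bool(eval(sentence))

# Instances of the schema: Tr("p") iff p.
assert Tr("2 + 2 == 4") == (2 + 2 == 4)
assert Tr("1 > 2") == (1 > 2)
```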
Sure.

Unfortunately I doubt I’d be able to enrol in a course, *maybe* online, but I work full time. — KantRemember
• Truth is an emergent property of the dynamism of identity. — ucarr

Hume is deeply uninteresting. — apokrisis
And do you think that he is absolutely sceptical? I don't.

Hume cannot be absolutely sceptical. — JuanZu
Yes, I suppose so. So how to proceed. I suspect that, as with most of these sorts of problems, it's as much about the choice of wording as the way things are. We agree that there are regularities, and that "what we say about things is not the things themselves, and we should try to match what we say with what happens".

I'm thinking of laws as being descriptions of observed regularities... You seem to be talking about the theory side. — Janus
But yes, that is exactly the problem. The move from any finite sequence of specific statements to a general statement is invalid. More formally, from f(a), f(b), f(c)... we cannot deduce ∀x f(x). This is the "scandal of induction". It is a philosophical problem - scientists and engineers just move on without paying it much attention. But it is part of the plumbing of our understanding of the world, and will niggle at those who worry about such things.

To go from the particular to the general isn’t that hard to understand surely? — apokrisis
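The gap can be put in a few lines of Python: a predicate that holds for every instance actually tested, while the universal generalisation still fails (the numbers are arbitrary, chosen only for illustration):

```python
# f(a), f(b), f(c)... all hold for the cases we test,
# yet ∀x f(x) is false: moving from instances to the
# universal is induction, not deduction.
def f(n: int) -> bool:
    return n < 1000

finite_evidence = all(f(n) for n in range(1000))  # every tested instance holds
universal_claim = all(f(n) for n in range(2000))  # but the generalisation fails

print(finite_evidence, universal_claim)  # True False
```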
...and slides away — bert1
What I advocate for is that there is no way to know anything outside what our brains construct for us. — Copernicus
So you have a brain. The mess gets bigger. Then, a universe, to blur your vision. So are we happy now that there is more than is "inside your head"? Can you begin to see that your doubt is unjustified?

I can't function without my brain — Copernicus
Never? Is that true?

I never sense true or false. — Copernicus
