I think, given the dangers of AI, and the ways in which prominent members of this site have used it to make themselves look smarter than they really are, that its use should be banned altogether on this site. — Janus
AI poses several dangers to ordinary human intellectual debate, primarily through the erosion of critical thinking, the amplification of bias, and the potential for large-scale misinformation. Instead of fostering deeper and more informed discourse, AI can undermine the very human skills needed for a robust and productive exchange of ideas.
Erosion of critical thinking and independent thought: By outsourcing core intellectual tasks to AI, humans risk a decline in the mental rigor necessary for debate.
Cognitive offloading: People may delegate tasks like research and analysis to AI tools, a process called cognitive offloading. Studies have found a negative correlation between heavy AI use and critical thinking scores, with younger people showing a greater dependence on AI tools for problem-solving.
Reduced analytical skills: Over-reliance on AI for quick answers can diminish a person's ability to engage in independent, deep analysis. The temptation to let AI generate arguments and counterarguments can bypass the human-centered process of careful reasoning and evaluation.
Stagnation of ideas: If everyone relies on the same algorithms for ideas, debate can become repetitive and less creative. True intellectual debate thrives on the unpredictable, human-driven generation of novel thoughts and solutions.
Amplification of bias and groupthink: AI systems are trained on human-created data, which often contains pre-existing biases. Algorithms can create "filter bubbles" and "echo chambers" by feeding users content that reinforces their existing beliefs. In a debate, this means participants may be intellectually isolated, only encountering information that confirms their own point of view, and they may be less exposed to diverse perspectives.
Erosion of authenticity: As AI-generated content becomes indistinguishable from human-generated content, it can breed a pervasive sense of distrust. In a debate, it becomes harder for participants to trust the authenticity of arguments, eroding the foundation of good-faith discussion.
I think, given the dangers of AI, and the ways in which prominent members of this site have used it to make themselves look smarter than they really are, that its use should be banned altogether on this site. — Janus
There have always been enough overblown and oratorical but poorly thought out OPs and posts here on the forum even without AI that I don’t know how easy it is to tell. Perhaps it would be helpful if people called them out when you see them. — T Clark
Banning A.I. is banning background research that will become built into the way we engage with each other. — Joshs
I guess I’m naïve or maybe just not very perceptive, but I haven’t recognized any posts definitely written by AI. — T Clark
Surely, discussing AI capabilities, flaws and impacts, as well as the significance this technology has for the philosophy of mind and of language (among other things) should be allowed, and illustrating those topics with properly advertised examples of AI outputs should be allowed. — Pierre-Normand
The A.I.-derived OP’s are likely to be better thought-out than many non-A.I. efforts. Banning A.I. is banning background research that will become built into the way we engage with each other. — Joshs
I am not going to outline all the possible dangers of AI—people can educate themselves about that by undertaking a search in whatever search engine they use or YouTube or whatever. — Janus
I am not going to outline all the possible dangers of people educating themselves by undertaking a search in whatever search engine they use or YouTube or whatever. — praxis
I guess I’m naïve or maybe just not very perceptive, but I haven’t recognized any posts definitely written by AI. There have always been enough overblown and oratorical but poorly thought out OPs and posts here on the forum even without AI that I don’t know how easy it is to tell. Perhaps it would be helpful if people called them out when you see them. — T Clark
Interesting, I haven’t noticed particularly. But I avoid reading lengthy and didactic posts which are often poorly written. The AI stuff I’ve seen often seems peculiarly worded and difficult to read. — Tom Storm
people can educate themselves about that by undertaking a search in whatever search engine they use — Janus
The A.I.-derived OP’s are likely to be better thought-out than many non-A.I. efforts. Banning A.I. is banning background research that will become built into the way we engage with each other. Think of it as walking around with a host of sages constantly whispering purported words of wisdom into your ear, and it is up to you to sort out what is valuable and what isn’t, what is true and what is false. — Joshs
The tool is now ubiquitous. Every intellectual is going to have to factor it into their practice. Time to learn what that means.
If you need to cheat to pass your exams or publish your research, then in the end it is you who suffers. But if AI can be used in a way that actually expands your brain, then that ought to be encouraged.
PF seems a suitably low stakes place to evolve some social norms. — apokrisis
LLMs now routinely write clear and flowing prose. — Jamal
Take me for instance. Although I use LLMs quite a lot, for everyday tasks or research, in the context of philosophical discussion or creative writing I always say I never directly cut and paste what they give me. But sometimes they come up with a word or phrase that is too good to refuse. So — was I lying? — Jamal
I'm mystified that percipient philosophers can't see a gaping difference between (A) using a search engine to produce a list of texts containing a given string (well done, us and it) and on the other hand (B) swallowing the insulting fantasy of interaction with an intelligent oracle. — bongo fury
That is, I can't understand or sympathise with them admitting to reading the AI summary, instead of ignoring that insulting click-bait and searching immediately among the genuinely authored texts. — bongo fury
Yes — bongo fury
The regular Google results have been garbage for years, — Jamal
Is using a thesaurus to write a novel and saying you wrote it lying? — Jamal
This is obviously missing the point. We knew the order of listing was biased and constantly under attack from bots. It was our job to filter and find actually authored texts, and attribute their epistemic value or lack of it to the genuinely accountable authors. — bongo fury
You honestly now want to defer those epistemic judgements to a bot? — bongo fury
No. Well done you. Getting the thesaurus to suggest whole phrases and sentences is obviously plagiarism. The gaping difference denied, again. — bongo fury
Getting the thesaurus to suggest whole phrases and sentences is obviously plagiarism. — bongo fury
annoyingly, it uses a lot of em dashes, like I do myself — Jamal
For some reason it always puts spaces between em-dashes, which is a stylistic faux pas outside a few style guides (basically just AP), and so this is one way to distinguish the usages—also, it virtually never uses an em-dash for a single trailing clause, instead always bracketing a clause in them. — Count Timothy von Icarus
Anyhow, it seems to me that the main risks of them are:
Sycophantic confirmation bias; anything said to GPT in particular is "genius;"
Hallucination of sources, which is bad for any information environment when they get repeated. — Count Timothy von Icarus
I would think handing your half-formed prose to a bot for it to improve it is plagiarism, regardless of the number of words changed or inserted. It's a different thing from you deliberately searching for a synonym. — bongo fury
The Conceptual Boundary of Authorship
The act of submitting half-formed prose to an autonomous processing system for "improvement" raises a profound ethical and philosophical question regarding the locus of authorship.
I would posit that this practice constitutes an illicit appropriation of intellectual effort—a form of plagiarism—irrespective of the quantitative degree of lexical or syntactic transformation enacted by the machine. The core violation lies in the delegation of the substantive process of refinement and telos (purposeful development) of the text to an external agent without explicit, critical engagement.
This is epistemologically distinct from the deliberate, conscious act of a human agent consulting a thesaurus to seek a more precise synonym. The latter remains an act of intentional, informed choice that preserves the continuous thread of human intellectual stewardship over the text's final form and meaning. The former, in contrast, risks dissolving the very boundary between personal expression and automated fabrication.
Within 10 years, the vast majority of the internet will be AI generated; such is the logic of competitiveness and consumerism. We won't be. — Baden