• Janus
    17.6k
    I think, given the dangers of AI, and the ways in which prominent members of this site have used it to make themselves look smarter than they really are, that its use should be banned altogether on this site.

    I come here to listen to what others think and discuss ideas with them, not with chatbots.

    I am not going to outline all the possible dangers of AI—people can educate themselves about that by undertaking a search in whatever search engine they use or YouTube or whatever. It's not hard to find people like Yuval Noah Harari and Geoffrey Hinton.
  • 180 Proof
    16.1k
    I come here to listen to what others think and discuss ideas with them, not with chatbots.Janus
    :100: I don't bother reading or responding to any post that I even suspect is chatbot/LLM chatter.
  • Metaphysician Undercover
    14.3k
    I think, given the dangers of AI, and the ways in which prominent members of this site have used it to make themselves look smarter than they really are, that its use should be banned altogether on this site.Janus

    If copying AI makes them look smarter than they are, that's pretty sad.
  • apokrisis
    7.6k
    And AI agrees. :razz:

    AI poses several dangers to ordinary human intellectual debate, primarily through the erosion of critical thinking, the amplification of bias, and the potential for large-scale misinformation. Instead of fostering deeper and more informed discourse, AI can undermine the very human skills needed for a robust and productive exchange of ideas.

    Erosion of critical thinking and independent thought: By outsourcing core intellectual tasks to AI, humans risk a decline in the mental rigor necessary for debate.

    Cognitive offloading: People may delegate tasks like research and analysis to AI tools, a process called cognitive offloading. Studies have found a negative correlation between heavy AI use and critical thinking scores, with younger people showing a greater dependence on AI tools for problem-solving.

    Reduced analytical skills: Over-reliance on AI for quick answers can diminish a person's ability to engage in independent, deep analysis. The temptation to let AI generate arguments and counterarguments can bypass the human-centered process of careful reasoning and evaluation.

    Stagnation of ideas: If everyone relies on the same algorithms for ideas, debate can become repetitive and less creative. True intellectual debate thrives on the unpredictable, human-driven generation of novel thoughts and solutions.

    Amplification of bias and groupthink: AI systems are trained on human-created data, which often contains pre-existing biases. Algorithms can create "filter bubbles" and "echo chambers" by feeding users content that reinforces their existing beliefs. In a debate, this means participants may be intellectually isolated, only encountering information that confirms their own point of view, and they may be less exposed to diverse perspectives.

    Erosion of authenticity: As AI-generated content becomes indistinguishable from human-generated content, it can breed a pervasive sense of distrust. In a debate, it becomes harder for participants to trust the authenticity of arguments, eroding the foundation of good-faith discussion.
  • T Clark
    15.4k
    And AI agrees. :razz:apokrisis

    A little snotty irony is always appreciated
  • T Clark
    15.4k
    I think, given the dangers of AI, and the ways in which prominent members of this site have used it to make themselves look smarter than they really are, that its use should be banned altogether on this site.Janus

    I guess I’m naïve or maybe just not very perceptive, but I haven’t recognized any posts definitely written by AI. There have always been enough overblown and oratorical but poorly thought out OPs and posts here on the forum even without AI that I don’t know how easy it is to tell. Perhaps it would be helpful if people called them out when they see them.
  • Tom Storm
    10.3k
    I think, given the dangers of AI, and the ways in which prominent members of this site have used it to make themselves look smarter than they really are, that its use should be banned altogether on this site.Janus

    Interesting, I haven’t noticed particularly. But I avoid reading lengthy and didactic posts which are often poorly written. The AI stuff I’ve seen often seems peculiarly worded and difficult to read.
  • Pierre-Normand
    2.7k
    I'm unsure in what way the OP's proposal is meant to strengthen the already existing prohibition on the use of AI. Maybe the OP is concerned that this prohibition isn't being sufficiently enforced in some cases. If someone has an AI write their responses for them, or re-write them, that's already prohibited. I think one is allowed to make use of them as spell/grammar checkers. I've already argued myself about the downsides of using them for more substantive writing assistance (e.g. rewording or rephrasing what one intends to post in a way that could alter the meaning in ways not intended by the poster and/or not reflective of their own understanding). But it may be difficult to draw the line between simple language correction and substantive rewording. If a user is suspected of abusing such AI assistance, I suppose moderators could bring it up with that user and/or deal with it with a warning.

    One might also use AI for research or for bouncing ideas off it before posting. Such usages seem unobjectionable to me and, in any case, a prohibition on them would be difficult to enforce. Lastly, AI currently has huge societal impacts. Surely, discussing AI capabilities, flaws and impacts (including its dangers), as well as the significance this technology has for the philosophy of mind and of language (among other things), is important, and illustrating those topics with properly advertised examples of AI outputs should be allowed.
  • Joshs
    6.4k


    There have always been enough overblown and oratorical but poorly thought out OPs and posts here on the forum even without AI that I don’t know how easy it is to tell. Perhaps it would be helpful if people called them out when they see them.T Clark

    The A.I.-derived OPs are likely to be better thought-out than many non-A.I. efforts. Banning A.I. is banning background research that will become built into the way we engage with each other. Think of it as walking around with a host of sages constantly whispering purported words of wisdom into your ear, and it is up to you to sort out what is valuable and what isn’t, what is true and what is false. Would I rather rely on my own knowledge than expose myself to the potentially dangerous influence of these muses? Hell no, I thrive on the opportunity to challenge my skills at vetting information.

    If I am responding to an OP, I don’t care whether it is a human or one of the whispering muses I’m dealing with. I have at times learned much from my conversations with these muses. If the human who sets them into action doesn’t know how to properly guide them, they may of course make a disaster out of the OP almost as bad as that which many human posters have been known to do.
    But I’m willing to take my chances with both the human and their muses.
  • 180 Proof
    16.1k
    I avoid reading lengthy and didactic posts which are often poorly written. The AI stuff I’ve seen often seems peculiarly worded and difficult to read.Tom Storm
    :up: :up:
  • T Clark
    15.4k
    Banning A.I. is banning background research that will become built into the way we engage with each other.Joshs

    I disagree with this. I was toying around with a bunch of disparate ideas that seemed related to me. I used ChatGPT to help me figure out what they had in common. That seems like a legitimate use to me. I use a thesaurus when I can’t think of the right word for a particular idea. I use quotes when I want to add legitimacy or clarity. AI feels like the same kind of tool.
  • apokrisis
    7.6k
    I guess I’m naïve or maybe just not very perceptive, but I haven’t recognized any posts definitely written by AI.T Clark

    I’m definitely seeing posters who are suddenly injecting chunks of more organised and considered material into their responses. There are AI tools to detect the giveaway changes in rhythm, vocab and style. But if you know the poster, even if they’ve done some rewriting, it is already jarring enough.

    So sure. AI as a tool will change things in ways that are the usual mix of better and worse. And all my life I have seen nothing but that kind of change.

    I remember life before and after Google. The internet before and after it was just academics and geeks on it. The world as it once was when I had to fill out cards at the British Library and wait several days for obscure tomes to arrive at my desk, brought by porters with clanking metal trolleys.

    Being a Luddite never works. Listservs were once the greatest intellectual medium ever invented - the ideal combination of book and conference. But the internet got overrun and personal blogs took over. They didn’t last long themselves - or tried to evolve into substacks or whatever. I had already lost interest in that line of development. YouTube was the next medium to become actually useful.

    If anyone values PF for some reason, they ought to think about why and how to respond to AI from that point of view. Banning it is just going to increase the disguised use of it. Folk can already Google and then can’t help but get an AI response from it as the first hit. So would one ban search engines too?

    There was once a moment when PF went in for social media likes and dislikes. PF is already socially gamified and some got into that while others deplored it. I think the change in platform might have simply failed to support the necessary like button. I vaguely remember an ignore function that also bit the dust.

    Anyway, the point is there is always change and its tempo is only increasing. And what even is PF’s mission? What would you miss most if it upped and vanished? That should inform any policies on AI.

    Are we here for erudition or the drama? And what would AI’s impact be on either?
  • apokrisis
    7.6k
    Surely, discussing AI capabilities, flaws and impacts, as well as the significance this technology has for the philosophy of mind and of language (among other things), is important, and illustrating those topics with properly advertised examples of AI outputs should be allowed.Pierre-Normand

    The A.I.-derived OPs are likely to be better thought-out than many non-A.I. efforts. Banning A.I. is banning background research that will become built into the way we engage with each other.Joshs

    This is the reality. The tool is now ubiquitous. Every intellectual is going to have to factor it into their practice. Time to learn what that means.

    If you need to cheat to pass your exams or publish your research, then in the end it is you who suffers. But if AI can be used in a way that actually expands your brain, then that ought to be encouraged.

    PF seems a suitably low stakes place to evolve some social norms.
  • praxis
    7k
    I am not going to outline all the possible dangers of AI—people can educate themselves about that by undertaking a search in whatever search engine they use or YouTube or whatever.Janus

    I am not going to outline all the possible dangers of people educating themselves by undertaking a search in whatever search engine they use or YouTube or whatever.
  • Outlander
    2.8k
    I am not going to outline all the possible dangers of people educating themselves by undertaking a search in whatever search engine they use or YouTube or whatever.praxis

    I think his concern is, not to be that dramatic, perhaps not quite a SkyNet movie takeover scenario (which theoretically could happen), but definitely something along that line of thinking. It’s funny: you’re good at chess, but when it comes to other things, well, let’s just say your humanity shines through. :smile:

    And yes, that is a formal challenge for rematch.

    For example, as an actual experienced computer programmer, I know that what ultimately befalls an object can come down to a simple 1 or 0. Video game programmers know this. They often joke with one another and run "real world" scenarios where they rapidly flip the enemy AI and friendly AI back and forth and watch the world they created descend into chaos.

    This is possible in a world where military and police rely on AI drones with lethal capability. All it takes is a single 1 turned to 0 or vice versa, and all of a sudden the drones sent to attack person A, viewed as 'Criminal', instead view every citizen as person A's accomplice and therefore 'Criminal' too.

    It's not hard to do, really.
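
    To make the bit-flip point concrete, here is a minimal, purely hypothetical sketch in Python. The Citizen class, the flagged_hostile field and the drone_decision function are invented for illustration only and stand in for no real military or policing system; this is just the 1-or-0 idea above, made literal.

    from dataclasses import dataclass

    @dataclass
    class Citizen:
        name: str
        flagged_hostile: int  # 1 = classified 'Criminal', 0 = ordinary citizen

    def drone_decision(citizen: Citizen) -> str:
        # The entire engage/ignore decision hangs on a single bit.
        return "ENGAGE" if citizen.flagged_hostile == 1 else "IGNORE"

    population = [Citizen("Person A", 1), Citizen("Person B", 0)]
    print([drone_decision(c) for c in population])  # ['ENGAGE', 'IGNORE']

    # Flip a single 0 to 1, as in the scenario described above, and everyone
    # is suddenly treated as an accomplice.
    for c in population:
        c.flagged_hostile = 1
    print([drone_decision(c) for c in population])  # ['ENGAGE', 'ENGAGE']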

    A record 1,862 data breaches occurred in the US in 2021.

    In an AI-centric world, with drones and bombs in the equation, that’s a possible 1,862 massacres of tens, thousands, maybe millions of people. Perhaps even triggered by some little kid who got lucky.

    Now, is that the future you want? Because it’s what you’d get, were it not for folk you’ve yet to meet or at least understand.
  • praxis
    7k


    I think the point is that you can’t let your guard down anywhere, and you never could.

    I read Nexus last year, btw. What I recall seems like a mild forecast compared to today’s predictions.
  • Jamal
    11k


    I sympathize. But you're proposing something and instead of telling us why it's a good proposal you're saying "if you want reasons, go and find out yourself." This is not persuasive.

    And it isn't clear precisely what you are proposing. What does it mean to ban the use of LLMs? If you mean the use of them to generate the content of your posts, that's already banned — although it's not always possible to detect LLM-generated text, and it will become increasingly impossible. If you mean using them to research or proof-read your posts, that's impossible to ban, not to mention misguided.

    The reality, which many members are not aware of, is that a great many posts on TPF have been written in full or in part by LLMs, even those posted by long-term members known for their writing skills and knowledge. I've been able to detect some of them because I know what ChatGPT's default style looks like (annoyingly, it uses a lot of em dashes, like I do myself). But it's trivially easy to make an LLM's generated output undetectable, by asking it to alter its style. So although I still want to enforce the ban on LLM-generated text, a lot of it will slip under the radar.

    And there are cases where a fully LLM-generated post is acceptable: translation comes to mind, for those whose first language is not English. Maybe that's the only acceptable case, I'm not sure. But then it becomes fuzzy how to define "fully LLM-generated": translations and grammar-corrected output, it could be argued, are not fully generated by the LLMs, whereas the text they produce based on a prompt is — but is there a clear line?

    Anyway, the following comments, though totally understandable, are significantly outdated:

    I guess I’m naïve or maybe just not very perceptive, but I haven’t recognized any posts definitely written by AI. There have always been enough overblown and oratorical but poorly thought out OPs and posts here on the forum even without AI that I don’t know how easy it is to tell. Perhaps it would be helpful if people called them out when they see them.T Clark

    Interesting, I haven’t noticed particularly. But I avoid reading lengthy and didactic posts which are often poorly written. The AI stuff I’ve seen often seems peculiarly worded and difficult to read.Tom Storm

    LLMs now routinely write clear and flowing prose.

    people can educate themselves about that by undertaking a search in whatever search engine they useJanus

    Where they will now get an AI-generated answer, which will be infinitely better than the enshittified results that Google was giving us until quite recently.

    This is the reality:

    The A.I.-derived OPs are likely to be better thought-out than many non-A.I. efforts. Banning A.I. is banning background research that will become built into the way we engage with each other. Think of it as walking around with a host of sages constantly whispering purported words of wisdom into your ear, and it is up to you to sort out what is valuable and what isn’t, what is true and what is false.Joshs

    The tool is now ubiquitous. Every intellectual is going to have to factor it into their practice. Time to learn what that means.

    If you need to cheat to pass your exams or publish your research, then in the end it is you who suffers. But if AI can be used in a way that actually expands your brain, then that ought to be encouraged.

    PF seems a suitably low stakes place to evolve some social norms.
    apokrisis

    :up:

    It cannot be avoided, and it has great potential both for benefit and for harm. We need to reduce the harm by discussing and formulating good practice (and then producing a dedicated guide to the use of AI in the Help section).
  • Jamal
    11k
    Part of that discussion has to be putting our cards on the table, and refusing to be ashamed of it. It's not a matter of using AI vs. not using AI; it's how we use it.

    Currently, its use is frowned upon and seen as cheating — like using a calculator to do arithmetic — such that most people will be reluctant to admit how much they use it. It's like telling the doctor how much you drink: you don't completely deny drinking, you just under-report it.

    Take me for instance. Although I use LLMs quite a lot, for everyday tasks or research, in the context of philosophical discussion or creative writing I always say I never directly cut and paste what they give me. But sometimes they come up with a word or phrase that is too good to refuse. So — was I lying?

    But using that word or phrase is surely no worse than using a thesaurus. Which leads me to think that it probably ought to be seen as, and used as, a multitool.
  • Tom Storm
    10.3k
    LLMs now routinely write clear and flowing prose.Jamal

    Interesting. I wonder then why the job applications sent to me are all so terrible, full of clunky locutions that few people would actually use. Applicants need to edit the stuff they rip off so that it actually works as a coherent job application.
  • Jamal
    11k


    I don't know what's going on there. It could just be bad, lazy, or inconsistent use of LLMs. If there are any applications which are not terrible, they might be written by people who are better at using them.
  • bongo fury
    1.8k
    I'm mystified that percipient philosophers can't see a gaping difference between (A) using a search engine to produce a list of texts containing a given string (well done, us and it) and on tother hand (B) swallowing the insulting fantasy of interaction with an intelligent oracle.

    That is, I can't understand or sympathise with them admitting to reading the AI summary, instead of ignoring that insulting click-bait and searching immediately among the genuinely authored texts.

    And if you admit to no longer constructing all the sentences you post to me, then I'm disappointed. I'm looking for a better relationship.
  • bongo fury
    1.8k
    Take me for instance. Although I use LLMs quite a lot, for everyday tasks or research, in the context of philosophical discussion or creative writing I always say I never directly cut and paste what they give me. But sometimes they come up with a word or phrase that is too good to refuse. So — was I lying?Jamal

    Yes.
  • Jamal
    11k
    I'm mystified that percipient philosophers can't see a gaping difference between (A) using a search engine to produce a list of texts containing a given string (well done, us and it) and on tother hand (B) swallowing the insulting fantasy of interaction with an intelligent oracle.bongo fury

    This is obviously a false dichotomy. One can use LLMs without committing to the latter.

    That is, I can't understand or sympathise with them admitting to reading the AI summary, instead of ignoring that insulting click-bait and searching immediately among the genuinely authored texts.bongo fury

    This is quite amusing. The regular Google results have been garbage for years, and it was partly this fact that led to the tendency getting its own name: enshittification. And search engines have never simply produced "a list of texts containing a given string". To think that the AI-overview is clickbait, but the actual clickbait, i.e., the sponsored and gamified results that actually try to get you to click are somehow not — well, you've got it completely the wrong way round.

    Yesbongo fury

    Is using a thesaurus to write a novel and saying you wrote it lying?
  • bongo fury
    1.8k
    The regular Google results have been garbage for years,Jamal

    This is obviously missing the point. We knew the order of listing was biased and constantly under attack from bots. It was our job to filter and find actually authored texts, and attribute their epistemic value or lack of it to the genuinely accountable authors.

    You honestly now want to defer those epistemic judgements to a bot? How would that not be swallowing the fantasy? (Of an intelligent oracle.)

    Is using a thesaurus to write a novel and saying you wrote it lying?Jamal

    No. Well done you. Getting the thesaurus to suggest whole phrases and sentences is obviously plagiarism. The gaping difference denied, again.
  • Jamal
    11k
    This is obviously missing the point. We knew the order of listing was biased and constantly under attack from bots. It was our job to filter and find actually authored texts, and attribute their epistemic value or lack of it to the genuinely accountable authors.bongo fury

    And we have to do something similar with LLMs. So it's a "no" to this:

    You honestly now want to defer those epistemic judgements to a bot?bongo fury

    As for the thesaurus issue...

    No. Well done you. Getting the thesaurus to suggest whole phrases and sentences is obviously plagiarism. The gaping difference denied, again.bongo fury

    I'm not denying the difference between a word and a phrase. I'm just wondering where the line is in your mind. One word is ok, but a two word phrase isn't? Three, maybe?

    If you're here just to rant, I guess that's ok, but I won't be carrying on a discussion with someone so rude and confrontational. There really is no call for it. What I want to do — now that @T Clark and @apokrisis have clarified this for me — is develop a set of best practices. Since the technology won't go away, your complaints are beside the point from my point of view as someone who wants to work out how best to use it.
  • Pierre-Normand
    2.7k
    Getting the thesaurus to suggest whole phrases and sentences is obviously plagiarism.bongo fury

    I would never dare use a phrase that I first read in a thesaurus, myself. I'd be much too worried that the author of the thesaurus might sue me for copyright infringement.
  • bongo fury
    1.8k


    I would think handing your half-formed prose to a bot for it to improve it is plagiarism, regardless of the number of words changed or inserted. It's a different thing from you deliberately searching for a synonym. No?
  • Count Timothy von Icarus
    4.2k


    annoyingly, it uses a lot of em dashes, like I do myselfJamal

    For some reason it always puts spaces between em-dashes, which is a stylistic faux pas outside a few style guides (basically just AP), and so this is one way to tell between usages—also, it virtually never uses an em-dash for a single trailing clause, instead always bracketing a clause in them.

    Anyhow, it seems to me that the main risks of them are:

    Sycophantic confirmation bias; anything said to GPT in particular is "genius;"
    Hallucination of sources, which is bad for any information environment when they get repeated.

    It's gotten pretty good at citing popular authors, and so it's good for something like looking up where Socrates says something if you already know basically what he says, etc., although it tends to hallucinate bizarre paraphrase translations even of famous works. And it's not bad at giving lists of relatively important books on fairly specific topics. However, ask it for sources related to a narrow thesis and it routinely just hallucinates the thesis into tangentially related articles, sometimes summarizing them as saying the opposite of what they actually say, which is ironic considering they ripped Anna's Archive and fed it into these things, so the original papers are in there somewhere in many cases.
  • Jamal
    11k
    For some reason it always puts spaces between em-dashes, which is a stylistic faux pas outside a few style guides (basically just AP), and so this is one way to tell between usages—also, it virtually never uses an em-dash for a single trailing clause, instead always bracketing a clause in them.Count Timothy von Icarus

    This is useful information. I had it in my mind that it didn't use the spaces, so I started using spaces to distinguish myself. I guess I'll go back to spaceless em dashes. (But I think either way is ok, despite what style authorities might say.)

    But the fact that it never uses an em dash for a single trailing clause—this is very good to know.

    Anyhow, it seems to me that the main risks of them are:

    Sycophantic confirmation bias; anything said to GPT in particular is "genius;"
    Hallucination of sources, which is bad for any information environment when they get repeated.
    Count Timothy von Icarus

    :up:

    Yes, the hallucination is still quite bad.
  • apokrisis
    7.6k
    I would think handing your half-formed prose to a bot for it to improve it is plagiarism, regardless of the number of words changed or inserted. It's a different thing from you deliberately searching for a synonym.bongo fury

    For fun’s sake, I just gave Gemini a whirl. I don’t think there is much to worry about quite yet if this is how it performs. I asked it to make your point in a more rigorous and philosophical fashion…

    The Conceptual Boundary of Authorship

    The act of submitting half-formed prose to an autonomous processing system for "improvement" raises a profound ethical and philosophical question regarding the locus of authorship.

    I would posit that this practice constitutes an illicit appropriation of intellectual effort—a form of plagiarism—irrespective of the quantitative degree of lexical or syntactic transformation enacted by the machine. The core violation lies in the delegation of the substantive process of refinement and telos (purposeful development) of the text to an external agent without explicit, critical engagement.

    This is epistemologically distinct from the deliberate, conscious act of a human agent consulting a thesaurus to seek a more precise synonym. The latter remains an act of intentional, informed choice that preserves the continuous thread of human intellectual stewardship over the text's final form and meaning. The former, in contrast, risks dissolving the very boundary between personal expression and automated fabrication.
  • bongo fury
    1.8k
    Within 10 years, the vast majority of the internet will be AI generated---such is the logic of competitiveness and consumerism. We won't be.Baden

    I really hope. :grimace:

    Sorry for ranting.