• Fire Ologist
    1.7k
    Why don't we work with those?Ludwig V

    AI is a tool. Tools can be useful. I don’t think it should be banned.

    And regardless of what we do, and regardless of what we say and think about AI, it will be used to harm people. All things digital can now be faked so well; people are already great at lying - we really didn’t need to make the internet even more suspicious. But we have it now.

    So we should also watch out. And have conversations like this one.
  • Outlander
    2.8k
    Regarding the new policy, sometimes when I’ve written something that comes out clunky I run it through an AI for “clarity and flow” and it subtly rearranges what I’ve written. Is that a no-no now?praxis

    +1 for an excellent question. Naturally, @Jamal would have the final say.

    (Though I would presume the answer to be no, I'd be fairly confident in stating that the policy and rules will likely be selectively applied to members based on their standing and tenure here. I'd eat my hat [though perhaps I'd have to buy one first] if you were penalized for such, considering the noteworthiness of your contributions here, versus, say, someone who signed up a week or a month ago and seems to be keyboard-happy.)

    Additionally, assuming you're honest at chess, you're clearly no simpleton. Why would a skilled biker not scoff at the idea of using training wheels for his daily commute? It just doesn't seem necessary. Look at the mod Timothy's posts. They're very long. Perfectly coherent, not a single waste of space despite the volume. But if one really wanted to critique, sure, perhaps the same idea or message could be said with a bit less wording. But that would be a butchery of his expression, of who he is as a person, and of what he allows us the privilege to glean from his mind, and yes, perhaps very soul. It would be something that was once an honor turned into a mockery of humanity itself. A simpleton might look at his posts and call such "clunky", but certainly not you.

    Look at every great classic novel. If you really wanted to, you could turn every chapter or the entire book into a sentence or a paragraph without removing a bit of the "essence" or "point" the author intended or that the general readership considers as important. What kind of nightmare world would that be? Why not just look at a traffic light all day and be amazed by the simplistic changes of color? Or why not watch paint dry? Because that's what it all boils down to when you remove what you believe to be "clunkiness", what you think might be excess or something you'd otherwise wish to be perfected. We're human. We're alive. We make mistakes. We make imperfections. We ARE imperfect. And that's why we're unique and can never be replaced by automation. If I found out all AI "died" or became inert or was otherwise outlawed tomorrow, I'd laugh. If I found out something happened to you, I'd cry. And that's the truth. Because what you may consider something people don't want to read (and sure, most people are rather dull and probably wouldn't), people who actually have their head on right would enjoy nothing more.

    You don't need AI. Not one bit. Again, you have tenure. So I wouldn't worry. Though, it couldn't hurt to set the example and just do what everyone has to, at least right away while the new rule is literally hot off the presses. :smile:
  • praxis
    7k


    It’s basically laziness on my part for relying on AI to occasionally straighten out my jumbled thoughts and I’ll abide by whatever decree is imposed upon us.

    About chess, that actually shows how dimwitted and lazy I am. Given how many games I’ve played I should be much better than I am, and I’ve never really studied to improve.
  • Baden
    16.7k
    What does bother me a bit is how one can identify what is and isn't written by AIs. Or have you trained an AI to do that?Ludwig V

    There are plenty of online tools out there that already do that. Some are more reliable than others. Tip: Avoid sponsored results that give false positives to sell you something.
  • Outlander
    2.8k
    There are plenty of online tools out there that already do that. Some are more reliable than others. Tip: Avoid sponsored results that give false positives to sell you something.Baden

    I think the elephant in the room, or perhaps the question he was not able to form, would be:

    Aren't people raised by their parents? The apple doesn't fall far from the tree? Don't men consciously and subconsciously become influenced by the "father" figures in their lives, be they in the flesh, or perhaps in movies, music, books, and media in general? Don't we try to learn lessons (not to say be like, as to the grounds of cosplay or impersonation) but otherwise wish to be influenced by great philosophers and other folk we respect? We do.

    So what happens when people use AI more than they speak to other humans, and so end up speaking in the same patterns, with the same frequency of usage, as the AI they use? I do tend to purposely try to speak "extra clearly, professionally, concisely, and intelligently" online, significantly more than I would talking to anyone else in person, friend or stranger. Almost non-genuinely and "perfectly" (out of consideration and respect for the sake of readability, and of course interest to the person I'm replying to). Surely you can see how there is a concern for mistakes to be made? To err is human. And to use a tool made by humans, well, that just complicates things even further, does it not? :smile:
  • Ludwig V
    2.2k
    AI is a tool. Tools can be useful. I don’t think it should be banned.Fire Ologist
    I don't disagree. Actually, I don't think it is possible to prevent it being used. There's a lot of hype and over-enthusiasm around at the moment. I'm sure it will settle down eventually.

    So we should also watch out. And have conversations like this one.Fire Ologist
    Hopefully, people will get more reflective and more selective in how they deal with it.

    There are plenty of online tools out there that already do that. Some are more reliable than others.Baden
    I'm glad to hear that and that there are a number of them.
    If what has happened with pictures and videos is any guide, it’ll be a constant struggle. Fakers will work to outsmart the tools. New tools will be developed. There’s no magic bullet.
    Tip: Avoid sponsored results that give false positives to sell you somethingBaden
    Yes. It’s always a good idea to assume that you don’t get anything for nothing - and very little for sixpence, as they say in Yorkshire.
  • Leontiskos
    5.2k
    Anyway, here is one example: ask it to critique your argument. This is an exercise in humility and takes your thoughts out of the pugilistic mode and into the thoughtful, properly philosophical mode. It's a form of Socratization: stripping away the bullshit, re-orientating yourself towards the truth. Often it will find problems with your argument that can only be found when interpreting it charitably; on TPF this often doesn't happen, because people will score easy points and avoid applying the principle of charity at all costs, such that their criticisms amount to time-wasting pedantry.Jamal

    This is a good point.

    Another example: before LLMs it used to be a tedious job to put together an idea that required research, since the required sources might be diverse and difficult to find. The task of searching and cross-referencing was, I believe, not valuable in itself except from some misguided Protestant point of view. Now, an LLM can find and connect these sources, allowing you to get on with the task of developing the idea to see if it works.Jamal

    I don't think this is right. It separates the thinking of an idea from the having of an idea, which doesn't make much sense. If the research necessary to ground a thesis is too "tedious," then the thesis is not something one can put forth with integrity.

    But perhaps you are saying that we could use the LLM as a search engine, to see if others have interpreted a philosopher in the same way we are interpreting them?

    Part of the problem with the LLM is that it is private, not public. One's interaction history, prompting, etc., are not usually disclosed when appealing to the LLM as a source. The code is private in a much starker sense, even where the LLM is open source. Put differently, the LLM is a mediator that arguably has no place in person-to-person dialogue. If the LLM provides you with a good argument, then give that argument yourself, in your own words. If the LLM provides you with a good source, then read the source and make it your own before using it. The interlocutor needs your own sources and your own arguments, not your reliance on a private authority. Whatever parts of the LLM's mediation are publicly verifiable can be leveraged without use of the LLM (when dialoguing with an interlocutor). The only reason to appeal to the LLM itself would be in the case where publicly verifiable argumentation or evidence is unavailable, in which case one is appealing to the authority of the LLM qua LLM, which is both controversial and problematic. Thus a ban on LLMs need not be a ban on background, preparatory use of LLMs.
  • baker
    5.8k
    If the research necessary to ground a thesis is too "tedious," then the thesis is not something one can put forth with integrity.Leontiskos

    It all just goes to show once more how plebeified higher education has become.
  • Banno
    28.9k
    I've added the note: NO AI-WRITTEN CONTENT ALLOWED to the guidelines and I intend to start deleting AI written threads and posts and banning users who are clearly breaking the guidelines. If you want to stay here, stay human.Baden
    This is surely a rod for your own back, and the backs of the other mods. Apart from the most obvious cases, you can't tell. “AI-written” stops being a meaningful category as AI is blended into the way we operate online; the way we search, research, browse and read is permeated and augmented by AI.

    Better to focus Mod activity on quality and engagement rather than origin.
  • Metaphysician Undercover
    14.3k
    This is surely a rod for your own back, and the backs of the other mods. Apart from the most obvious cases, you can't tell. “AI-written” stops being a meaningful category as AI is blended into the way we operate online; the way we search, research, browse and read is permeated and augmented by AI.Banno

    I find it amazing that you seem to think that an AI which distinguishes between human and AI writing is impossible. I think that this is exactly the type of thing that an AI would be exceptionally good at.

    But what is this blending you are talking about? Copying any sentence word for word is plagiarism, and disqualifies the whole work. How do you propose that a human writer might blend with AI? Would that be an implant in the brain?
  • Jamal
    11k
    What does bother me a bit is how one can identify what is and isn't written by AIs. Or have you trained an AI to do that?Ludwig V

    There are plenty of online tools out there that already do that.Baden

    I think I agree with @Banno about this: such tools will never be able to catch everything, will produce false positives, and quickly become outdated anyway. It's easy, I think, to make an LLM's output pass as written by a human, with a prompt like this: "Write X but make it sound human. Vary the sentence structure, add a few conversational phrases, and some minor spelling and grammatical mistakes, to avoid AI detection."

    The only way detection will be reliable is if LLM watermarking is implemented, but so far it's not being implemented consistently and maybe never will be.
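
    For the curious, the core idea behind watermark detection is easy to sketch. Roughly, and with every detail below invented purely for illustration (this is not any provider's actual scheme, just a toy loosely after the published Kirchenbauer et al. proposal), the generator secretly biases each token choice toward a pseudo-random "green list" derived from the previous token, and a detector simply counts how often tokens land in that list:

    import hashlib

    # Toy sketch of "green list" watermark detection. The hashing, vocabulary
    # size, and ratio below are illustrative assumptions, not a real scheme.

    def green_fraction(token_ids, vocab_size=50_000, green_ratio=0.5):
        """Fraction of tokens landing in the 'green list' seeded by the previous token."""
        hits = 0
        for prev, tok in zip(token_ids, token_ids[1:]):
            # Derive a pseudo-random seed from the preceding token.
            seed = int.from_bytes(hashlib.sha256(str(prev).encode()).digest()[:4], "big")
            # Membership test for the seed-dependent green partition of the vocabulary.
            if (tok ^ seed) % vocab_size < green_ratio * vocab_size:
                hits += 1
        return hits / max(len(token_ids) - 1, 1)

    # A cooperating generator biases its sampling toward green tokens, so its
    # output scores well above green_ratio; ordinary human text hovers near it.

    The catch is that this only works if the generator cooperates: without the hidden bias, the statistic is indistinguishable from chance, which is exactly why after-the-fact detectors are so unreliable.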
  • Baden
    16.7k


    If someone wants to go to that trouble, sure. And we should make them do it rather than make it easy for them. There is also the possibility of comparing to past posts, but, ultimately, if a poster wants to fool us as a means to fooling themselves about their written capabilities, they can probably get away with that somehow. But perhaps the vanity of going through that process might be enlightening to them. And if the product is undetectable, our site will at least not look like an AI playground.

    I think, though, if we make the case for human writing here, fewer posters will think it's acceptable to break the rules in whatever manner. We should make the case and the rules strongly because we need to be assertive about who and what we are and not just roll over. We have nothing to lose by going in that direction, and I believe the posters with most integrity here will respect us for it.
  • unimportant
    108
    prominent members of this site have used it to make themselves look smarter than they really areJanus

    Can you show an example of this? Maybe that is why user Boethius seemed to reel off such long posts in short order, now that I think of it? :rofl:

    I really hate it too on the wider internet now, how seemingly overnight everyone and their mother are casually writing "I asked chatgpt xyz" and have fully adopted it into their Weltanschauung.

    I can't imagine how bad things are going to get in the coming years, given how quickly it has already gotten to this state. Maybe it will be like other rapid-rise cultural phenomena that reach saturation point fast, peter out, and get pushback/revulsion before long. The bubble effect.

    What really gets my goat is being on chat for an online store and, when they get tired of replying, being passed over to AI without being told. I quickly figured that out on livechat when pages' worth of responses were regurgitated in milliseconds. Over email it takes a little longer, but I've suspected it several times when there are no spelling errors and the sender is some generic, very friendly female name; usually you get curt, monosyllabic replies from real people, as they can't be bothered with you.
  • Metaphysician Undercover
    14.3k
    There is also the possibility of comparing to past posts...Baden

    That's the trick. Idiosyncrasies are proper to the individual, and everyone has them; we cannot escape that, and trying to escape them makes them more obvious. The AI, on the other hand, can write with a new random style each time, providing no indication of its identity, or that it is itself. Therefore the AI cannot be identified as the AI. However, the human being always has an identity and can always be identified. I learned that from the case of Ted Kaczynski (the Unabomber). So an adequately trained AI, reviewing files, would be able to quickly flag unidentifiability as a nonhuman trait.
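
    To make the comparing-to-past-posts idea concrete, here is a minimal sketch of classic stylometry of the sort used in authorship attribution: compare a new post's function-word frequencies against the author's history. The word list and threshold are illustrative assumptions, not a tested detector:

    from collections import Counter
    import math

    # Function words are a classic authorship signal: their frequencies are
    # hard for a writer to consciously suppress or fake for long.
    MARKERS = ["the", "of", "and", "to", "that", "which", "however", "therefore"]

    def profile(text):
        """Relative frequency of each marker word in the text."""
        counts = Counter(text.lower().split())
        total = sum(counts.values()) or 1
        return [counts[w] / total for w in MARKERS]

    def cosine(a, b):
        """Cosine similarity between two frequency vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    def consistent_with_history(past_posts, new_post, threshold=0.9):
        """Flag posts whose style profile drifts far from the author's history."""
        history = profile(" ".join(past_posts))
        return cosine(history, profile(new_post)) >= threshold

    The catch, as noted above, is that a motivated poster can prompt the machine to imitate their own profile, so style drift is a clue, never a proof.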
  • bongo fury
    1.8k
    “AI-written” stops being a meaningful category as AI is blended into the way we operate online; the way we search, research, browse and read is permeated and augmented by AI.Banno

    ... if and only if "plagiarised" stops being a meaningful category, for corresponding reasons?

    And yet, it seems entirely plausible to many of us that,

    I use it to research not write the results of my research. I also use books to research and don't plagiarise from them.Baden

    I conclude that you and @Jamal are unduly defeatist. (Or playing devil's advocate?) Which I had put down to a corrupting effect of engaging with chatbots at all, but am now at a loss to explain.
  • Harry Hindu
    5.8k
    That's the trick. Idiosyncrasies are proper to the individual, and everyone has them; we cannot escape that, and trying to escape them makes them more obvious. The AI, on the other hand, can write with a new random style each time, providing no indication of its identity, or that it is itself. Therefore the AI cannot be identified as the AI. However, the human being always has an identity and can always be identified. I learned that from the case of Ted Kaczynski (the Unabomber). So an adequately trained AI, reviewing files, would be able to quickly flag unidentifiability as a nonhuman trait.Metaphysician Undercover

    Not to mention you can ask AI for different ways of putting the same idea.

    Not to mention that people evolve in their writing style.

    Not to mention that re-wording someone else's idea does not make you the originator of that idea, but some on this forum imply that you would be.

    What if a non-English speaker runs their writing through a translator and posts it?

    It seems like this site will have to perform a balancing act between encouraging well-written posts and limiting the use of tools that allow writers to do just that.

    If the focus is more on the source rather than the idea, then this is no longer a philosophy forum but a publisher's warehouse.
  • Baden
    16.7k
    Regarding the new policy, sometimes when I’ve written something that comes out clunky I run it through an AI for “clarity and flow” and it subtly rearranges what I’ve written. Is that a no-no now?praxis

    My take on this---which I think is fairly consistent with @Jamal as we've just had an exchange in the mod forum---is, as I said there:

    "We allow proofreading in the guidelines. But we also more or less say if the proofreading moves too far into editing and then rewriting and therefore makes your text look AI generated, that's a risk you run. I would agree it's similar to grammarly in a way, but AI can sometimes take it too far. So, yes, it's not against the rules in itself, but I don't know why people can't just live with a bit of clunky writing. It will save us wondering about whether or not its AI gen'd and maintain their quirky indviduality."
  • Baden
    16.7k
    My take on this---which I think is fairly consistent with Jamal as we've just had an exchange in the mod forum---is, as I said there:Baden

    See above for example of clunky writing... :smile:
  • Baden
    16.7k
    I conclude that you and Jamal are unduly defeatist. (Or playing devil's advocate?) Which I had put down to a corrupting effect of engaging with chatbots at all, but am now at a loss to explain.bongo fury

    How so? Are you against all use of AI in every context? I mean that is definitely something we couldn't police even if we wanted to.
  • bongo fury
    1.8k
    Are you against all use of AI in every context?Baden

    No. Just the plagiarism.
  • Metaphysician Undercover
    14.3k
    It seems like this site will have to perform a balancing act between encouraging well-written posts and limiting the use of tools that allow writers to do just that.Harry Hindu

    The tool which allows writers to produce well written posts is knowledge. There is no need to place any restrictions on that tool.
  • Leontiskos
    5.2k
    We have nothing to lose by going in that direction, and I believe the posters with most integrity here will respect us for it.Baden

    Good stuff.

    And if the product is undetectable, our site will at least not look like an AI playground.Baden

    The "undetectability" argument turns back on itself in certain respects. Suppose AI-use is undetectable. Ex hypothesi, this means that AI-use is not detrimental, for if something cannot be detected then it cannot be detrimental (or at least it cannot be identified as the cause of any detriment). But this is absurd. The whole premise of a rule against AI-use is that excessive and inappropriate AI-use would be detrimental to the forum, and what is detrimental to the forum is obviously also detectable. There is an equivocation occurring between being able to detect every instance of AI-use, and AI-use being a detectable cause given certain undesirable effects.

    So I want to say that one should think about generating a philosophical culture that is averse to outsourcing thinking to AI, rather than merely thinking about a rule and its black-and-white enforcement. It shouldn't be too hard to generate that culture, given that it already exists in anyone remotely interested in philosophy. This is precisely why it is more important that the general membership would heed such a rule, whether or not the rule could be enforced with some measure of infallibility. The rule is not heeded for mere fear of being found out and punished, but rather because it is in accord with the whole ethos of philosophical inquiry. This is in accord with Kant's idea of respect for a law, rather than obeying out of fear or self-interest.

    In order to be effective, a rule need not be infallibly enforceable. No rule achieves such a thing, and the rules are very rarely enforced in that manner. It only needs to track and shape the cultural sense of TPF with respect to AI. Of course it goes far beyond AI. The fellow who is mindlessly beholden to some particular philosopher, and cannot handle objections that question his philosopher's presuppositions, does not receive much respect in philosophical circles, and such a fellow does not tend to prosper in pluralistic philosophical settings. So too with the fellow who constantly appeals to AI. The TPF culture already opposes and resists the outsourcing of one's thinking, simply in virtue of the fact that the TPF culture is a philosophical culture. The rule against outsourcing one's thinking to AI is obvious to philosophers, and those who aspire towards philosophy certainly have the wherewithal to come to understand the basis for such a rule. But I should stress that a key point here is to avoid a democratization of the guidelines. On a democratic vote we will sell our thinking to AI for a bowl of pottage. The moderators and owners need to reserve this decision for themselves, and for this reason it seems fraught to have an AI write up a democratic set of guidelines, where everyone's input is equally weighed (or else weighed in virtue of their post-count).
  • Leontiskos
    5.2k


    ...a similar argument could be given from a more analytic perspective, although I realize it is a bit hackneyed. It is as follows:

    --

    The communal danger from AI lies in the possibility that the community come to outsource its thinking as a matter of course, constantly appealing to the authority of AI instead of giving organic arguments. This danger is arguably epistemic, in the sense that someone who is interacting with an argument will be doing philosophy as long as they do not know that they are interacting with AI. For example, if Ben is using AI to write his posts and Morgan does not know this, then when Morgan engages Ben's posts he will be doing philosophy. He will be—at least to his knowledge—engaging in human-to-human philosophical dialogue. Ben hurts only himself, and Morgan is (mostly) unaffected.

    --

    There are subtle ways in which this argument fails, but it does point up the manner in which a rule need not "catch" every infraction. Ben can lie about his posts all he likes, and Morgan will not be harmed in any serious way. Indeed it is salutary that Ben hide his LLM-use, both for Morgan and the community, but also for Ben.
  • hypericin
    1.9k
    This danger is arguably epistemic, in the sense that someone who is interacting with an argument will be doing philosophy as long as they do not know that they are interacting with AI.Leontiskos

    Why is interacting with an AI not "doing philosophy"?
  • Leontiskos
    5.2k
    - The context here is a philosophy forum where humans interact with other humans. The premise of this whole issue is that on a human philosophy forum you interact with humans. If you do not accept that premise, then you are interested in a much broader discussion.
  • apokrisis
    7.7k
    The context here is a philosophy forum where humans interact with other humans. The premise of this whole issue is that on a human philosophy forum you interact with humans.Leontiskos

    I agree in spirit. But let's be practical.

    A blanket ban on LLM-generated OPs and entire posts is a no-brainer.

    It would be wonderful to delete posts where an LLM has generated the counter-argument, even if this has been disguised by the poster. But detecting this is subjective.

    With frequent posters, it is pretty obvious when they suddenly start generating slabs of text above their usual pay grade. This is bad, as they aren't doing any thinking themselves and so aren't learning, only point-scoring or being lazy. But if the argument is good, you can still just respond. And if it annoys, you can just ignore it or show the finger.

    I think that it should be fine to quote LLMs just as you would quote any other source. If you make an argument and some factual, technical or historical point comes up, why not just cite a reasonably impersonal opinion on the matter? If the source is clear, others can call you out on your use of it.

    @Baden's stance on being King Canute and holding back the tide is both noble and over the top. Banning OPs, punishing those who publish text they didn't add real thought to, and keeping LLM use as transparent as possible, would be enough to preserve the human element.

    Don't you feel that being black and white is usually counter-productive in human interaction, so shouldn't a philosophy forum be up to a nuanced approach?