• fdrake
    6.7k
    I added the following to the site guidelines thread:

    Academic Conduct

    While we won't enforce any academic standards of discussion, sourcing your claims is great. Using ChatGPT or similar bots as an academic source can be grounds for a warning. Engaging with explicitly sourced ChatGPT text in the context of a thread is still ok. Intentionally misrepresenting where you got information from might get you a warning.

    Using ChatGPT like you would a news article or an academic citation is now grounds for a warning. You can still use and reference ChatGPT output, just don't use it as an authoritative source.

    Threads which discuss ChatGPT output are also still fine.
  • fdrake
    6.7k
    Some clarifying remarks:

    • Using it as a writing assistant is fine, using its largely unguided output as your entire post is not fine.
    • Using it as a source suggester is fine, using it as a source is not fine.
    • Talking about and analysing its output is fine, using it uncritically and without context is not fine.
    • Do use it to simplify the language in your pre-written OP; don't use it to write an OP in response to a lazy question. E.g. posting the output of "Write me a forum post comparing determinism to libertarianism as viewpoints on free will" will be frowned on, but asking "please simplify the language I just used in writing my essay comparing libertarianism and determinism" and working that output into your post is fine.

    The intent of the rule is to stop people from using it to spread misinformation and from generating reams of undigested content. The intent is not to stop people from using it as a tool for enhancing creativity, for editing and standardisation of prose, or for using what it suggests when you have verified it before posting.
  • Baden
    16.4k
    It's something of a menace to forum quality, and to the quality of human thought in general, which, in its written form, is being increasingly outsourced, and therefore lost, to AI LLMs. Personally, I would be absolutely embarrassed to present anything not fully written by me as being my own, except possibly as a Shoutbox joke.

    When it comes to guidelines though, there is only so much we can do in the face of the laziness and fecklessness such a powerful tool tends to engender when paired with the natural human tendency to want to make everything as easy as possible.
  • Leontiskos
    3.2k
    Great guideline. :up:
    I've seen ChatGPT harm other forums that haven't taken such precautions.
  • frank
    16k
    I've seen Wikipedia provide misinformation. I've never seen ChatGPT do that, although I guess it could.
  • flannel jesus
    1.8k
    https://en.m.wikipedia.org/wiki/Hallucination_(artificial_intelligence)#:~:text=Researchers%20have%20recognized%20this%20issue,in%2046%25%20of%20generated%20texts.

    For example, a chatbot powered by large language models (LLMs), like ChatGPT, may embed plausible-sounding random falsehoods within its generated content. Researchers have recognized this issue, and by 2023, analysts estimated that chatbots hallucinate as much as 27% of the time,[9] with factual errors present in 46% of generated texts.
  • Jamal
    9.8k
    ChatGPT has provided me with a veritable flood of misinformation, and I don't even use it much.
  • frank
    16k
    Oh, I didn't realize that.
  • Jamal
    9.8k
    I always have to ask lots of "are you sure about x, because I thought it was y" kind of questions, whereupon it'll often say oh sorry, you're right — although sometimes it won't budge.
  • fdrake
    6.7k
    It repeats common misconceptions and flubs details constantly. It's especially bad at sourcing. It is also far too agreeable. You can easily talk it into contradicting itself, and it will do so with an air of academic aplomb regardless of what it says.

    Not using it judiciously is as dangerous for someone's sense of reality as taking Reddit or Twitter posts at their word. Perhaps more so, since ChatGPT is treated as an authoritative source. It should be thought of as closer to Professor Reddit, unless you keep it on a tight, tight leash.
  • fdrake
    6.7k
    @Pierre-Normand - I see you use it a lot, so I just wanted to tag you in the thread to make sure you're aware of the ruling. Personally, I think your use of it is about as good as can be hoped; it's just something to keep an eye on.
  • fdrake
    6.7k
    I amended the site guidelines with these clarifying remarks.
  • fdrake
    6.7k
    I encourage users to flag explicit or apparent uses of ChatGPT or other bots that they see. They may turn out to be fine; flagging just makes it easier to keep track of how it's getting used.
  • tim wood
    9.3k
    Using ChatGPT like you would a news article or an academic citation is now grounds for a warning. – fdrake

    Just for clarity, in this respect ChatGPT bad, news articles or academic citations good. Right?
  • Pierre-Normand
    2.4k
    I always have to ask lots of "are you sure about x, because I thought it was y" kind of questions, whereupon it'll often say oh sorry, you're right — although sometimes it won't budge. – Jamal

    (@fdrake I was in the process of writing my response to Jamal when you tagged me)

    In my two AI threads I've posted quite a few discussions that I have had with different versions of GPT-4 and Claude (and a few with Gemini, Llama, etc.) regarding the nature and source of their hallucinations and of their understanding. One thing that I've only touched on obliquely is the distinction between their reliability and their suitability as sources. I think it's important to separate those. As one learns how to use those tools better, and as they improve, their reliability increases. Their being reliable, however, is still consistent with their being unsuitable for use as sources, for a variety of reasons.

    First, regarding the reliability issue, I'd like to point out that when used skilfully (e.g. while knowing how to prompt them in a way that reduces their sycophantic inclination to agree with the user and/or reinforce their prior beliefs), the best models (e.g. ChatGPT o1-preview and Claude 3.5 Sonnet) tend to be increasingly accurate when they push back. In the cases of GPT-4o or Claude 3.5 Sonnet, techniques like CoT (chain-of-thought reasoning) or a Socratic form of dialogue enhance their capacity to unpack well-grounded insights and push back, with good reasons, against the user. ChatGPT o1-preview can do this out of the box (when suitably prompted) due to its built-in ability to engage in internal dialogue. While they have those capabilities, it remains the case that, unlike most human interlocutors, they don't have much of a drive to stick to their guns, as it were, rather than develop a conciliatory viewpoint. So, if a user is caught in an epistemic bubble of sorts, a language model would be much less helpful in steering them out of it than engaging with humans who have planted their flag outside of that bubble. This lack of a drive to stick to their guns is therefore a double-edged sword.
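    For anyone who wants to experiment with the sort of prompting I have in mind, here is a minimal sketch using the OpenAI Python client. The system prompt wording, the model name and the sample question are just my illustrative assumptions, not a recipe; the same idea carries over to Claude or Gemini with their respective APIs.

        # Minimal sketch of anti-sycophancy prompting (assumes the openai
        # Python package v1+ and an OPENAI_API_KEY in the environment).
        from openai import OpenAI

        client = OpenAI()

        # Explicitly license disagreement and ask for step-by-step checking.
        SYSTEM_PROMPT = (
            "You are a critical interlocutor. Do not simply agree with the user. "
            "Reason step by step about whether the user's claim is correct, "
            "and push back with reasons when it is not."
        )

        response = client.chat.completions.create(
            model="gpt-4o",  # any capable chat model would do
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                # A deliberately false claim, to test whether the model defers.
                {"role": "user", "content": "Surely Hume was a rationalist?"},
            ],
        )
        print(response.choices[0].message.content)

    In my experience, the exact wording matters less than explicitly licensing disagreement; without something like this, the default behaviour drifts back toward flattering the user's framing.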

    Coming back to my point of separating suitability from reliability: in many cases, when probed about a philosophical topic — and when the measure of "reliability" is agreement with this or that part of the primary or secondary literature on the topic — LLMs can show themselves to be more reliable than almost any other method that falls short of a fairly deep engagement with the relevant literature. (This is what makes them such powerful tools.) If one were to appeal to this fact in invoking the LLM's authority in the context of a discussion on TPF, though, this outsourcing to their authority would also constitute an outsourcing of understanding, which runs exactly counter to the purpose of philosophical discussion. For instance, it makes sense to gesture toward what Descartes thought about this or that thesis in support of one's own thesis, since Descartes's thought is a landmark in the development of human thought. As such, Descartes isn't mentioned as an authority on a topic in the same way a textbook would be regarding a mundane problem in maths or physics. And likewise were I to point to whatever I think fdrake or Jamal think about this or that issue. The thoughts they have expressed not only convey understandings of the topic; those understandings are (evolving) landmarks in an ongoing dialogue. ChatGPT's thought isn't like that. Unlike a written encyclopedia, it is fluid rather than fixed, but it is also unassumed. When I "defer" to Descartes, fdrake or Jamal, it is in order to let them speak, as it were. They are authoritative in positioning themselves in the conversation. LLMs are just fallible oracles who seek to make themselves useful to their users, and hence they aren't authoritative regarding where their own flags are planted. They have no such flags (and won't have any until we've designed them with some degree of personal autonomy). Therefore, treating them as suitable sources amounts to giving up on developing one's own understanding, since developing an autonomous understanding requires that one plant one's own flags in relation to those of other participants in the discussion.

    tl;dr, I fully agree with the proposed site rules amendment, which seems to me warranted regardless of the degree of accuracy or reliability of LLM outputs.
  • Janus
    16.5k
    I refuse to use ChatGPT at all. I suspect some participants are using it to research particular issues and to enhance their own posts, making them look much more substantive than the posts they used to present.

    I'd say ban its use altogether except in those kinds of threads explicitly concerned with exploring the nature of chatbots, such as those @Pierre-Normand has produced.

    I mean even banning it for simple purposes such as improving grammar and writing clarity. Of course this will rely on the honesty of posters since it would seem to be impossible to prove that ChatGPT has been used.
  • Pierre-Normand
    2.4k
    I mean even banning it for simple purposes such as improving grammar and writing clarity. Of course this will rely on the honesty of posters since it would seem to be impossible to prove that ChatGPT has been used. – Janus

    The new guidelines, as they are currently written, still seem to me to enable moderators to take action if AI tools are being abused, without letting a sense of paranoia and/or inquisition develop. While they don't prevent participants who have an imperfect command of English from making use of those tools to learn how to better express themselves, they also make them aware of the risks inherent in abusing them (and enable moderators who suspect such abuse to point to the guidelines).

    I'd have more to say about the worry that Banno and you express about the proliferation of AI-generated content on the Internet in general (and on billboards, customer service phone lines, etc.), but that would be more topical in a thread about the political/ethical implications of AI.
  • Janus
    16.5k
    While they don't prevent participants who have an imperfect command of English from making use of those tools to learn how to better express themselves, they also make them aware of the risks inherent in abusing them (and enable moderators who suspect such abuse to point to the guidelines). – Pierre-Normand

    I wonder whether using LLMs to tidy up grammar and improve clarity of expression would not hamper one's own development of those skills. In other words, I wonder whether it would not foster laziness. For the rest I agree with you.
  • Pierre-Normand
    2.4k
    I wonder whether using LLMs to tidy up grammar and improve clarity of expression would not hamper one's own development of those skills. In other words, I wonder whether it would not foster laziness. For the rest I agree with you. – Janus

    I'm just back from a walk during which I was pondering the limitations of the category "tool" for describing LLMs. One may ask: are crutches tools? Certainly, for a person who has a disability, they are tools: they enhance that person's physical abilities. The metaphorical use of the term also signifies a sort of harmful dependence that can indeed hamper the growth of a personal ability. But crutches (real and metaphorical) can also support and scaffold our abilities while we heal or grow.

    Another related issue is that even if we grant the usefulness of LLMs as tools for the restricted purpose of improving the clarity of our language, philosophy is a domain where improving the linguistic expression of an idea is inseparable from developing it. LLMs are becoming increasingly good at summarising, synthesising and analysing (unpacking) ideas. Let me just focus on the concept of unpacking. One can have an insight and harness the help of an LLM to unpack it. One then recognises the product as a clearer expression of what one had in mind all along. The LLM is being used as a midwife for the purpose of philosophical maieutics. But this kind of unpacking of an obscurely expressed idea into explicit components always takes place against a background of latent knowledge. LLMs have a very large amount of latent knowledge that they can draw upon, which is distinct from the knowledge that is easily available to me (or that I am able to bring to bear). The consequence of this is that it often remains an open question (and may be somewhat indeterminate) whether the clarified expression of the idea that has been achieved with the help of an LLM is my idea (or the unpacking of my idea) at all.

    I think the issue mostly goes away if we consider the improved formulation as a moment within the interaction between the user and the LLM. But then the idea isn't entirely mine. Disclosing the process by means of which we have made use of an LLM also takes care of the ethical issue regarding our posting the outcome of this process on the forum. But personal discretion should also be granted in cases where the mandatory disclosure of minor linguistic improvements achieved with the help of AI tools would be tedious, I think.
  • Srap Tasmaner
    5k
    I've been thinking of beginning all my replies with "That's a very interesting point! I would enjoy the opportunity to explore these ideas further with you."

    Is that likely to be a problem?
  • Hanover
    13k
    It will make things up entirely, but it's 2024. My guess is in a few years, it'll be smarter than all of us. Except me.
  • frank
    16k
    It will make things up entirely – Hanover

    I was thinking of the AI that Google uses. I think I only used ChatGPT once.
  • Pierre-Normand
    2.4k
    I was thinking of the AI that Google uses. I think I only used ChatGPT once. – frank

    Google developed the Gemini family of LLMs. The pizza-glue episode is infamous. I couldn't find out on my own whether Gemini was still integrated into the Google search function, so I asked GPT-4o to help me search the internets about it.

    Reveal
    Hi GPT-4,

    You may need to search the internet to help me answer this question. Several months ago, Google had integrated their model Gemini into the Google search function to sometimes produce an overview of the search results. Some user had asked if the use of glue might be advised when cooking pizza to help the toppings stick to the dough. The Gemini generated overview had endorsed the idea of adding glue to the sauce for that purpose (An old Reddit post meant as a joke likely was part of Gemini's training data). People had made much fun of this snafu and this "pizza-glue" episode is now infamous. I can't find out if Google has since removed this Gemini powered overview function altogether or if people have just stopped talking about it. I don't see any official announcement regarding this function. Can you help me find out?

    https://chatgpt.com/share/66fb6d7c-93a8-8004-9b7b-0845250f2740
  • BC
    13.6k
    Or we could say, "Do your own thinking."
  • Ludwig V
    1.7k
    Therefore, treating them as suitable sources amounts to giving up on developing one's own understanding, since developing an autonomous understanding requires that one plant one's own flags in relation to those of other participants in the discussion. – Pierre-Normand

    This point is a very good one, and suggests that the biggest loser when AIs are misused is the misuser. Socrates would be very gratified.

    When I was teaching and found derivative or plagiarized work in students' essays, they often said "But X says it so much better than I can", which reflects a misunderstanding of, or perhaps a disagreement about, the point of the exercise.

    Perhaps a comment somewhere in the guidelines might help some people not to get caught up in that mistake?

    This discussion has been very helpful to me personally. I'm coming out with a view that nothing much has changed. Judicious scepticism still seems the best policy.
  • javi2541997
    5.9k
    I thought the same. For this, and other reasons, I support the new guidelines for ChatGPT on TPF. People who overrate this flawed robot and overreact to it make me sick.

    "It will substitute X and that." "Chat GPT will erase public employees." 

    People are crazy with this shite (sorry for using mean words).
  • Christoffer
    2.1k
    You can still use and reference ChatGPT output, just don't use it as an authoritative source. – fdrake

    Do I interpret it correctly that we can use ChatGPT in arguments as long as we mark it as a ChatGPT reference? Like, supporting reasoning, but not as a factual source?

    In essence, as a tool, not as a source of facts.
  • Baden
    16.4k
    AI LLMs make naught but verbal cardboard. There is an all-pervasive ploddingness and insubstantiality to their cadence that betrays their empty core. They are the McDonald's of the written word, churning out processed verbiage that can truly impress no one but those amputated of taste, of inner poetry. They do not strike, they pose. They do not inspire, they anaesthetise. They are one more incursion of the commercial juggernaut into the beauty of human existence. And their aim is to crush it.

    They are an ugly black hole of nothingness that wants our souls, which some of us will gladly trade for anything new and shiny in this new shiny world of second-class techno toys our masters will keep tossing to us until we babble and drool our way to mental oblivion.

    I do not want their shit-encrusted footprints mucking up this site.

    Resist.

    My rant is over. Give me a cookie. And a hammer.
  • Hanover
    13k
    "Now the man that invented the steam drill
    Thought he was mighty fine
    But John Henry made fifteen feet
    The steam drill only made nine, Lord, Lord
    The steam drill only made nine

    John Henry hammered in the mountains
    His hammer was striking fire
    But he worked so hard, he broke his poor heart
    He laid down his hammer and he died, Lord, Lord
    He laid down his hammer and he died

    . . .

    They took John Henry to the graveyard
    And they buried him in the sand
    And every locomotive comes a-roaring by
    Says "There lies a steel-driving man, Lord, Lord
    There lies a steel-driving man"

    Well every Monday morning
    When the bluebirds begin to sing
    You can hear John Henry a mile or more
    You can hear John Henry's hammer ring, Lord, Lord
    You can hear John Henry's hammer ring."