• ProtagoranSocratist
    69
    guard against confabulation by asking for sources and checking them.Banno

yes, and over time you can kind of intuit the accuracy of what it's telling you based on the subject matter. For example, it's pretty much 100% accurate if you are asking it for common knowledge in popular subjects, but if the subject is more obscure, or relies more on analogue information, then it's much more likely to fail.
  • Leontiskos
    5.2k
    So you cannot see the difference between "A rule against AI use will not be heeded" and "A rule against AI use cannot be enforced". Ok.Banno

    We both know that the crux is not unenforceability. If an unenforceable rule is nevertheless expected to be heeded, then there is no argument against it. Your quibble is a red herring in relation to the steelman I've provided. :roll:

    Baden? Tell us what you think. Is my reply to you against the rules?Banno

    I would be interested, too. I haven't seen the rule enforced despite those like Banno often contravening it.

It is also worth noting how the pro-AI Banno simply takes the AI at its word, as a blind-faith authority. This is precisely what the end game is.
  • Banno
    28.9k
    Yep. It does a pretty good job of locating quotes and other supporting information, too.
  • ProtagoranSocratist
    69
for example (just sharing my experiences), it's excellent for verifying claims from random internet users (it immediately calls out their BS) and helping you write computer programs, but pretty awful at helping with musical creativity, and I've gotten mixed results with organizing wildlife information. With text, it's easy for it, but with photos, it still struggles a little.
  • Banno
    28.9k
It is also worth noting how the pro-AI Banno simply takes the AI at its word,Leontiskos

    No, I presented what the AI had to say, for critique. Go ahead and look at the papers it cites, and see if it represented them correctly. Let us know the result. Use the AI as a part of an ongoing conversation.

At stake here is the task set for our Mods. Do they spend time guessing whether a post is AI generated, or removing poor posts regardless of their provenance?
  • Leontiskos
    5.2k
    No, I presented what the AI had to say, for critique. Go ahead and look at the papers it cites...Banno

But you didn't read the papers it cited, and you concluded, "So yes, I overstated my case. You may be able to recognise posts as AI generated at a slightly better rate than choosing at random."

    If you were better at logic you would recognize your reasoning process: "The AI said it, so it must be true." This is the sort of mindless use of AI that will become common if your attempt to undermine the LLM rule succeeds.
  • Banno
    28.9k
    It's not too bad at providing support for game play, too.
  • ProtagoranSocratist
    69
    It does amazing things with anything related to computers...yet sometimes it makes poor guesses about what should work in a certain situation.
  • Banno
    28.9k
But you didn't read the papers it cited, and you concluded, "So yes, I overstated my case. You may be able to recognise posts as AI generated at a slightly better rate than choosing at random."Leontiskos

    It's noticeable that you have not presented any evidence, one way or the other.

If you think that what the AI said is wrong, then what you ought to do is present evidence, perhaps in the form of peer-reviewed articles showing that humans can reliably recognise AI-generated text.

    But that is not what you have chosen to do. Instead, you cast aspersions. This is another part of your modus operandi, in addition to your confabulation. You do not participate in a discussion about the topic, preferring instead to talk about the folk posting.

    It's tedious.
  • Leontiskos
    5.2k
    It's noticeable that you have not presented any evidence, one way or the other.

If you think that what the AI said is wrong, then what you ought to do is present evidence, perhaps in the form of peer-reviewed articles showing that humans can reliably recognise AI-generated text.

    But that is not what you have chosen to do. Instead, you cast aspersions.
    Banno

    I am pointing out that all you have done is appealed to the authority of AI, which is precisely something that most everyone recognizes as a danger (except for you!). Now you say that I am "casting aspersions" on the AI, or that I am engaging in ad hominem against the AI (!).

The AI has no rights. The whole point is that blind appeals to AI authority are unphilosophical and irresponsible. That's part of why the rule you are trying to undermine exists. That you have constantly engaged in these blind appeals could be shown rather easily, and it is no coincidence that the one who uses AI in these irresponsible ways is the one attempting to undermine the rule against AI.
  • Banno
    28.9k
    I am pointing out that all you have done is appealed to the authority of AI,Leontiskos
That's simply not so. I am not saying that because it is AI generated, it is authoritative. The material is offered here for critique. Baden asked who said that the detection of AI text was unreliable. I used an AI to provide examples in answer to his question.

    If you have some evidence that the citations provided by the AI are incorrect or misrepresent the case, then present it.

    The AI is not being appealed to as an authority, but being used in order to provide sources for further consideration.

    It is being used to promote the conversation, not to foreclose on it.
  • Leontiskos
    5.2k
    The AI is not being appealed to as an authorityBanno

But it is, as I've shown. You drew a conclusion based on the AI's response, and not based on any cited document the AI provided. Therefore you appealed to the AI as an authority. The plausibility of the conclusion could come from nowhere else than the AI, for the AI is the only thing you consulted.

    This goes back to what I've pointed out a number of times, namely that those who take the AI's content on faith are deceiving themselves when they do so, and are failing to see the way they are appealing to the AI as an authority.
  • Banno
    28.9k
    Again, you have not even attempted to show that the AI's summation was in any way inaccurate. Again, it is presented in support of a contention, and not to foreclose on the discussion. It is not an appeal to authority.

    I'll leave you to it, Leon. Cheers.
  • Leontiskos
    5.2k
    Again, you have not even attempted to show that the AI's summation was in any way inaccurate.Banno

    True, and that's because there is no such thing as an ad hominem fallacy against your AI authority. According to the TPF rules as I understand them, you are not allowed to present AI opinions as authoritative. The problem is that you have presented the AI opinion as authoritative, not that I have disregarded it as unauthoritative. One simply does not need some counterargument to oppose your appeal to AI. The appeal to AI is intrinsically impermissible. That you do not understand this underlines the confusion that AI is breeding.
  • Joshs
    6.5k
    There are primary sources, there are secondary sources, there are search engines, and then there is the LLM. Consulting a secondary source and consulting an LLM are not the same thing.

    It is worth noting that those who keep arguing in favor of LLMs seem to need to make use of falsehoods, and especially false equivalences.
    Leontiskos

    If one is using a.i. properly (and to me that’s the real issue here, not whether to use it at all), then the difference between consulting a secondary source and consulting an llm is the following:
    After locating a secondary source one merely jots down the reference and that’s the end of it. When one locates an argument from an llm that one finds valuable, one decides if the argument is something one can either defend in one’s own terms, or should be backed up with a direct quote from either a secondary or primary source. This can then be achieved with a quick prompt to the llm, and finding a reference for the quote. The fact that proper use of a.i. leaves one with the choice between incorporating arguments one can easily defend and elaborate on based on one’s own understanding of the subject, and connecting those arguments back to verifiable quotes, means that the danger of falsehood doesn’t come up at all.
  • Leontiskos
    5.2k
    the difference between consulting a secondary source and consulting an llm is the following:
    After locating a secondary source one merely jots down the reference and that’s the end of it.
    Joshs

    Well, they could read the secondary source. That's what I would usually mean when I talk about consulting a secondary source.

    When one locates an argument from an llm...Joshs

    Okay, but remember that many imbibe LLM content without thinking of it as "arguments," so you are only presenting a subclass here.

    When one locates an argument from an llm that one finds valuable, one decides if the argument is something one can either defend in one’s own terms, or should be backed up with a direct quote from either a secondary or primary source. This can then be achieved with a quick prompt to the llm, and finding a reference for the quote.Joshs

Right, and also reading the reference. If someone uses an LLM as a kind of search engine for primary or secondary sources, then there is no concern. If someone assents to the output of the LLM without consulting (i.e. reading) any of the human sources in question, or if one is relying on the LLM to summarize human sources accurately, then the problems in question do come up, and I think this is what often occurs.

    The fact that proper use of a.i. leaves one with the choice between incorporating arguments one can easily defend and elaborate on based on one’s own understanding of the subject, and connecting those arguments back to verifiable quotes, means that the danger of falsehood doesn’t come up at all.Joshs

    What do you mean, "The danger of falsehood doesn't come up at all?"

    It seems to me that you use LLMs more responsibly than most people, so there's that. But I think there is a very large temptation to slip from responsible use to irresponsible use. LLMs were built for quick answers and the outsourcing of research. I don't find it plausible that the available shortcuts will be left untrodden.

If the LLM is merely being used to find human sources, which are in turn consulted in their own right, then I have no more objection to an LLM than to a search engine. In an earlier post I give an argument to the effect that LLMs should not be directly used in philosophical dialogue (with other humans). I am wondering if you would disagree.
  • RogueAI
    3.4k
If the AI-using students are outcompeting the non-AI-using students (or if it's a "punishment", as you claim, to write a thesis entirely by yourself without AI help), isn't the implication that the AI is producing better work than the students at your university?

This goes back to Philosophim's point back on page 1: the argument is everything in philosophy. A good, sound argument produced by an AI should trump a bad argument produced by a human, right? A 40% AI-written thesis that's better than a 100% human-produced one should be preferable, right?
  • Jamal
    11k
    If you have a group of people argue over a topic and then you appoint a person to summarize the arguments and produce a working document that will be the basis for further discussion, you haven't given them a "calculator" job. You have given them the most important job of all. You have asked them to draft the committee document, which is almost certainly the most crucial point in the process. Yet you have re-construed this as "a calculator job to avoid tedium."Leontiskos

    Arguably the most important part of the job is very often the "calculator" task, the most tedious task.

    To say, "We encourage X," is to encourage X. It is not to say, "If you are doing Y, then we would encourage you to do Y in X manner." To say "allow" or "permit" instead of "encourage" would make a large difference.Leontiskos

    I may rewrite it to avoid misreadings like yours and bongo's. But I'll keep "encourage", since the point is to encourage some uses of LLMs over others. In "We encourage X," the X stands for "using LLMs as assistants for research, brainstorming, and editing," with the obvious emphasis on the "as". But it seems it wasn't obvious enough, so as I say, I might rewrite it or add a note at the top.
  • Baden
    16.7k
    What is the end/telos? Of a university? Of a philosophy forum?

    Universities have in some ways become engines for economic and technological progress. If that is the end of the university, and if AI is conducive to that end, then there is no reason to prevent students from using AI. In that case a large part of what it means to be "a good student" will be "a student who knows how to use AI well," and perhaps the economically-driven university is satisfied with that.

    But liberal education in the traditional sense is not a servant to the economy. It is liberal; free from such servility. It is meant to educate the human being qua human being, and philosophy has always been a central part of that.
    Leontiskos

Absolutely. I made this point to a colleague when discussing this issue. The university is not just the buildings and the abstract institution; it is the valuing of knowledge, and the process of fostering and advancing it. Similarly, here, we are not just about being efficient in getting words on a page; we are supposed to be developing ourselves and expressing ourselves. Reflectivity and expressivity, along with intuition and imagination, are at the heart of what we do here, and at least my notion of what it means to be human.

    And, while AIs can be a useful tool (like all technology, they are both a toxin and a cure), there is a point at which they become inimical to what TPF is and should be. The line for me is certainly crossed when posters begin to use them to directly write posts and particularly OPs, in full or in part. And this is something it is still currently possible to detect. The fact that it is more work for us mods is unfortunate. But I'm not for throwing in the towel.
  • Jamal
    11k
    The line for me is certainly crossed when posters begin to use them to directly write posts and particularly OPsBaden

For the record, I agree with this, but I think it has to be put in the context of a "How to use LLMs" guide, since there is significant ambiguity even in a statement like "you are prohibited from using AI to write a post on this forum".
  • Baden
    16.7k


    Agreed. :up:
  • Baden
    16.7k
    Baden? Tell us what you think. Is my reply to you against the rules? And should it be?Banno

    You were transparent about where you got the information, so it comes down to a question of credibility, and we can make our own minds up on that. If you had asked the AI to write your reply in full or in part and had not disclosed that, we would be in the area I want to immediately address.

    We may disagree about this issue, but I appreciate your character and personality, and that has always come through in your writing. How you internally process information from different sources when you are clear about your sources is not my main concern here. It is that I think we all ought to make sure we continue to be ourselves and produce our unique style of content. That is what makes this community diverse and worthwhile---not some product, but a process.
  • Banno
    28.9k
    Thank you.
  • bongo fury
    1.8k
    "We encourage X," the X stands for "using LLMs as assistants for research, brainstorming, and editing," with the obvious emphasis on the "as".Jamal

    For editing, as in spell and (in the limit) grammar checking, yes? Or, I guess not, as these aren't LLM tasks?

    @Jamal @Baden

    Regarding the new policy, sometimes when I’ve written something that comes out clunky I run it through an AI for “clarity and flow” and it subtly rearranges what I’ve written. Is that a no-no now?
    praxis

    @praxis Did you get clarification on this?
  • praxis
    7k


My take on this---which I think is fairly consistent with Jamal's, as we've just had an exchange in the mod forum---is, as I said there:

    "We allow proofreading in the guidelines. But we also more or less say if the proofreading moves too far into editing and then rewriting and therefore makes your text look AI generated, that's a risk you run. I would agree it's similar to grammarly in a way, but AI can sometimes take it too far. So, yes, it's not against the rules in itself, but I don't know why people can't just live with a bit of clunky writing. It will save us wondering about whether or not its AI gen'd and maintain their quirky indviduality."
    Baden
  • Leontiskos
    5.2k
    Arguably the most important part of the job is very often the "calculator" task, the most tedious task.Jamal

    The point is that you've outsourced the drafting of the guidelines to AI. Whether or not drafting forum guidelines is a tedious, sub-human task is a separate question.

    But I'll keep "encourage", since the point is to encourage some uses of LLMs over others. In "We encourage X," the X stands for "using LLMs as assistants for research, brainstorming, and editing," with the obvious emphasis on the "as".Jamal

    You are claiming that, "We encourage using LLMs as assistants for research, brainstorming, and editing," means, "If one wishes to use an LLM, we would encourage that they use the LLM in X way rather than in Y way." Do you understand that this is what you are claiming?

    It is very helpful when those who enforce the rules write the rules. When this does not happen, those who enforce the rules end up interpreting the rules contrary to their natural meaning.
  • Leontiskos
    5.2k
    If you had asked the AI to write your reply in full or in part and had not disclosed that, we would be in the area I want to immediately address.Baden

    So we are allowed to write entire posts that are purely AI-generated content, or to simply cite AI as evidence that something is true, so long as we are transparent that the content is AI-generated? Such that if someone gives an entire post that is nothing more than a quote from AI, nothing has been violated?

    How is this in line with the human-to-human interaction that the rule is supposed to create?
  • Leontiskos
    5.2k
    Reflectivity and expressivity, along with intuition and imagination are at the heart of what we do here, and at least my notion of what it means to be human.Baden

    I would agree. I would want to say that, for philosophy, thinking is an end in itself, and therefore cannot be outsourced as a means to some further end.

    And, while AIs can be a useful tool (like all technology, they are both a toxin and a cure), there is a point at which they become inimical to what TPF is and should be. The line for me is certainly crossed when posters begin to use them to directly write posts and particularly OPs, in full or in part. And this is something it is still currently possible to detect. The fact that it is more work for us mods is unfortunate. But I'm not for throwing in the towel.Baden

    I'm encouraged that you're willing to put in the work.

    As above, I don't see how the line can be drawn in such a way that mere appeals to AI authority—whether an implicit appeal as found in a post with nothing more than a quoted AI response, or an explicit appeal where one "argues" their position by mere reference to AI output—are not crossing the line. If one can cite AI as an authority that speaks for itself and requires no human comment or human conveyance, then it's not clear why the AI can't speak for itself tout court.

We could envision a kind of limit case where someone queries AI and then studies the output extensively. They "make it their own" by agreeing with the arguments and the language to such an extent that they are committed to argue the exact points and words as their own points and words. They post the same words to TPF, which they have "baptized" as their own and are willing to defend in a fully human manner. Supposing for the sake of argument that such a thing would be formally permissible (even if, materially, it would be sanctioned or flagged), what then would be the difference when someone posts AI output to justify their claims? ...And let us suppose that in both cases the AI-sourcing is transparent.

    If one wants members to think in a manner that goes beyond AI regurgitation, then it would seem that quote-regurgitations of AI fall into the same category as first-person regurgitations of AI. Contrariwise, if I love Alasdair MacIntyre, imbibe his work, quote him, and begin to sound like him myself, there is no problem. There is no problem because MacIntyre is a human, and thus the thinking being emulated or even regurgitated is human thinking. Yet if someone imbibes AI, quotes it constantly, and begins to sound themselves like AI, in this case the "thinking" being emulated or regurgitated is non-human thinking. If I quote MacIntyre and appeal to his authority, I am appealing to the authority of a thinking human. When Banno quotes AI and appeals to its authority, he is appealing to the authority of a non-thinking language-piecing algorithm.

The laissez-faire approach to sourcing leads to camps, such as the camp of people who take Wittgenstein as an authority and accept arguments from the authority of Wittgenstein, and those who don't. The laissez-faire approach to AI sourcing will lead to the same thing, where there will be groups of people who simply quote AI back and forth to each other in the same way that Wittgensteinians quote Wittgenstein back and forth to each other, and on the other hand those who do not accept such sources as authorities. One difference is that Wittgenstein and MacIntyre are humans whereas AI is not. Another difference is that reading and exegeting Wittgenstein requires philosophical effort and exertion, whereas LLMs were basically created to avoid that sort of effort and exertion. Hence there will be a much greater impetus to lean on LLMs than to lean on Wittgenstein.

    Isn't the problem that of letting LLMs do our thinking for us, whether or not we are giving the LLM credit for doing our thinking? If so, then it doesn't matter whether we provide the proper citation to the LLM source.* What matters is that we are letting the LLM do our thinking for us. "It's true because the LLM said so, and I have no need to read the LLM's sources or understand the underlying evidence."

    (Cf. The LLM is a private authority, not a public authority, and therefore arguments from authority based on LLMs are invalid arguments from authority.)


    * And in this case it is equally true that the "plagiarism" argument is separate and lesser, and should not be conflated with the deeper issue of outsourcing thinking. One need not plagiarize in order to outsource their thinking.
  • Banno
    28.9k
    This is a thread about ways to make use of AI in doing philosophy. There is another thread about banning AI altogether.
  • Banno
    28.9k
By way of getting the thread back on topic, this is from Claude, prompted by the URLs for all five pages and the following:
    This is a thread about ways to make use of AI in doing philosophy. Make a list of the suggestions in that thread that would supplement or improve the quality of the philosophy presented in the posts on the forum. — prompt


I'll fetch and analyze this philosophy forum thread to identify suggestions for using AI effectively in philosophical work.

Based on my analysis of this philosophy forum thread, here are the suggestions for using AI that would supplement or improve the quality of philosophical work:

    Productive AI Uses for Philosophy

    **Research and Knowledge Acquisition:**
    - Use AI to quickly summarize philosophical concepts, theories, or historical contexts to establish common ground for discussion
    - Summarize the thought of specific philosophers as a starting point
    - Guard against AI confabulation by always asking for sources and verifying them
    - Use AI for extensive research conversations (as opposed to writing)

    **Critical Engagement and Self-Improvement:**
    - Use AI as a Socratic sparring partner to challenge your own arguments
    - Feed your draft work to AI (pretending it's from someone else) and ask for critique to identify weak points before posting
    - Have AI generate counter-arguments to strengthen your position
    - Use AI to explore multiple interpretations and readings of philosophical texts by crafting prompts that cover a wide variety of perspectives

    **Role-Playing and Dialectical Exploration:**
    - Get AI to role-play as a specific philosopher or someone holding a particular view
    - Have AI engage from different perspectives (as itself, as you, as various other people)
    - Use AI to argue positions you don't hold, allowing you to hear what you don't want to hear

    **Writing Support:**
    - Use AI for clarity and editing—rephrasing clunky sentences or improving flow
    - Use AI as a stylistic editor for grammar, syntax, and clarity after writing your own ideas
    - Use AI for proofreading (though be aware this can sometimes edge too far into rewriting)

    **Synthesis and Organization:**
    - Use AI to synthesize multiple perspectives or arguments (e.g., synthesizing different philosophical approaches in a particular style)
    - Use AI to help structure ideas and brainstorm, but not to compose full arguments
    - Use AI for the "tedious tasks" of organizing and structuring complex material

    **Critical Best Practices:**
    - **Always verify AI output** - treat AI as an "over-confident assistant" that requires rigorous evaluation
    - **Maintain intellectual ownership** - you must be able to defend any AI-generated argument in your own terms
    - **Iterate and refine** - use AI outputs as starting points for your own philosophical refinement
    - **Check sources** - when AI provides references, verify them (as they can be fabricated)
    - **Use transparency** - disclose substantial AI assistance to maintain trust and intellectual honesty
    - **Avoid tunnel vision** - craft prompts that explore diverse readings rather than confirming your biases
    - **Make it your own** - AI-generated knowledge must be internalized just as you would internalize knowledge from any source

    The thread emphasizes that AI should augment human thinking rather than replace it, serving as a tool for expanding your philosophical capacities while you remain the ultimate director and arbiter of the ideas.
    — Claude