• ProtagoranSocratist
    69
    guard against confabulation by asking for sources and checking them.Banno

    yes, and over time you can kind of intuit the accuracy of what it's telling you based on subject matter and topic. For example, it's pretty much 100% accurate if you ask it for common knowledge in popular subjects, but if the subject is more obscure, or relies more on analogue information, then it's much more likely to fail.
  • Leontiskos
    5.2k
    So you cannot see the difference between "A rule against AI use will not be heeded" and "A rule against AI use cannot be enforced". Ok.Banno

    We both know that the crux is not unenforceability. If an unenforceable rule is nevertheless expected to be heeded, then there is no argument against it. Your quibble is a red herring in relation to the steelman I've provided. :roll:

    Baden? Tell us what you think. Is my reply to you against the rules?Banno

    I would be interested, too. I haven't seen the rule enforced despite those like Banno often contravening it.

    It is also worth noting how the pro-AI Banno simply takes the AI at its word, as a blind-faith authority. This is precisely what the end game is.
  • Banno
    28.9k
    Yep. It does a pretty good job of locating quotes and other supporting information, too.
  • ProtagoranSocratist
    69
    for example (just sharing my experiences), it's excellent for verifying claims from random internet users (it immediately calls out their BS) and helping you write computer programs, but pretty awful at helping with musical creativity, and I've gotten mixed results with organizing wildlife information. With text, it's easy for it, but with photos, it still struggles a little.
  • Banno
    28.9k
    It is also worth noting how the pro-AI Banno simply takes the AI at its word,Leontiskos

    No, I presented what the AI had to say, for critique. Go ahead and look at the papers it cites, and see if it represented them correctly. Let us know the result. Use the AI as a part of an ongoing conversation.

    At stake here is the task set for our Mods. Do they spend time guessing whether a post is AI generated, or removing poor posts, regardless of their provenance?
  • Leontiskos
    5.2k
    No, I presented what the AI had to say, for critique. Go ahead and look at the papers it cites...Banno

    But you didn't read the papers it cited, and you concluded, "So yes, I overstated my case. You may be able to recognise posts as AI generated at a slightly better rate than choosing at random."

    If you were better at logic you would recognize your reasoning process: "The AI said it, so it must be true." This is the sort of mindless use of AI that will become common if your attempt to undermine the LLM rule succeeds.
  • Banno
    28.9k
    It's not too bad at providing support for game play, too.
  • ProtagoranSocratist
    69
    It does amazing things with anything related to computers...yet sometimes it makes poor guesses about what should work in a certain situation.
  • Banno
    28.9k
    But you didn't read the papers it cited, and you concluded, "So yes, I overstated my case. You may be able to recognise posts as AI generated at a slightly better rate than choosing at random."Leontiskos

    It's noticeable that you have not presented any evidence, one way or the other.

    If you think that what the AI said is wrong, then what you ought to do is present evidence, perhaps in the form of peer-reviewed articles that say that humans can reliably recognise AI generated text.

    But that is not what you have chosen to do. Instead, you cast aspersions. This is another part of your modus operandi, in addition to your confabulation. You do not participate in a discussion about the topic, preferring instead to talk about the folk posting.

    It's tedious.
  • Leontiskos
    5.2k
    It's noticeable that you have not presented any evidence, one way or the other.

    If you think that what the AI said is wrong, then what you ought do is to present evidence, perhaps in the form of peer-reviewed articles that say that humans can reliably recognise AI generated text.

    But that is not what you have chosen to do. Instead, you cast aspersions.
    Banno

    I am pointing out that all you have done is appealed to the authority of AI, which is precisely something that most everyone recognizes as a danger (except for you!). Now you say that I am "casting aspersions" on the AI, or that I am engaging in ad hominem against the AI (!).

    The AI has no rights. The whole point is that blind appeals to AI authority are unphilosophical and irresponsible. That's part of why the rule you are trying to undermine exists. That you have constantly engaged in these blind appeals could be shown rather easily, and it is no coincidence that the one who uses AI in these irresponsible ways is the one attempting to undermine the rule against AI.
  • Banno
    28.9k
    I am pointing out that all you have done is appealed to the authority of AI,Leontiskos

    That's simply not so. I am not saying that because it is AI generated, it is authoritative. The material is offered here for critique. Baden asked who said that the detection of AI text was unreliable. I used an AI to provide examples in answer to his question.

    If you have some evidence that the citations provided by the AI are incorrect or misrepresent the case, then present it.

    The AI is not being appealed to as an authority, but being used in order to provide sources for further consideration.

    It is being used to promote the conversation, not to foreclose on it.
  • Leontiskos
    5.2k
    The AI is not being appealed to as an authorityBanno

    But it is, as I've shown. You drew a conclusion based on the AI's response, and not based on any cited document the AI provided. Therefore you appealed to the AI as an authority. The plausibility of the conclusion could come from nowhere else than the AI, for the AI is the only thing you consulted.

    This goes back to what I've pointed out a number of times, namely that those who take the AI's content on faith are deceiving themselves when they do so, and are failing to see the way they are appealing to the AI as an authority.
  • Banno
    28.9k
    Again, you have not even attempted to show that the AI's summation was in any way inaccurate. Again, it is presented in support of a contention, and not to foreclose on the discussion. It is not an appeal to authority.

    I'll leave you to it, Leon. Cheers.
  • Leontiskos
    5.2k
    Again, you have not even attempted to show that the AI's summation was in any way inaccurate.Banno

    True, and that's because there is no such thing as an ad hominem fallacy against your AI authority. According to the TPF rules as I understand them, you are not allowed to present AI opinions as authoritative. The problem is that you have presented the AI opinion as authoritative, not that I have disregarded it as unauthoritative. One simply does not need some counterargument to oppose your appeal to AI. The appeal to AI is intrinsically impermissible. That you do not understand this underlines the confusion that AI is breeding.
  • Joshs
    6.5k
    There are primary sources, there are secondary sources, there are search engines, and then there is the LLM. Consulting a secondary source and consulting an LLM are not the same thing.

    It is worth noting that those who keep arguing in favor of LLMs seem to need to make use of falsehoods, and especially false equivalences.
    Leontiskos

    If one is using a.i. properly (and to me that’s the real issue here, not whether to use it at all), then the difference between consulting a secondary source and consulting an llm is the following:
    After locating a secondary source one merely jots down the reference and that’s the end of it. When one locates an argument from an llm that one finds valuable, one decides if the argument is something one can either defend in one’s own terms, or should be backed up with a direct quote from either a secondary or primary source. This can then be achieved with a quick prompt to the llm, and finding a reference for the quote. The fact that proper use of a.i. leaves one with the choice between incorporating arguments one can easily defend and elaborate on based on one’s own understanding of the subject, and connecting those arguments back to verifiable quotes, means that the danger of falsehood doesn’t come up at all.
  • Leontiskos
    5.2k
    the difference between consulting a secondary source and consulting an llm is the following:
    After locating a secondary source one merely jots down the reference and that’s the end of it.
    Joshs

    Well, they could read the secondary source. That's what I would usually mean when I talk about consulting a secondary source.

    When one locates an argument from an llm...Joshs

    Okay, but remember that many imbibe LLM content without thinking of it as "arguments," so you are only presenting a subclass here.

    When one locates an argument from an llm that one finds valuable, one decides if the argument is something one can either defend in one’s own terms, or should be backed up with a direct quote from either a secondary or primary source. This can then be achieved with a quick prompt to the llm, and finding a reference for the quote.Joshs

    Right, and also reading the reference. If someone uses an LLM as a kind of search engine for primary or secondary sources, then there is no concern. If someone assents to the output of the LLM without consulting (i.e. reading) any of the human sources in question, or if one is relying on the LLM to summarize human sources accurately, then the problems in question do come up, and I think this is what often occurs.

    The fact that proper use of a.i. leaves one with the choice between incorporating arguments one can easily defend and elaborate on based on one’s own understanding of the subject, and connecting those arguments back to verifiable quotes, means that the danger of falsehood doesn’t come up at all.Joshs

    What do you mean, "the danger of falsehood doesn't come up at all"?

    It seems to me that you use LLMs more responsibly than most people, so there's that. But I think there is a very large temptation to slip from responsible use to irresponsible use. LLMs were built for quick answers and the outsourcing of research. I don't find it plausible that the available shortcuts will be left untrodden.

    If the LLM is merely being used to find human sources, which are in turn consulted in their own right, then I have no more objection to an LLM than to a search engine. Elsewhere I give an argument to the effect that LLMs should not be directly used in philosophical dialogue (with other humans). I am wondering if you would disagree.
  • RogueAI
    3.4k
    if the AI-using students are outcompeting the non-AI-using students (or if it's a "punishment," as you claim, to write a thesis entirely by yourself without AI help), isn't the implication that the AI is producing better work than the students at your university?

    This goes back to Philosophim's point back on page 1: the argument is everything in philosophy. A good, sound argument produced by an AI should trump a bad argument produced by a human, right? A 40% AI-written thesis that's better than a 100% human-produced one should be preferable, right?
  • Jamal
    11k
    If you have a group of people argue over a topic and then you appoint a person to summarize the arguments and produce a working document that will be the basis for further discussion, you haven't given them a "calculator" job. You have given them the most important job of all. You have asked them to draft the committee document, which is almost certainly the most crucial point in the process. Yet you have re-construed this as "a calculator job to avoid tedium."Leontiskos

    Arguably the most important part of the job is very often the "calculator" task, the most tedious task.

    To say, "We encourage X," is to encourage X. It is not to say, "If you are doing Y, then we would encourage you to do Y in X manner." To say "allow" or "permit" instead of "encourage" would make a large difference.Leontiskos

    I may rewrite it to avoid misreadings like yours and bongo's. But I'll keep "encourage", since the point is to encourage some uses of LLMs over others. In "We encourage X," the X stands for "using LLMs as assistants for research, brainstorming, and editing," with the obvious emphasis on the "as". But it seems it wasn't obvious enough, so as I say, I might rewrite it or add a note at the top.
  • Baden
    16.7k
    What is the end/telos? Of a university? Of a philosophy forum?

    Universities have in some ways become engines for economic and technological progress. If that is the end of the university, and if AI is conducive to that end, then there is no reason to prevent students from using AI. In that case a large part of what it means to be "a good student" will be "a student who knows how to use AI well," and perhaps the economically-driven university is satisfied with that.

    But liberal education in the traditional sense is not a servant to the economy. It is liberal; free from such servility. It is meant to educate the human being qua human being, and philosophy has always been a central part of that.
    Leontiskos

    Absolutely. I made this point to a colleague when discussing this issue. The university is not just the buildings and the abstract institution, it is the valuing of knowledge, and the process of fostering and advancing it. Similarly, here, we are not just about being efficient in getting words on a page, we are supposed to be developing ourselves and expressing ourselves. Reflectivity and expressivity, along with intuition and imagination, are at the heart of what we do here, and at least of my notion of what it means to be human.

    And, while AIs can be a useful tool (like all technology, they are both a toxin and a cure), there is a point at which they become inimical to what TPF is and should be. The line for me is certainly crossed when posters begin to use them to directly write posts and particularly OPs, in full or in part. And this is something it is still currently possible to detect. The fact that it is more work for us mods is unfortunate. But I'm not for throwing in the towel.
  • Jamal
    11k
    The line for me is certainly crossed when posters begin to use them to directly write posts and particularly OPsBaden

    For the record, I agree with this, but I think it has to be put in the context of a "How to use LLMs" guide, since there is significant ambiguity even in a statement like "you are prohibited from using AI to write a post on this forum".
  • Baden
    16.7k
    Agreed. :up:
  • Baden
    16.7k
    Baden? Tell us what you think. Is my reply to you against the rules? And should it be?Banno

    You were transparent about where you got the information, so it comes down to a question of credibility, and we can make our own minds up on that. If you had asked the AI to write your reply in full or in part and had not disclosed that, we would be in the area I want to immediately address.

    We may disagree about this issue, but I appreciate your character and personality, and that has always come through in your writing. How you internally process information from different sources when you are clear about your sources is not my main concern here. It is that I think we all ought to make sure we continue to be ourselves and produce our unique style of content. That is what makes this community diverse and worthwhile---not some product, but a process.