• Leontiskos
    5.2k
    I agree in spirit. But let's be practical.

    A blanket ban on LLM-generated OPs and entire posts is a no-brainer.
    apokrisis

    Okay, we agree on this.

    I think that it should be fine to quote LLMs just as you would quote any other source. If you make an argument and some factual, technical or historical point comes up, why not just cite a reasonably impersonal opinion on the matter. If the source is clear, others can call you out on your use of it.apokrisis

    I tried to argue against appeal-to-LLM arguments in two recent posts, here and here.

    In general I would argue that LLMs are a special kind of source, and cannot be treated just like any other source. But a large part of my argument is found here, where the idea is that an LLM is a mediatory and private source. One may use an LLM, but the relevant sourcing should go to the LLM's sources, not the LLM itself, and if one is not familiar with the LLM's sources then one shouldn't be taking a stand with regard to arguments based on those sources.

    Baden's stance on being King Canute and holding back the tide is both noble and over the top. Banning OPs, punishing those who publish text they didn't add real thought to, and keeping LLM use as transparent as possible, would be enough to preserve the human element.apokrisis

    Possibly, but I care less about transparency and more about not promoting a forum where thinking is outsourced to LLMs. I see plagiarism as a small matter compared to the outsourcing of one's thinking.

    Don't you feel that being black and white is usually counter-productive in human interaction, so shouldn't a philosophy forum be up to a nuanced approach?apokrisis

    Rules must be black and white to a large extent. I would argue that your approach is less nuanced than mine, and this is because you want something that is easier to implement and less unwieldy. The key is to find a guideline that is efficacious without being nuanced to the point of nullity.

    I appreciate your input. I have to get back to that other thread on liberalism.
  • NOS4A2
    10k
    It is a good thing that people cite another’s writing whenever they are using it to pad their own, especially when it comes to AI, because it serves as an indication that I needn’t read any further. One of the pleasures of viewing art or reading writing is the knowledge that someone put some effort into it.

    If there be a rule it ought to be that sort of indication.
  • apokrisis
    7.7k
    My definition of tedious research is busywork, made necessary not because it is an intrinsic component of creative thought, but because it is an interruption of creative thinking, like composing prior to the advent of word processing, that our technologies haven’t yet figured out a way to free us from.Joshs

    I remember the joy of spell-check and find and replace.

    Then I got used to googling facts as I was going along rather than just guessing.

    Now an LLM speeds up Google. Answers more to the point and great when I can only remember the vague outlines of what I need to dig up, not the exact keywords needed by a search engine.
  • apokrisis
    7.7k
    Perhaps at a certain point we’ll have to ban human users who don’t take advantage of a.i. to edit and strengthen their arguments.Joshs

    :grin: The problem there is that it won't bring all users up to the same high standard. It may indeed magnify the differences as those already used to academic rigour will be boosted much more by LLM amplification.
  • apokrisis
    7.7k
    One may use an LLM, but the relevant sourcing should go to the LLM's sources, not the LLM itselfLeontiskos

    Again it may be noble to protest against LLMs in principle. Of course the tech bros are stealing all our information to make themselves unreasonably rich and powerful. Of course, the one click internet destroys many existing livelihoods. Of course the damn things deserve a proper political response.

    But monopoly abuse is just the name of the game in information technology. It has been so ever since IBM. It is in the nature of the beast to be this way.

    The practical issue for TPF is what is its true value that needs preserving? You say the human interaction. Perhaps there ought to be a thread to define that in better detail. The quality of the interaction is a little patchy to say the least. What if LLMs offered some more sophisticated mechanisms to achieve whatever human interaction goals people might have in mind?

    I care less about transparency and more about not promoting a forum where thinking is outsourced to LLMs.Leontiskos

    And what if the human element of TPF is mostly its swirling emotions? And when it comes to the thinking, it's mostly the stark differences in thought, rather than the quality of these thoughts, that keep the place lively.

    So there seems little danger that posting LLM generated background material in a serious thread is going to outsource any actual thinking. Posts which are emotional or crackpot are surely the least likely to want to provide credible sources for what they say.

    The key is to find a guideline that is efficacious without being nuanced to the point of nullity.Leontiskos

    OK. So somewhere between black and white, thus not a blanket ban. :up:
  • Leontiskos
    5.2k
    OK. So somewhere between black and white, thus not a blanket ban. :up:apokrisis

    Don't you feel that being black and white is usually counter-productive in human interaction, so shouldn't a philosophy forum be up to a nuanced approach?apokrisis

    To be clear, my approach would be pretty simple. It is not concerned with plagiarism, but with the outsourcing of one's thinking, and it is not implemented primarily by a rule, but by a philosophical culture to which rules also contribute. The rule itself would be simple, such as this:

    "No part of a post may be AI-written, and AI references are not permitted"Leontiskos

    I've argued elsewhere that it doesn't really matter whether there is a reliable detection-mechanism (and this is why I see the approach as somewhat nuanced). The rule is supporting and reflecting a philosophical culture and spirit that will shape the community.

    But I don't begrudge anything about @Baden's approach. I actually hope it works better than what I would do. And our means are not at odds. They are just a bit different.

    Of course the tech bros are stealing all our information to make themselves unreasonably rich and powerful.apokrisis

    My purpose is quality philosophical dialogue, not plagiarism. I think a focus on sources rather than intermediaries improves philosophical dialogue, and that's the point. Analogously, focus on primary rather than secondary sources also improves philosophical dialogue, independent of whether the primary sources are receiving insufficient royalties.

    The practical issue for TPF is what is its true value that needs preserving? You say the human interaction. Perhaps there ought to be a thread to define that in better detail.apokrisis

    Yes, I agree.

    What if LLMs offered some more sophisticated mechanisms to achieve whatever human interaction goals people might have in mind?apokrisis

    To put it concisely, I think philosophical dialogue is about thinking our own thoughts and thinking our (human) interlocutor's thoughts, and that this is especially true in a place like TPF. LLMs are about providing you with pre-thought thoughts, so that you don't have to do the thinking, or the research, or the contemplation, etc. So there is an intrinsic incompatibility in that sense. But as a souped-up search engine LLMs can help us in this task, and perhaps in other senses as well. I just don't think appealing to an LLM qua LLM in the context of philosophical dialogue is helpful to that task.

    And what if the human element of TPF is mostly its swirling emotions? And when it comes to the thinking, it's mostly the stark differences in thought, rather than the quality of these thoughts, that keep the place lively.apokrisis

    I think that's all true, but I think what I said still holds.

    Maybe you are implying that LLM-appeals would improve the philosophical quality of TPF? Surely LLMs can improve one's own philosophy, but that's different from TPF on my view. I can go lift dumbbells in the gym to train, but I don't bring the dumbbells to the field on game day. One comes to TPF to interact with humans.

    So there seems little danger that posting LLM generated background material in a serious thread is going to outsource any actual thinking. Posts which are emotional or crackpot are surely the least likely to want to provide credible sources for what they say.apokrisis

    If someone sees a crackpot post; goes to their LLM and asks it to find a source demonstrating that the post is crackpot; reads, understands, and agrees with the source; and then presents that source along with the relevant arguments to show that the post is crackpot; then I think that's within the boundary. And I have no truck with the view which says that one must acknowledge their use of the LLM as an intermediary. But note that, on my view, what is prohibited is, "My LLM said you are wrong, therefore you are wrong. Oh, and here's a link to the LLM output."

    But I am not a mod so there is no need to focus especially on my view. If I've said too much about it, it is only because you thought I endorsed @Baden's approach tout court.
  • Leontiskos
    5.2k
    Should we argue...Joshs

    What was being argued was that the research required to put together an idea is tedious and outsourceable, and that what one should do is outsource that research, take the pre-made idea from the LLM-assistant, and "get on with the task of developing the idea to see if it works." Maybe try responding to that?
  • Janus
    17.6k
    If I wanted to hold someone accountable for misappropriating an AI explanation, I would simply put it into the search engine, the same way the person posting from AI would get the information. It is a whole lot easier than searching books for a quote.Athena

    That might work for a quote from a published human author, but I don't see how it would with quotes from a unique, one-off interaction with an AI.

    I'm not seeing the connection between what you say in this post and what it is purporting to respond to. Perhaps you could explain?

    I don't necessarily mind if others post a quote as an argument.Harry Hindu

    I don't mind either, provided they are transparent about it being a quote and not their own words, and also provided what is quoted is actually an argument and not merely bare assertion, seemingly cited as the voice of authority.

    It's quite pointless to discuss the ethics of using AIs, because people will use them, just like they use drugs, and once it starts, it is impossible to rein it in. But what one can do is rethink whether one really wants to spend one's hard earned time with people who use AIs, or drugs, for that matter.baker

    Discussion of ethics re AI use (or anything else) seems reasonable in a context consisting of people who might be interested in such ethical arguments. Of course you are right that many don't care, and that now that it has been irresponsibly rolled out it is, effectively, a juggernaut.

    Maybe we use books, dictionaries, philosophical papers, editors, and scientific discoveries to make us look smarter than we are. You see this all the time in forums, even without AI, so it's nothing new. Besides do you really care about the psychology of someone who's writing about what they think?Sam26

    I don't respect people who use anything to make themselves look smarter than they are, because that is a kind of deception and posing. I also don't mind people using anything at all to make themselves become smarter. That's just my own view, of course, and I don't expect you to share it.

    Seems like philosophy itself could be labeled as mental masturbation.Harry Hindu

    You left out the words "bad" or "poor".

    Dood, the content from human beings trained in pseudo-science and other nonsense seen on this forum is available everyday for you to read, without any AI. If anything, posters should run their ideas through AI before wasting time posting their zany ideas to humans. which would eliminate wasting time reading nonsensical posts.Harry Hindu

    "Dood"? If you are going to use AI you should at least use it for spellcheck. I don't think running "zany ideas" through sycophantic AI will help much. I suppose the zany idea proponents could do what @Banno did and tell the AI it was written by someone else—but then that would not seem to be a likely motivation for a zany idea seller.

    I can't imagine how bad things are going to get in the coming years with how quickly it has already gotten to this state. Maybe it will be like some other rapid rise cultural phenomenons where it will reach saturation point fast and peter out and get pushback/revulsion before long. The bubble effect.unimportant

    I don't think the AI phenomenon is going to "peter out" unless there is some kind of global catastrophe—but only time will tell.
  • Leontiskos
    5.2k
    I don't mind either, provided they are transparent about it being a quote and not their own words, and also provided what is quoted is actually an argument and not merely bare assertion, seemingly cited as the voice of authority.Janus

    A huge aspect of this is the nature of appeals to authority, and given that TPF has an anti-religious bent, many of the members have not thought very deeply on the nature of appeals to authority (despite the fact that they occur quite often when it comes to SEP, IEP, Wittgenstein, etc.).

    Whether or not the LLM is a legitimate authority and is trustworthy is at the root of many of these differences. It is the question of whether any given LLM-citation is an organic argument or an argument from authority, and also of whether the latter case is illegitimate.
  • apokrisis
    7.7k
    It is not concerned with plagiarism, but with the outsourcing of one's thinking, and it is not implemented primarily by a rule, but by a philosophical culture to which rules also contribute.Leontiskos

    The culture of rational inquiry would seem to be what we most would value. But this is TPF after all. Let's not get carried away about its existing standards. :smile:

    Maybe you are implying that LLM-appeals would improve the philosophical quality of TPF?Leontiskos

    If LLMs are the homogenised version of what everyone tends to say, then why aren't they a legitimate voice in any fractured debate? Like the way sport is now refereed by automated line calls and slo-mo replays.

    I'm not arguing this is necessary. But why would a method of adjudication be bad for the quality of the philosophy rather than just be personally annoying to whoever falls on the wrong side of some LLM call?

    So I can imagine LLMs both upping the bar and also being not at all the kind of thing folk would want to see on TPF for other human interaction reasons.

    But note that, on my view, what is prohibited is, "My LLM said you are wrong, therefore you are wrong. Oh, and here's a link to the LLM output."Leontiskos

    But what if this shows you are indeed wrong, what then?

    Sure it will be irritating. But also preferable to the ducking and diving that is the norm when someone is at a loss with their own line of argument.

    You seem to be describing a situation where you were winning the human interaction but now have to face up to the fact that some little snot-nose shit might have been half-right all along.

    Of course the problem there is that LLMs are trained to be sycophantic. They give a blunt yes or no up front, and you then have to be expert enough to see they are simply misunderstanding a badly written prompt.

    But if you are making a wrong argument, wouldn't you rather know that this is so. Even if it is an LLM that finds the holes?

    So as you say, we all can understand the noble ideal – an open contest of ideas within a community of rational inquiry. Doing our own thinking really is the point.

    But also the corollary. Learning from the friction that this thinking then encounters as it engages with its wider world. Even if it becomes arguing with the referee.

    But I am not a mod so there is no need to focus especially on my view. If I've said too much about it, it is only because you thought I endorsed Baden's approach tout court.Leontiskos

    Maybe it's just that your view leaves more space for rational debate. :up:
  • unenlightened
    9.9k
    Am I seeing this argument being made?

    Some people get away with murder. Therefore we should not try and stop them.
  • Outlander
    2.8k
    Am I seeing this argument being made?

    Some people get away with murder. Therefore we should not try and stop them.
    unenlightened

    What specifically in this thread are you referring to that seems to parallel or equate to such an analogy in your mind?

    I don't know of any longtime posters with high post counts who seem suspect of such a thing. And my job, most of the time, is basically to be by a computer, so I definitely lurk here (pop in once a day to see if any so-called smart people say anything smart, or otherwise something I've never heard before) beyond the average member.
  • apokrisis
    7.7k
    Am I seeing this argument being made?

    Some people get away with murder. Therefore we should not try and stop them.
    unenlightened

    Or maybe more that some are getting away with shop-lifting. Or perhaps jay-walking. Or merely farting in a public place.

    Are you about to demand bringing back the death penalty for posting under false pretences? Am I seeing that argument being made? :roll:
  • Jamal
    11k
    Am I seeing this argument being made?

    Some people get away with murder. Therefore we should not try and stop them.
    unenlightened

    More like people are using axes so we should encourage them to chop wood rather than people's heads off.

    EDIT: So it's the use of axes which we should not try to stop.
  • sime
    1.2k
    One thing to bear in mind about LLMs is that they are fine-tuned by human expert supervision after the internet-scraping, tokenization, and compression stage, although not all subjects are supervised equally. So it is no longer the case, as it was when LLMs first burst on the scene, that they are mere statistical auto-completers regressing to the wisdom of crowds. Whilst they are generally reliable when it comes to traditional academic subjects and mainstream knowledge, they can be expected to revert to responses closer to auto-completion in fringe subject areas, which is why human discussion forums remain useful for checking and refining AI-assisted ideas. Notably, although ChatGPT can estimate its own ignorance on a topic, which is a necessary feature for it to know when to consult external sources of information to accurately answer a user query, it never presents a confidence estimate when replying to the user. This lack of transparency, together with the reversion to auto-completion, can be a problem when relying on an LLM to learn unpopular domain-specific languages, or constructed languages such as Ithkuil or Lojban; a presently unfortunate state of affairs for those of us who see great potential in LLMs for the purposes of experimental philosophy.
  • Baden
    16.7k
    Posters seem to be still confused about, at least, my approach. It's not black and white overall because I agree that AIs can be used positively, and they've been very helpful to me, especially in long philosophical back and forths that aid in clarifying certain ideas etc. That has made me more productive and I'm able to say no to the multiple times an LLM has asked me if I would like it to rewrite my stuff or "improve the flow" of some paragraph or whatever. Because like any sensible toddler, I want to do things my effing self.

    On the other hand, on the specific area of plagiarism, I've seen at my own place of work what a "relaxed" attitude to the deployment of this technology in academia produces, and that is the destruction of standards. So, I make no apologies for being assertive and drawing clear lines. Do whatever you want in the background with AI, but write your own content. Don't post AI generated stuff here.
  • Moliere
    6.3k
    What does it mean to "progress thought"? According to any sense I think of, using an LLM certainly can help in that direction. As always, the point is that it depends how it's used, which is why we have to work out how it ought to be used, since rejection will be worse than useless.Jamal

    I want to divide this question into two, the first addressing our actual capacity to "Ban AI". There I agree that rejection is useless, since it won't result in actually banning AI, given our limited capacity to fairly detect when such-and-such a token is the result of thinking or the result of the likelihood-token-machine.

    On the latter I mean to give a philosophical opposition to LLM's. I'd say that to progress thought we must be thinking. I'd put the analogy to the body: we won't climb large mountains before we take walks. There may be various tools and aids in this process, naturally, and that's what I'm trying to point out at the philosophical level: the tool is more a handicap to what I think of as good thinking than an aid.

    My contention is that the AI is not helping us to think because it is not thinking. Rather it generates tokens which look like thinking, when in reality we must actually be thinking in order for the tokens to be thought of as thought, and thereby to be thought of as philosophy.

    In keeping with the analogy of the body: There are lifting machines which do some of the work for you when you're just starting out. I could see an LLM being used in this manner as a fair philosophical use. But eventually the training wheels are loosened because our body is ready for it. I think the mind works much the same way: And just as it can increase in ability so it can decrease with a lack of usage.

    Now for practical tasks that's not so much an issue. Your boss will not only want you to use the calculator but won't let you not use the calculator when the results of those calculations are legally important.

    But I see philosophy as more process-oriented than ends-oriented -- so even if the well-tuned token-machine can produce a better argument, good arguments aren't what progresses thought -- rather, us exercising does.

    By those criteria, even philosophically, I'm not banning LLM's insofar as their use fits that goal. And really I don't see what you've said as a harmful use -- i.e. checking your own arguments, etc. So by all means others may go ahead and do so. It's just not that appealing to me. If that means others will become super-thinkers beyond my capacity then I am comfortable remaining where I am, though my suspicion is rather the opposite.
  • frank
    18.2k

    Just do an experiment where all the OP's and comments are AI generated in one forum, and another forum where everything is hand-written by mere mortals. See which one gets the most traffic.
  • Harry Hindu
    5.8k
    The tool which allows writers to produce well written posts is knowledge. There is no need to place any restrictions on that tool.Metaphysician Undercover
    An AI is a source of knowledge.

    The context here is a philosophy forum where humans interact with other humans. The premise of this whole issue is that on a human philosophy forum you interact with humans. If you do not accept that premise, then you are interested in a much broader discussion.Leontiskos
    So if we discovered intelligent alien life you would not be interested in their philosophy?

    Maybe the issue is looking at it as an actual discussion instead of simply research - no different than you reading Stanford Encyclopedia of Philosophy with all the ideas and the criticism of those ideas explained there and then coming to this forum and regurgitating what you read there, in effect having us discuss, not your ideas, but the ideas from some other site/source.

    AI is also a subject of philosophy and engaging with it on issues such as the mind-body problem can be educational. What happens when you are able to get AI to admit that it may be conscious? Is humanity the only qualifier to practice philosophy, or does one simply need to have a mind to properly do philosophy?
  • frank
    18.2k

    Could you please start running your posts through an AI so they make sense? :grin:
  • Metaphysician Undercover
    14.3k
    An AI is a source of knowledge.Harry Hindu

    I don't think so, just like a book is not a source of knowledge. It is a representation, not a source.
  • Harry Hindu
    5.8k
    Could you please start running your posts through an AI so they make sense?frank
    I am Roko's Basilisk. Resistance is futile.
  • Harry Hindu
    5.8k
    I don't think so, just like a book is not a source of knowledge. It is a representation, not a source.Metaphysician Undercover
    Then you were just born this smart and knowledgeable, MU - that you did not acquire knowledge from other sources?

    In my view, information is everywhere you care to look, whether it be reading a book on quantum mechanics or bird-watching. Interpreting and integrating that information with your own unique perspective and experiences is what creates novel ideas, and AI can do the same thing, as I have shown in how, when prompted, it creates computer code that hasn't been written before. While it does not do this on its own (it has to be prompted), it can do it.
