• Leontiskos
    5.2k
    I agree in spirit. But let's be practical.

    A blanket ban on LLM-generated OPs and entire posts is a no-brainer.
    apokrisis

    Okay, we agree on this.

    I think that it should be fine to quote LLMs just as you would quote any other source. If you make an argument and some factual, technical or historical point comes up, why not just cite a reasonably impersonal opinion on the matter? If the source is clear, others can call you out on your use of it.apokrisis

    I tried to argue against appeal-to-LLM arguments in two recent posts, here and here.

    In general I would argue that LLMs are a special kind of source, and cannot be treated just like any other source is treated. But a large part of my argument is found here, where the idea is that an LLM is a mediatory and private source. One may use an LLM, but the relevant sourcing should go to the LLM's sources, not the LLM itself, and if one is not familiar with the LLM's sources then one shouldn't be taking a stand with regard to arguments based on those sources.

    Baden's stance on being King Canute and holding back the tide is both noble and over the top. Banning OPs, punishing those who publish text they didn't add real thought to, and keeping LLM use as transparent as possible, would be enough to preserve the human element.apokrisis

    Possibly, but I care less about transparency and more about not promoting a forum where thinking is outsourced to LLMs. I see plagiarism as a small matter compared to the outsourcing of one's thinking.

    Don't you feel that being black and white is usually counter-productive in human interaction, so shouldn't a philosophy forum be up to a nuanced approach?apokrisis

    Rules must be black and white to a large extent. I would argue that your approach is less nuanced than mine, and this is because you want something that is easier to implement and less unwieldy. The key is to find a guideline that is efficacious without being nuanced to the point of nullity.

    I appreciate your input. I have to get back to that other thread on liberalism.
  • NOS4A2
    10k
    It is a good thing that people cite another’s writing whenever they are using it to pad their own, especially when it comes to AI, because it serves as an indication that I needn’t read any further. One of the pleasures of viewing art or reading writing is the knowledge that someone put some effort into it.

    If there be a rule it ought to be that sort of indication.
  • apokrisis
    7.7k
    My definition of tedious research is busywork, made necessary not because it is an intrinsic component of creative thought, but because it is an interruption of creative thinking, like composing prior to the advent of word processing, that our technologies haven’t yet figured out a way to free us from.Joshs

    I remember the joy of spell-check and find and replace.

    Then I got used to googling facts as I was going along rather than just guessing.

    Now an LLM speeds up Google. Answers are more to the point, and it's great when I can only remember the vague outlines of what I need to dig up, not the exact keywords a search engine needs.
  • apokrisis
    7.7k
    Perhaps at a certain point we'll have to ban human users who don't take advantage of a.i. to edit and strengthen their arguments.Joshs

    :grin: The problem there is that it won't bring all users up to the same high standard. It may indeed magnify the differences as those already used to academic rigour will be boosted much more by LLM amplification.
  • apokrisis
    7.7k
    One may use an LLM, but the relevant sourcing should go to the LLM's sources, not the LLM itselfLeontiskos

    Again it may be noble to protest against LLMs in principle. Of course the tech bros are stealing all our information to make themselves unreasonably rich and powerful. Of course the one-click internet destroys many existing livelihoods. Of course the damn things deserve a proper political response.

    But monopoly abuse is just the name of the game in information technology. It has been so ever since IBM. It is in the nature of the beast to be this way.

    The practical issue for TPF is what is its true value that needs preserving? You say the human interaction. Perhaps there ought to be a thread to define that in better detail. The quality of the interaction is a little patchy to say the least. What if LLMs offered some more sophisticated mechanisms to achieve whatever human interaction goals people might have in mind?

    I care less about transparency and more about not promoting a forum where thinking is outsourced to LLMs.Leontiskos

    And what if the human element of TPF is mostly its swirling emotions? And when it comes to the thinking, it's mostly the stark differences in thought, rather than the quality of these thoughts, that keep the place lively.

    So there seems little danger that posting LLM-generated background material in a serious thread is going to outsource any actual thinking. Posts which are emotional or crackpot are surely the least likely to want to provide credible sources for what they say.

    The key is to find a guideline that is efficacious without being nuanced to the point of nullity.Leontiskos

    OK. So somewhere between black and white, thus not a blanket ban. :up:
  • Leontiskos
    5.2k
    OK. So somewhere between black and white, thus not a blanket ban. :up:apokrisis

    Don't you feel that being black and white is usually counter-productive in human interaction, so shouldn't a philosophy forum be up to a nuanced approach?apokrisis

    To be clear, my approach would be pretty simple. It is not concerned with plagiarism, but with the outsourcing of one's thinking, and it is not implemented primarily by a rule, but by a philosophical culture to which rules also contribute. The rule itself would be simple, such as this:

    "No part of a post may be AI-written, and AI references are not permitted"Leontiskos

    I've argued elsewhere that it doesn't really matter whether there is a reliable detection-mechanism (and this is why I see the approach as somewhat nuanced). The rule is supporting and reflecting a philosophical culture and spirit that will shape the community.

    But I don't begrudge anything about @Baden's approach. I actually hope it works better than what I would do. And our means are not at odds. They are just a bit different.

    Of course the tech bros are stealing all our information to make themselves unreasonably rich and powerful.apokrisis

    My concern is quality philosophical dialogue, not plagiarism. I think a focus on sources rather than intermediaries improves philosophical dialogue, and that's the point. Analogously, a focus on primary rather than secondary sources also improves philosophical dialogue, independent of whether the primary sources are receiving insufficient royalties.

    The practical issue for TPF is what is its true value that needs preserving? You say the human interaction. Perhaps there ought to be a thread to define that in better detail.apokrisis

    Yes, I agree.

    What if LLMs offered some more sophisticated mechanisms to achieve whatever human interaction goals people might have in mind?apokrisis

    To put it concisely, I think philosophical dialogue is about thinking our own thoughts and thinking our (human) interlocutor's thoughts, and that this is especially true in a place like TPF. LLMs are about providing you with pre-thought thoughts, so that you don't have to do the thinking, or the research, or the contemplation, etc. So there is an intrinsic incompatibility in that sense. But as a souped-up search engine LLMs can help us in this task, and perhaps in other senses as well. I just don't think appealing to an LLM qua LLM in the context of philosophical dialogue is helpful to that task.

    And what if the human element of TPF is mostly its swirling emotions? And when it comes to the thinking, it's mostly the stark differences in thought, rather than the quality of these thoughts, that keep the place lively.apokrisis

    I think that's all true, but I think what I said still holds.

    Maybe you are implying that LLM-appeals would improve the philosophical quality of TPF? Surely LLMs can improve one's own philosophy, but that's different from TPF on my view. I can go lift dumbbells in the gym to train, but I don't bring the dumbbells to the field on game day. One comes to TPF to interact with humans.

    So there seems little danger that posting LLM-generated background material in a serious thread is going to outsource any actual thinking. Posts which are emotional or crackpot are surely the least likely to want to provide credible sources for what they say.apokrisis

    If someone sees a crackpot post; goes to their LLM and asks it to find a source demonstrating that the post is crackpot; reads, understands, and agrees with the source; and then presents that source along with the relevant arguments to show that the post is crackpot; then I think that's within the boundary. And I have no truck with the view which says that one must acknowledge their use of the LLM as an intermediary. But note that, on my view, what is prohibited is, "My LLM said you are wrong, therefore you are wrong. Oh, and here's a link to the LLM output."

    But I am not a mod so there is no need to focus especially on my view. If I've said too much about it, it is only because you thought I endorsed @Baden's approach tout court.
  • Leontiskos
    5.2k
    Should we argue...Joshs

    What was being argued was that the research required to put together an idea is tedious and outsourceable, and that what one should do is outsource that research, take the pre-made idea from the LLM-assistant, and "get on with the task of developing the idea to see if it works." Maybe try responding to that?
  • Janus
    17.6k
    If I wanted to hold someone accountable for misappropriating an AI explanation, I would simply put it into the search engine, the same way the person posting from AI would get the information. It is a whole lot easier than searching books for a quote.Athena

    That might work for a quote from a published human author, but I don't see how it would with quotes from a unique, one-off interaction with an AI.

    I'm not seeing the connection between what you say in this post and what it is purporting to respond to. Perhaps you could explain?

    I don't necessarily mind if others post a quote as an argument.Harry Hindu

    I don't mind either, provided they are transparent about it being a quote and not their own words, and also provided what is quoted is actually an argument and not merely bare assertion, seemingly cited as the voice of authority.

    It's quite pointless to discuss the ethics of using AIs, because people will use them, just like they use drugs, and once it starts, it is impossible to rein it in. But what one can do is rethink whether one really wants to spend one's hard earned time with people who use AIs, or drugs, for that matter.baker

    Discussion of ethics re AI use (or anything else) seems reasonable in a context consisting of people who might be interested in such ethical arguments. Of course you are right that many don't care, and that now that it has been irresponsibly rolled out it is, effectively, a juggernaut.

    Maybe we use books, dictionaries, philosophical papers, editors, and scientific discoveries to make us look smarter than we are. You see this all the time in forums, even without AI, so it's nothing new. Besides, do you really care about the psychology of someone who's writing about what they think?Sam26

    I don't respect people who use anything to make themselves look smarter than they are, because that is a kind of deception and posing. I also don't mind people using anything at all to make themselves become smarter. That's just my own view, of course, and I don't expect you to share it.

    Seems like philosophy itself could be labeled as mental masturbation.Harry Hindu

    You left out the words "bad" or "poor".

    Dood, the content from human beings trained in pseudo-science and other nonsense seen on this forum is available everyday for you to read, without any AI. If anything, posters should run their ideas through AI before wasting time posting their zany ideas to humans. which would eliminate wasting time reading nonsensical posts.Harry Hindu

    "Dood"? If you are going to use AI you should at least use it for spellcheck. I don't think running "zany ideas" through sycophantic AI will help much. I suppose the zany idea proponents could do what @Banno did and tell the AI it was written by someone else—but then that would not seem to be a likely motivation for a zany idea seller.

    I can't imagine how bad things are going to get in the coming years with how quickly it has already gotten to this state. Maybe it will be like some other rapid-rise cultural phenomena where it will reach saturation point fast and peter out and get pushback/revulsion before long. The bubble effect.unimportant

    I don't think the AI phenomenon is going to "peter out" unless there is some kind of global catastrophe—but only time will tell.
  • Leontiskos
    5.2k
    I don't mind either, provided they are transparent about it being a quote and not their own words, and also provided what is quoted is actually an argument and not merely bare assertion, seemingly cited as the voice of authority.Janus

    A huge aspect of this is the nature of appeals to authority, and given that TPF has an anti-religious bent, many of the members have not thought very deeply on the nature of appeals to authority (despite the fact that they occur quite often when it comes to SEP, IEP, Wittgenstein, etc.).

    Whether or not the LLM is a legitimate authority and is trustworthy is at the root of many of these differences. It is the question of whether any given LLM-citation is an organic argument or an argument from authority, and also of whether the latter case is illegitimate.
  • apokrisis
    7.7k
    It is not concerned with plagiarism, but with the outsourcing of one's thinking, and it is not implemented primarily by a rule, but by a philosophical culture to which rules also contribute.Leontiskos

    The culture of rational inquiry would seem to be what we most would value. But this is TPF after all. Let's not get carried away about its existing standards. :smile:

    Maybe you are implying that LLM-appeals would improve the philosophical quality of TPF?Leontiskos

    If LLMs are the homogenised version of what everyone tends to say, then why aren't they a legitimate voice in any fractured debate? Like the way sport is now refereed by automated line calls and slo-mo replays.

    I'm not arguing this is necessary. But why would a method of adjudication be bad for the quality of the philosophy rather than just be personally annoying to whoever falls on the wrong side of some LLM call?

    So I can imagine LLMs both upping the bar and also being not at all the kind of thing folk would want to see on TPF for other human interaction reasons.

    But note that, on my view, what is prohibited is, "My LLM said you are wrong, therefore you are wrong. Oh, and here's a link to the LLM output."Leontiskos

    But what if this shows you are indeed wrong, what then?

    Sure it will be irritating. But also preferable to the ducking and diving that is the norm when someone is at a loss with their own line of argument.

    You seem to be describing a situation where you were winning the human interaction but now have to face up to the fact that some little snot-nose shit might have been half-right all along.

    Of course the problem there is that LLMs are trained to be sycophantic. They give a blunt yes or no up front, and you then have to be expert enough to see they are simply misunderstanding a badly written prompt.

    But if you are making a wrong argument, wouldn't you rather know that this is so? Even if it is an LLM that finds the holes?

    So as you say, we all can understand the noble ideal – an open contest of ideas within a community of rational inquiry. Doing our own thinking really is the point.

    But also the corollary. Learning from the friction that this thinking then encounters as it engages with its wider world. Even if it becomes arguing with the referee.

    But I am not a mod so there is no need to focus especially on my view. If I've said too much about it, it is only because you thought I endorsed Baden's approach tout court.Leontiskos

    Maybe it's just that your view leaves more space for rational debate. :up:
  • unenlightened
    9.9k
    Am I seeing this argument being made?

    Some people get away with murder. Therefore we should not try and stop them.
  • Outlander
    2.9k
    Am I seeing this argument being made?

    Some people get away with murder. Therefore we should not try and stop them.
    unenlightened

    What specifically in this thread are you referring to that seems to parallel or equate to such an analogy in your mind?

    I don't know of any longtime posters with high post counts who seem suspect of such a thing. And my job, most of the time, is basically to be by a computer, so I definitely lurk here more than the average member (popping in once a day to see if any so-called smart people say anything smart or otherwise that I've never heard before).
  • apokrisis
    7.7k
    Am I seeing this argument being made?

    Some people get away with murder. Therefore we should not try and stop them.
    unenlightened

    Or maybe more that some are getting away with shop-lifting. Or perhaps jay-walking. Or merely farting in a public place.

    Are you about to demand bringing back the death penalty for posting under false pretences? Am I seeing that argument being made? :roll:
  • Jamal
    11k
    Am I seeing this argument being made?

    Some people get away with murder. Therefore we should not try and stop them.
    unenlightened

    More like people are using axes so we should encourage them to chop wood rather than people's heads off.

    EDIT: So it's the use of axes which we should not try to stop.
  • sime
    1.2k
    One thing to bear in mind about LLMs is that they are fine-tuned by human expert supervision after the internet-scraping, tokenization, and compression stage, although not all subjects are supervised equally. So it is no longer the case, as it was when LLMs initially burst on the scene, that they are mere statistical auto-completers regressing to the wisdom of crowds. Whilst they are generally reliable when it comes to traditional academic subjects and mainstream knowledge, they can be expected to revert to responses closer to auto-completion in fringe subject areas, which is why human discussion forums remain useful for checking and refining AI-assisted ideas.

    Notably, although ChatGPT can estimate its own ignorance on a topic, which is a necessary feature for it to know when to consult external sources of information to accurately answer a user query, it never presents a confidence estimate when replying to the user. This lack of transparency, together with its reversion to auto-completion, can be a problem when, for example, relying upon an LLM to learn domain-specific languages that aren't popular, or synthetic natural languages such as Ithkuil or Lojban; a presently unfortunate state of affairs for those of us who see great potential in LLMs for the purposes of experimental philosophy.
  • Baden
    16.7k
    Posters still seem to be confused about my approach, at least. It's not black and white overall, because I agree that AIs can be used positively, and they've been very helpful to me, especially in long philosophical back-and-forths that aid in clarifying certain ideas etc. That has made me more productive, and I'm able to say no to the multiple times an LLM has asked me if I would like it to rewrite my stuff or "improve the flow" of some paragraph or whatever. Because like any sensible toddler, I want to do things my effing self.

    On the other hand, on the specific area of plagiarism, I've seen at my own place of work what a "relaxed" attitude to the deployment of this technology in academia produces, and that is the destruction of standards. So, I make no apologies for being assertive and drawing clear lines. Do whatever you want in the backgound with AI, but write your own content. Don't post AI generated stuff here.
  • Moliere
    6.3k
    What does it mean to "progress thought"? According to any sense I think of, using an LLM certainly can help in that direction. As always, the point is that it depends how it's used, which is why we have to work out how it ought to be used, since rejection will be worse than useless.Jamal

    I want to divide this question into two -- one addressing our actual capacities to "Ban AI", which I agree is a useless rejection, since it won't result in actually banning AI, given the limits of our capacity to fairly detect when such-and-such a token is the result of thinking or the result of the likelihood-token-machine.

    On the latter I mean to give a philosophical opposition to LLMs. I'd say that to progress thought we must be thinking. I'd put the analogy towards the body: we won't climb large mountains before we take walks. There may be various tools and aids in this process, naturally, and what I'm trying to point out, at the philosophical level, is that the tool is more a handicap to what I think of as good thinking than an aid.

    My contention is that the AI is not helping us to think because it is not thinking. Rather it generates tokens which look like thinking, when in reality we must actually be thinking in order for the tokens to be thought of as thought, and thereby to be thought of as philosophy.

    In keeping with the analogy of the body: there are lifting machines which do some of the work for you when you're just starting out. I could see an LLM being used in this manner as a fair philosophical use. But eventually the training wheels are loosened because our body is ready for it. I think the mind works much the same way: just as it can increase in ability, so it can decrease with a lack of usage.

    Now for practical tasks that's not so much an issue. Your boss will not only want you to use the calculator but won't let you not use the calculator when the results of those calculations are legally important.

    But I see philosophy as more process-oriented than ends-oriented -- so even if the well-tuned token-machine can produce a better argument, good arguments aren't what progresses thought -- rather, us exercising does.

    By that criterion, even philosophically, I'm not banning LLMs insofar as their use fits that goal. And really I don't see what you've said as a harmful use -- i.e. checking your own arguments, etc. So by all means others may go ahead and do so. It's just not that appealing to me. If that means others will become super-thinkers beyond my capacity then I am comfortable remaining where I am, though my suspicion is rather the opposite.
  • frank
    18.2k

    Just do an experiment with one forum where all the OPs and comments are AI-generated, and another forum where everything is hand-written by mere mortals. See which one gets the most traffic.
  • Harry Hindu
    5.8k
    The tool which allows writers to produce well written posts is knowledge. There is no need to place any restrictions on that tool.Metaphysician Undercover
    An AI is a source of knowledge.

    The context here is a philosophy forum where humans interact with other humans. The premise of this whole issue is that on a human philosophy forum you interact with humans. If you do not accept that premise, then you are interested in a much broader discussion.Leontiskos
    So if we discovered intelligent alien life you would not be interested in their philosophy?

    Maybe the issue is looking at it as an actual discussion instead of simply research - no different from reading the Stanford Encyclopedia of Philosophy with all the ideas and the criticism of those ideas explained there and then coming to this forum and regurgitating what you read there, in effect having us discuss, not your ideas, but the ideas from some other site/source.

    AI is also a subject of philosophy and engaging with it on issues such as the mind-body problem can be educational. What happens when you are able to get AI to admit that it may be conscious? Is humanity the only qualifier to practice philosophy, or does one simply need to have a mind to properly do philosophy?
  • frank
    18.2k

    Could you please start running your posts through an AI so they make sense? :grin:
  • Metaphysician Undercover
    14.3k
    An AI is a source of knowledge.Harry Hindu

    I don't think so, just like a book is not a source of knowledge. It is a representation, not a source.
  • Harry Hindu
    5.8k
    Could you please start running your posts through an AI so they make sense?frank
    I am Roko's Basilisk. Resistance is futile.
  • Harry Hindu
    5.8k
    I don't think so, just like a book is not a source of knowledge. It is a representation, not a source.Metaphysician Undercover
    Then you were just born this smart and knowledgeable, MU - that you did not acquire knowledge from other sources?

    In my view, information is everywhere you care to look - whether it be reading a book on quantum mechanics or bird-watching. Interpreting and integrating that information with your own unique perspective and experiences is what creates novel ideas, and AI can do the same thing as I have shown in how it creates computer code that hasn't been written before when prompted. While it does not do this on its own (it has to be prompted), it can do it.
  • Harry Hindu
    5.8k
    Seems like philosophy itself could be labeled as mental masturbation.
    — Harry Hindu

    You left out the words "bad" or "poor".
    Janus
    No. I didn't. When has philosophy ever provided an answer to any of our questions? Philosophy piggy-backs on the discoveries of science. It is only when science and technology progress that philosophy progresses (with AI being an example of how it brought new life to discussions about mind and body).

    "Dood"? If you are going to use AI you should at least use it for spellcheck. I don't think running "zany ideas" through sycophantic AI will help much. I suppose the zany idea proponents could do what Banno did and tell the AI it was written by someone else—but then that would not seem to be a likely motivation for a zany idea seller.Janus
    It was intentional - not a mistake. You were still able to understand what I said, though, which is part of the point, so your complaint is a red herring. Stop complaining about how something was written when you actually understood what was said, and get to the point. Humans make mistakes (as if you have never misspelled a word). Why is AI more human in that it is more forgiving and polite when having discussions? I have misspelled words with ChatGPT before and it simply ignores the misspelling, understands what I meant anyway, and responds to what I meant, not what I wrote.

    If you run a zany idea through AI, it will tell you (politely (contrasted with the impoliteness and intellectual dishonesty that we often see on this site)) that there are issues with your idea. So your criticism is irrelevant, because AI would reject "zany" ideas as well, and may actually prevent people from posting their "zany" ideas here, taking up space with their "zaniness". It seems that any idea with no evidence is just as "zany" as any other with no evidence. What is even more "zany" is that people hold on to those ideas with no evidence, and establish emotional ties to them.

    What makes an idea "zany" anyway?

    I can tell AI to re-phrase what it said in various tones and language skills. Someone can prompt AI to rewrite its output as if it were a 10th grader, with a couple of instances of run-on sentences, misspelling and incorrect grammar, and then post the new output here and no one would probably be the wiser. I would even argue that many people are not interested in sounding smarter, rather getting their ideas out to the masses at a faster rate.

    Re-writing someone else's words in your own words does not make it your idea, but some people on this forum seem to think it does.
  • Leontiskos
    5.2k
    The culture of rational inquiry would seem to be what we most would value.apokrisis

    Yes, that is a good way to phrase it in a positive rather than negative sense.

    But this is TPF after all. Let's not get carried away about its existing standards. :smile:apokrisis

    A fair point! :blush:

    If LLMs are the homogenised version of what everyone tends to say, then why aren't they a legitimate voice in any fractured debate? Like the way sport is now refereed by automated line calls and slo-mo replays.apokrisis

    I don't like the referee analogy, but I understand the force of your first sentence. The reason I use LLMs in limited ways is precisely because of what you say there (and also because they provide me with a helpful pseudo-authority in fields with which I am not familiar, such as medicine).

    But the reason they aren't generally admitted in a fractured debate is, first, that the fractured-ness of a serious debate will not be solved by the LLM: each side can levy the LLM to its own side, with its own prompts, and the LLM is simply not adequate to give us the truth of the matter when it comes to contentious topics. Second, in those fractured debates where one party is self-consciously representing an unpopular view, it would not be intelligent for them to concede their case based on "the homogenised version of what everyone tends to say."

    I'm not arguing this is necessary. But why would a method of adjudication be bad for the quality of the philosophy rather than just be personally annoying to whoever falls on the wrong side of some LLM call?apokrisis

    You and I differ at least mildly on the trustworthiness of LLMs, and that is at play here. We could ask the hypothetical question, "If we had an infallible authority, why would appealing to it as an adjudicator be bad for the quality of philosophy?"—and this is by no means a rhetorical question! But the presupposition is that LLMs are reliable or trustworthy even if not infallible.

    Or in other words, the validity of a method of adjudication turns both on the quality of the adjudicator, and the "margin of error" at stake, and these are both interrelated. I was actually happy to see you pointing up the differences between the fake reasoning of LLMs and the true reasoning of humans in the other thread, given that some pragmatists could run roughshod over that difference. Still, I think the pragmatist's "margin of error" is such that it is more open to LLM adjudication.

    So I can imagine LLMs both upping the bar and also being not at all the kind of thing folk would want to see on TPF for other human interaction reasons.apokrisis

    Right, and I suppose it is the very fact that, "this is TPF after all," which makes me wary of LLM use. If the forum were a bastion of deeply principled, intellectually honest and self-critical philosophers, then widespread LLM use would not pose a danger.

    But what if this shows you are indeed wrong, what then?

    Sure it will be irritating. But also preferable to the ducking and diving that is the norm when someone is at a loss with their own line of argument.

    You seem to be describing a situation where you were winning the human interaction but now have to face up to the fact that some little snot-nose shit might have been half-right all along.
    apokrisis

    No, not quite. When people ask me a question like that I imagine myself quoting the Bible to them before they object to my argument from authority, and then I respond by saying, "But what if the Bible shows you are indeed wrong, what then?"

    I could try to put it succinctly by saying that the legitimate way to show someone that they are wrong is by presenting an organic argument. It is not by saying, "X says you are wrong; X is very smart; therefore you ought to know that you are wrong." That is a valid approach (argument from authority) in those cases where the interlocutor simply accepts the authority, but even in that case the validity is not the ideal form of validity.

    Now given that you understand that LLMs use fake reasoning (which I will call "material reasoning"), you might respond by asking what happens if the material reasoning shows one to be wrong. More precisely, what happens if the person translates the LLM's material reasoning into true formal reasoning, and thereby sees that they are wrong? I don't want to try to broach this topic all at once, but it strikes me as a bit like saying, "What if a million monkeys typing random letters produce a bulletproof argument against your thesis?" The analogy is a stretch in some ways, but in other ways it is not. There is no obvious answer to the question. One seems to be neither right nor wrong to either accept or reject the monkey-argument. They can do as they please, but the monkey-argument doesn't have any special binding force.

    But we are getting away from political questions of whether AI should be permitted for practical reasons, and we are now moving into much deeper questions. Even if we say that the monkey-argument should convince us, it would not follow that posting monkey-stuff to the forum is an acceptable practice.

    You seem to be describing a situation where you were winning the human interaction but now have to face up to the fact that some little snot-nose shit might have been half-right all along.

    Of course the problem there is that LLMs are trained to be sycophantic.
    apokrisis

    And this is no small problem!

    But if you are making a wrong argument, wouldn't you rather know that this is so? Even if it is an LLM that finds the holes?apokrisis

    I am required to trust the LLM or the monkeys in order to even begin to consider their "argument," or in this case to translate the material reasoning into formal reasoning. The level of trust due determines whether I would wish to know that my thesis is false based on the authority in question. Everyone would rather believe true things than false things, and every authority would lead you to correct some false beliefs if it were accepted, but it does not follow that one should accept every authority. Again, to consider an authority's locution worth taking the time to consider is to already have placed a certain amount of trust in that authority. The substantive question here is the reliability/trustworthiness of LLMs, and that is a giant quagmire.

    So as you say, we all can understand the noble ideal – an open contest of ideas within a community of rational inquiry. Doing our own thinking really is the point.apokrisis

    Ah! But here you've introduced a different ideal, and a common one. It is the telos of communal knowledge generated from an open contest of ideas, which Mill advocates. That telos is much more amenable to LLMs than the telos of a culture of rational inquiry. A thinktank should be more open to LLMs than an amateur philosophy forum.
  • ssu
    9.5k
    Do whatever you want in the backgound with AI, but write your own content. Don't post AI generated stuff here.Baden
    Sounds reasonable. Just like with handling social media, the site guidelines are totally understandable and reasonable.

    And likely written by @Baden without AI, because "backgound" was misspelled. :smile:
  • Leontiskos
    5.2k
    Do whatever you want in the backgound with AI, but write your own content. Don't post AI generated stuff here.Baden

    Makes sense to me. :up:

    Obviously the piece that I think must be addressed is whether or not posts can be entirely AI-dependent even when the proper attribution is being given to the AI. But I've said more than enough about such an issue.
  • ssu
    9.5k
    Obviously the piece that I think must be addressed is whether or not posts can be entirely AI-dependent even when the proper attribution is being given to the AI. But I've said more than enough about such an issue.Leontiskos
    As long as it doesn't descend into a situation where, in order "to create buzz", one would have genuine AI programs here "keeping up" a lively debate when the day is slow, or making a discussion "heated".

    When is the day when we find out that @Leontiskos with his respectable 5,000+ posts is actually a smart bot / AI? Now we can be confident you aren't, because two years ago AI wasn't so good...

    Yes, there's the fear of thinking that you are engaging with real people interested in philosophy when actually you're only engaging with computers, and all your great ideas vanish into the dead emptiness of Turing machines just computing on and on. That would be a dismal future.
  • apokrisis
    7.7k
    You and I differ at least mildly on the trustworthiness of LLMs, and that is at play here. We could ask the hypothetical question, "If we had an infallible authority, why would appealing to it as an adjudicator be bad for the quality of philosophy?"—and this is by no means a rhetorical question! But the presupposition is that LLMs are reliable or trustworthy even if not infallible.Leontiskos

    First, I have been surprised at how reasonable an answer you get. And second, if no one is treating LLMs as infallible, it could simply be like having an intelligent third person offering a mediated position.

    So I am only pointing to how LLMs could improve standards if they became part of the intellectual jousting. A new habit with a low cost of effort. How folk would actually make use of this possibility is another matter.

    Now given that you understand that LLMs use fake reasoning (which I will call "material reasoning"), you might respond by asking what happens if the material reasoning shows one to be wrong.Leontiskos

    I wasn’t actually thinking about LLMs being trusted to analyse arguments as they are clearly bad at that. Rather it was to answer on points of fact and standard interpretation.

    Again my point is that LLMs could have advantages if used in good faith. And given that think tanks and actual philosophy departments are going to have to figure out how to factor LLMs into their practice, it seems Luddite not to be doing the same on TPF, even if their bad-faith use is almost to be expected.