• Leontiskos
    5.2k
    So in this case the LLM carried out the tedious part of the task;Jamal

    But is your argument sound? If you have a group of people argue over a topic and then you appoint a person to summarize the arguments and produce a working document that will be the basis for further discussion, you haven't given them a "calculator" job. You have given them the most important job of all. You have asked them to draft the committee document, which is almost certainly the most crucial point in the process. Yet you have re-construed this as "a calculator job to avoid tedium." This is what always seems to happen with LLMs. People use them in substantial ways and then downplay the ways in which they are using them. In cases such as these one seems to prefer outsourcing to a "neutral source" so as to avoid the natural controversy which always attends such a draft.

    It didn't occur to me that anyone would interpret those guidelines as suggesting that posts written by people who are using AI tools are generally superior to those written by people who don't use AI,Jamal

    It could have been made more irenically, but @bongo fury's basic point seems uncontroversial. You said:

    We encourage using LLMs as assistants for research, brainstorming, and editing. — Deepseek

    To say, "We encourage X," is to encourage X. It is not to say, "If you are doing Y, then we would encourage you to do Y in X manner." To say "allow" or "permit" instead of "encourage" would make a large difference.
  • baker
    5.8k
    Some of us might be disposed to reject some readings as out-and-out false. But if we do that, our search for the ‘true’ interpretation may incline us to shape our prompts away from a variety of readings and toward tunnel vision.

    Apart from our biases, our lack of exposure to certain influences on a philosopher can limit the range of prompts we can think of.
    Joshs
    Are students at schools nowadays, at any level, actually encouraged to have their own opinion about philosophers?
    Are they encouraged to think in terms that there may be several valuable interpretations?

    Back when I went to school, we weren't expected to have our own opinion about anything, and there was this belief that there was only one true way to understand something.

    Most people I know, including Americans, think this way: there is only one true way to understand something. An "interpretation" is something that needs to be overcome. "I don't interpret, I don't take a perspective, I tell it like it is" goes the maxim.


    I'm getting at a more fundamental issue here: If people generally think this way, their use of AI is only going to strengthen them in their single-mindedness.
  • Joshs
    6.5k

    But, of course, that means each of us will prefer certain reading soccer others.
    — Joshs

    How did this come to be?
    Are you using a voice-to-text app?

    Hold on. Are you an AI?
    baker

    The worst of it is I don't remember what I was trying to say.
  • Leontiskos
    5.2k
    This is the sort of appeal-to-LLM-authority that I find disconcerting, where one usually does not recognize that they have appealed to the AI's authority at all.Leontiskos

    All of which makes using AI for philosophy, on one level, like using any one else’s words besides your own to do philosophy.Fire Ologist

    So if you use someone else's words to do philosophy, you are usually appealing to them as an authority. The same thing is happening with LLMs. This will be true whether or not we see LLMs as a tool. I got into some of this in the following and the posts related to it:

    This becomes rather subtle, but what I find is that people who tell themselves that they are merely using AI to generate candidate theories which they then assess the validity of in a posterior manner, are failing to understand their own interaction with AI. They are failing to appreciate the trust they place in AI to generate viable candidate theories, for example. But they also tend to ignore the fact that they are very often taking AI at its word.Leontiskos

    -

    Further, it makes no sense to give AI the type of authority that would settle a dispute, such as: “you say X is true - I say Y is true; but because AI says Y is true, I am right and you are wrong.” Just because AI spits out a useful turn of phrase and says something you happen to agree is true, that doesn’t add any authority to your position.Fire Ologist

    I tend to agree, but I don't think anyone who uses AI is capable of using it in that authority-free way (including myself). If one did not think AI added authority to a position then one wouldn't use it at all.

    The presence and influence of AI in a particular writing needs to never be hidden from the reader.Fire Ologist

    I would argue that the presence and influence of AI is always hidden from us in some ways, given that we don't really know what we are doing when we consult it.

    You need to be able to make AI-generated knowledge your own, just as you make anything you know your own.Fire Ologist

    LLMs are sui generis. They have no precedent, and that's the difficulty. What this means is that your phrase, "just as you make anything you know your own," creates a false equivalence. It presumes that artificial intelligence is not artificial, and is on par with all previous forms of intelligence. This is the petitio principii that @Banno and others engage in constantly. For example:

    Unlike handing it to a human editor, which is what authors have been doing for yonks?
    — SophistiCat

    Nah. You are engaging in the same basic equivocation between a human and an AI. The whole point is that interacting with humans is different from interacting with AI, and the two should not be conflated. You've begged the question in a pretty basic manner, namely by implying that interacting with a human-and-human duo (author and editor) is the same as interacting with a human-and-AI duo.
    Leontiskos

    Given all of this, it would seem that @bongo fury's absolutist stance is in some ways the most coherent and intellectually rigorous, even though I realize that TPF will probably not go that route, and should not go that route if there are large disagreements at stake.
  • Leontiskos
    5.2k
    I've mentioned this in the mod forum, so I'll mention it here too. I disagree with diluting the guidelines. I think we have an opportunity to be exceptional on the web in keeping this place as clean of AI-written content as possible. And given that the culture is veering more and more towards letting AI do everything, we are likely over time to be drowned in this stuff unless we assertively and straightforwardly set enforceable limitations. That is, I don't see any reward from being less strict that balances the risk of throwing away what makes us special and what, in the future, will be even rarer than it is now, i.e. a purely human online community.

    The idea that we should keep up with the times to keep up with the times isn't convincing. Technocapitalism is definitive of the times we're in now, and it's a system that is not particularly friendly to human creativity and freedom. But you don't even have to agree with that to agree with me, only recognize that if we don't draw a clear line, there will effectively be no line.
    Baden

    :up: :fire: :up:

    I couldn't agree more, and I can't help but think that you are something like the prophet whose word of warning will inevitably go unheeded—as always happens for pragmatic reasons.

    Relatedly:

    It seems to me difficult to argue against the point, made in the OP, that since LLMs are going to be used, we have to work out how to use them well...Jamal

    Why does it matter that LLMs are going to be used? What if there were a blanket rule, "No part of a post may be AI-written, and AI references are not permitted"? The second part requires that someone who is making use of AI find—and hopefully understand—the primary human sources that the AI is relying on in order to make the salutary reference they wish to make.

    The curious ignoratio elenchi that @Banno wishes to rely on is, "A rule against AI use will not be heeded, therefore it should not be made." Is there any force to such an argument? Suppose someone writes all of their posts with LLMs. If they are found out, they are banned. But suppose they are not found out. Does it follow that the rule has failed? Not in the least. Everyone on the forum is assuming that all of the posts are human-written and human-reasoned, and the culture of the forum will track this assumption. Most of the posts will be human-written and human-reasoned. The fact that someone might transgress the rule doesn't really matter. Furthermore, the culture that such a rule helps establish will be organically opposed to these sorts of superficial AI-appeals. Someone attempting to rely on LLMs in that cultural atmosphere will in no way prosper. If they keep pressing the LLM-button to respond to each reply of increasing complexity, they will quickly be found out as a silly copy-and-paster. The idea that it would be easy to overtly flout that cultural stricture is entirely unreasonable, and there is no significant motive for someone to rely on LLMs in that environment. It is parallel to the person who uses chess AI to win online chess games, for no monetary benefit and to the detriment of their chess skills and their love of chess.

    Similarly, a classroom rule against cheating could be opposed on @Banno's same basis: kids will cheat either way, so why bother? But the culture which stigmatizes cheating and values honest work is itself a bulwark against cheating, and both the rule and the culture make it much harder for the cheater to prosper. Furthermore, even if the rule cannot be enforced with perfection, the cheater is primarily hurting themselves and not others. We might even say that the rule is not there to protect cheaters from themselves. It is there to ensure that those who want an education can receive one.

    that will lead people to hide their use of it generally.Jamal

    Would that be a bad thing? To cause someone to hide an unwanted behavior is to disincentivize that behavior. It also gives such people a string to pull on to understand why the thing is discouraged.
  • Leontiskos
    5.2k
    The arguments are similar to what I see here. "AI is inevitable and therefore" etc. Some teachers---the good ones---are appalled.Baden

    I think it goes back to telos:

    I think it would be helpful to continue to reflect on what is disagreeable about AI use and why. For example, if we don't know why we want to engage in human communication rather than non-human communication, then prohibitions based on that axiom will become opaque.Leontiskos

    What is the end/telos? Of a university? Of a philosophy forum?

    Universities have in some ways become engines for economic and technological progress. If that is the end of the university, and if AI is conducive to that end, then there is no reason to prevent students from using AI. In that case a large part of what it means to be "a good student" will be "a student who knows how to use AI well," and perhaps the economically-driven university is satisfied with that.

    But liberal education in the traditional sense is not a servant to the economy. It is liberal; free from such servility. It is meant to educate the human being qua human being, and philosophy has always been a central part of that.

    Think of it this way. If someone comes to TPF and manages to discreetly use AI to look smart, to win arguments, to satisfy their ego, then perhaps, "They have their reward." They are using philosophy and TPF to get something that is not actually in accord with the nature of philosophy. They are the person Socrates criticizes for being obsessed with cosmetics rather than gymnastics; who wants their body to look healthy without being healthy.

    The argument, "It's inevitable, therefore we need to get on board," looks something like, "The cosmetics-folk are coming, therefore we'd better aid and abet them." I don't see why it is inevitable that every sphere of human life must substitute human thinking for machine "thinking." If AI is really inevitable, then why oppose it at all? Why even bother with the half-rules? It seems to me that philosophy arenas such as TPF should be precisely the places where that "inevitability" is checked. There will be no shortage of people looking for refuge from a cosmetic culture.

    Coming back to the point, if the telos of TPF is contrary to LLM-use, then LLMs should be discouraged. If the telos of TPF is helped by LLM-use, then LLMs should be encouraged. The vastness and power of the technology makes a neutral stance impossible. But the key question is this: What is the telos of TPF?
  • Banno
    28.9k
    Treating an AI as authoritative in a debate would be an error. That's not what AI is useful for.
  • Banno
    28.9k
    it's clear that the strongest objection is aesthetic.
    — Banno

    I'm seeing the opposite.
    bongo fury
    Then I've not followed your argument here. I took you to be pointing out that the difference between a genuine masterpiece and a forgery - an aesthetic difference - was the authenticity of the masterpiece.

    An aesthetic difference because, given two identical artefacts, the authentic artefact is to be preferred. Hence, given two identical texts, one human generated, the other AI generated, the human generated one is preferable, on aesthetic grounds.

    Now I think that argument is sound.

    But it's not what you were saying?
  • Banno
    28.9k
    And the only thing that we can practically control here is what shows up on our site. If it looks AI generated, we ought investigate and delete as necessary. Our goal imo should be that a hypothetical AI checker sweeping our site should come up with the result "written by humans". AI content ought ideally be zero.Baden

    You say "If it looks AI generated, we ought investigate and delete as necessary"; the "we" here is you and the other mods. But of course they can't tell what is AI generated and what isn't. That hypothetical AI checker does not work. Further, mixed authorship is now the norm. You yourself say you are using AI in research.

    It would be much preferred to have the mods spend their time removing poor posts, AI generated or not, rather than playing a losing war of catch-up against Claude.
  • baker
    5.8k
    There goes your use of AI! Heh.

    Given the sense of your sentence, it should probably be "over" instead of "soccer".
  • Banno
    28.9k
    Thanks for providing the prompt.

    I think the most intellectually honest way of working with a.i. in interpreting philosophical texts is to strive to produce prompts which cover as wide a variety of readings as possible.Joshs
    That might be a partial answer, and should be a result of the protocol set out earlier in this thread. What you describe was earlier called "sandbagging". I think the best defence we have against it is not a ban on using AI, but an open discussion in which others can point to the sandbags.

    The remedy for the absence of the Nietzsche-Deleuze connection is not found in rejecting AI, but in seeking your input into the discussion.


    My guess is that your finger was a bit to the left on the "V", you typed "ocer" instead of "over" and it was autocorrected.
  • Joshs
    6.5k
    My guess is that your finger was a bit to the left on the "V", you typed "ocer" instead of "over" and it was autocorrected.Banno

    I write most of my forum posts on an iphone while hiking. Not conducive to accurate spelling.
  • baker
    5.8k
    What is the telos of TPF?Leontiskos

    A pissing contest, combined with quasi-efforts at healing existential anxiety.

    Even the serious folks here aren't all that serious, or at least the serious ones aren't serious enough about posting much.
  • baker
    5.8k
    I write most of my forum posts on an iphone while hiking.Joshs

    You hike a lot!
  • Joshs
    6.5k
    You hike a lot!baker

    7 days a week, averaging 10 miles a day
  • baker
    5.8k
    Why??
    I mean, why not focus on one thing at a time?
    It mars the hike to do something else while on the hike.
  • Banno
    28.9k
    The curious ignoratio elenchi that Banno wishes to rely on is, "A rule against AI use will not be heeded, therefore it should not be made."Leontiskos

    I make a point of not reading Leon's posts, but this drew itself to my attention as a direct reply. I've learned that he confabulates the arguments of others so as to suit his purposes. Here is a case in point. I have not made the argument he here attributes to me. I have, amongst other things, pointed out that a rule against AI cannot be reliably enforced, which is quite different.

    Over and above all that, there is the theme of this thread, which is to explore ways in which AI might be used to improve the quality of the discussion.

    For those who think philosophy consists in a series of appeals to authority, AI must be quite confounding.
  • Joshs
    6.5k
    If the telos of TPF is helped by LLM-use, then LLMs should be encouraged. The vastness and power of the technology makes a neutral stance impossible. But the key question is this: What is the telos of TPF?

    …If someone comes to TPF and manages to discreetly use AI to look smart, to win arguments, to satisfy their ego, then perhaps, "They have their reward." They are using philosophy and TPF to get something that is not actually in accord with the nature of philosophy. They are the person Socrates criticizes for being obsessed with cosmetics rather than gymnastics; who wants their body to look healthy without being healthy.
    Leontiskos

    I tend to think that a very small percentage of those who use a.i. have that aim in mind. Can you think of a telos for this forum which includes a.i. but not in a way that needs to be characterized as ‘cosmetic’ or ‘machine-like’? I no more pay attention to the fact that I am using a machine when I consult a.i. than when I use the word-processing features of my iphone to type this. It’s not the machine I am beholden to when I expose myself to the ideas it delivers up, it’s the human thinkers it puts me in touch with. If you have ever been prompted to seek out relevant literature to aid in the composing of an OP, or your response to an OP, then your telos in consulting that textual material is the same as that of the many here who consult a.i. while engaging in TPF discussions.
  • Baden
    16.6k
    You yourself say you are using AI in research.Banno

    I use it to research not write the results of my research. I also use books to research and don't plagiarise from them.

    Been through this already.

    That hypothetical AI checker does not work.Banno

    Says who?

    It would be much preferred to have the mods spend their time removing poor posts, AI generated or not, rather than playing a losing war of catch-up against Claude.Banno

    Maybe. Maybe not. But I'll take heroic failure over cowardly capitulation.
  • Joshs
    6.5k
    ↪Joshs Why??
    I mean, why not focus on one thing at a time?
    It mars the hike to do something else while on the hike.
    baker

    You sound like my hiking friend. I used to do all my philosophy research and writing at home or in a library. But such things as unlimited cellular data, AirPods and PDF audio readers freed me to use the great outdoors as my library. I’ve always needed to pace in order to generate ideas, and I’m a lot more productive out here than cooped up facing 4 walls. Did you know Nietzsche composed his work while walking 7-10 miles a day? And Heidegger did his thinking walking around a farm in Freiburg.

    Aristotle: Associated with the term "peripatetic" for his habit of walking around while lecturing and thinking.

    Søren Kierkegaard: Believed walking was a way to find a state of well-being and walk away from burdens and illness, stating, "I have walked myself into my best thoughts".

    "Above all, do not lose your desire to walk: every day I walk myself into a state of well-being and walk away from every illness; I have walked myself into my best thoughts, and I know of no thought so burdensome that one cannot walk away from it. Even if one were to walk for one's health and it were constantly one station ahead-I would still say: Walk!
    Besides, it is also apparent that in walking one constantly gets as close to well-being as possible, even if one does not quite reach it—but by sitting still, and the more one sits still, the closer one comes to feeling ill. Health and salvation can be found only in motion... if one just keeps on walking, everything will be all right."

    Friedrich Nietzsche: A dedicated walker who believed thoughts not formed while walking were less trustworthy. He spent significant time hiking in the Swiss mountains to write and think, finding that walking facilitated his thought process.

    Henry David Thoreau: Argued that walking in nature, even enduring discomfort like getting dirty or tired, builds toughness of character that makes one more resilient to future hardships.

    Jean-Jacques Rousseau: Used walking as a way to think, particularly during solitary mountain walks.

    Immanuel Kant: Had a very structured walking routine, marching through his hometown at the exact same time every day as a way to escape the compulsion of his own thoughts.
  • baker
    5.8k
    Oh, I get my "best ideas" while cooking and washing the dishes and when working in the garden. Nevertheless, this seems mostly just like "the churning of the mind", production of thought for the sake of production of thought.


    To say nothing of how dangerous it is to allow oneself to be distracted while out hiking.
  • Banno
    28.9k
    I use it to research not write the results of my research.Baden
    Do you use a quill?
  • Joshs
    6.5k


    To say nothing of how dangerous it is to allow oneself to be distracted while out hiking.baker

    Now you sound like my brother. Keep in mind I live in the Midwest, not the Rockies. There are no vicious or poisonous beasts here (except for Republicans), just small tracts of forest preserve with a road no more than a few minutes away.
  • ProtagoranSocratist
    68
    I can't comment on what's best for anyone else here, but I find the most productive way to use it is for very specific purposes, rather than generating a whole body of thought. If you need to verify something you or someone else is saying, that is appropriate, but don't use it to write an essay, as that could easily backfire (unless it's an experiment). You can also use it reasonably for creative innovation, even if it never gets off the ground.
  • Banno
    28.9k
    Says who?Baden
    With intended irony...

    Prompt: find peer-reviewed academic studies that show the effectiveness of any capacity to recognise AI generated text.

    The result.

    "...there is peer-reviewed evidence that both humans... and automated tools can sometimes detect AI-generated text above chance. Effectiveness is highly conditional. Measured accuracy is often only modest."

    So yes, I overstated my case. You may be able to recognise posts as AI generated at a slightly better rate than choosing at random.
  • bongo fury
    1.8k
    I use it to research not write the results of my research. I also use books to research and don't plagiarise from them.Baden

    Yep :100:

    And it's not like it's a rocket science distinction? Not a line that's hard to draw?

    (Some of us draw it further back... I prefer not to interact with the man in the Chinese room if I don't think he understands; but I suppose that's a matter of taste, and can imagine being persuaded. I'm more likely to be persuaded by those not apparently desensitized to the problem with plagiarism.)
  • Leontiskos
    5.2k
    I no more pay attention to the fact that I am using a machine when I consult a.i. than when I use the word-processing features of my iphone to type this.Joshs

    You wouldn't see this claim as involving false equivalence?

    If you have ever been prompted to seek out relevant literature to aid in the composing of an OP, or your response to an OP, then your telos in consulting that textual material is the same as that of the many here who consult a.i. while engaging in TPF discussions.Joshs

    No, not really. There are primary sources, there are secondary sources, there are search engines, and then there is the LLM. Consulting a secondary source and consulting an LLM are not the same thing.

    It is worth noting that those who keep arguing in favor of LLMs seem to need to make use of falsehoods, and especially false equivalences.

    ---

    A pissing contest, combined with quasi-efforts at healing existential anxiety.baker

    Lol!

    ---

    Here is a case in point. I have not made the argument he here attributes to me. I have, amongst other things, pointed out that a rule against AI cannot be reliably enforced, which is quite different.Banno

    Which is the same thing, and of course the arguments I have given respond to this just as well. So you're quibbling, like you always do. Someone who is so indisposed to philosophy should probably not be creating threads instructing others how to do philosophy while at the same time contravening standing TPF rules.

    For those who think philosophy consists in a series of appeals to authority, AI must be quite confounding.Banno

    The sycophantic appeal-to-AI-authority you engage in is precisely the sort of thing that is opposed.
  • Banno
    28.9k
    I use it in this way, too, but make a point to guard against confabulation by asking for sources and checking them.
  • Leontiskos
    5.2k
    With intended irony...

    Prompt: find peer-reviewed academic studies that show the effectiveness of any capacity to recognise AI generated text.

    The result.

    "...there is peer-reviewed evidence that both humans... and automated tools can sometimes detect AI-generated text above chance. Effectiveness is highly conditional. Measured accuracy is often only modest."

    So yes, I overstated my case. You may be able to recognise posts as AI generated at a slightly better rate than choosing at random.
    Banno

    That's not irony. That's incoherent self-contradiction. It's also against the rules of TPF.
  • Banno
    28.9k
    So you cannot see the difference between "A rule against AI use will not be heeded" and "A rule against AI use cannot be enforced". Ok.

    It's also against the rules of TPF.Leontiskos
    @Baden? Tell us what you think. Is my reply to you against the rules? And should it be?