• Banno
    28.9k
    I've chatted with it about its application in the practice of law.Ciceronianus

    There's trouble there, too.
  • Ciceronianus
    3k

    In fairness I should note that I find it difficult to attribute any significance to questions regarding Being. So, naturally enough, Nothing means nothing to me.

    My reference was merely to the fact that the obscurity of H's work has prompted his admirers to, seemingly, compete with each other in providing explanations of it.
  • Ciceronianus
    3k

    Yes, I've heard about this sort of thing.
  • Banno
    28.9k
    ↪Ciceronianus, ↪Joshs next, consider this synthesis, from the following prompt:

    Take these two responses: https://chatgpt.com/share/68f2fce5-d428-800f-9eda-49adc3103d07 https://chatgpt.com/share/68f2e398-3fb8-800f-87cc-7a8e94ba48cc Wittgenstein is usually considered to be in the analytic tradition, but it would be a fairly direct task to synthesise these two links in a Wittgenstein fashion.

    Now that is perhaps something I think we could all work with.
  • Jamal
    11k


    It seems to me difficult to argue against the point, made in the OP, that since LLMs are going to be used, we have to work out how to use them well, precisely to address the concerns you have been expressing. That is, the best way of preventing the kind of things you don't like is to create a culture in which some uses of LLMs are stigmatized and others are encouraged. The stigmatization of all LLM use is likely to be counterproductive, since it will inhibit people from disclosing their use of it. You are more likely to be confident in reading a non-plagiarized post on a forum where it has become habitual not to pass off LLM-produced text as your own.

    Failing that, a clear statement of,

    We are not encouraging people to use it if they're not already. — Jamal

    ?
    bongo fury

    It didn't occur to me that anyone would interpret those guidelines as suggesting that posts written by people who are using AI tools are generally superior to those written by people who don't use AI, nor that they are suggesting to people that they take up AI if they don't already use it. But it's no problem to include a statement like "Don't use LLMs if you don't want to :-)"

    Isn't it a bit ironic to have AI write the AI rules for the forum? This is the sort of appeal-to-LLM-authority that I find disconcerting, where one usually does not recognize that they have appealed to the AI's authority at all. In this case one might think that by allowing revisions to be made to the AI's initial draft, or because the AI was asked to synthesize member contributions, one has not outsourced the basic thinking to the AI. This highlights why "responsible use" is so nebulous: because everyone gives themselves a pass whenever it is expedient.Leontiskos

    I was aware of the irony, yes. But doing it manually would have been very tedious, so I used the LLM as a labour-saving tool. It's true that the labour involved would have been mental labour, but is all mental labour valuable in itself? I suspect this is a prejudice. Calculators similarly save us from tedious labour which is mental. Maybe a thesaurus does too: maybe there was a time when members of the educated elite could quote verbatim long passages from classical literature, giving ready access to, and understanding of, a wealth of diverse vocabulary—but now we just look it up. Are we intellectually degraded? In some ways it seems so, but in other ways probably not: we can probably point to ways in which intellectual culture is superior now to a hundred years ago when they were quoting Virgil and Milton at length in the quadrangles. Intellectual labour has been redirected, away from memorization (which was always employed as a cultural marker as much as in honest intellectual enquiry or aesthetic pleasure) and towards higher-order tasks involving creativity and evaluation.

    So in this case the LLM carried out the tedious part of the task; we effectively did the creation, in the original discussion, and we are now doing the evaluation.
  • Fire Ologist
    1.7k
    It also depends on the prompt. Prompt engineering is a "thing", as the kids say.Banno

    That is interesting. And also makes sense, given AI is like a text calculator. The prompt feeds into the whole chain of events that one might call “AI doing philosophy” so to speak.

    This is the sort of appeal-to-LLM-authority that I find disconcerting, where one usually does not recognize that they have appealed to the AI's authority at all.Leontiskos

    I see AI as a tool. We can wonder about personhood and consciousness, but we can ignore that. It’s a tool that generates hypotheticals we can then evaluate, test and prove, and believe and adopt, or not. All of which makes using AI for philosophy, on one level, like using anyone else’s words besides your own to do philosophy.

    However, simultaneously, I agree that it would be disconcerting to let AI (or anyone/anything) be my authority without my consent. And AI is facilitating such recklessness and discord. The presence and influence of AI in a piece of writing must never be hidden from the reader.

    Further, it makes no sense to give AI the type of authority that would settle a dispute, such as: “you say X is true - I say Y is true; but because AI says Y is true, I am right and you are wrong.” Just because AI spits out a useful turn of phrase and says something you happen to agree is true, that doesn’t add any authority to your position.

    You need to be able to make AI-generated knowledge your own, just as you make anything you know your own. Making it your own is just another way of saying “understand it”. So I don’t care if AI is used verbatim with no changes (and find it fascinating when it seems to say something that can’t be improved on), because only when one can restate it in different words does one understand it.
  • bongo fury
    1.8k
    From what you and others have said, it's clear that the strongest objection is aesthetic.Banno

    I'm seeing the opposite. Google search is an unprepossessing but epistemically valuable (e.g. falsifiable) tool for connecting physically real and identifiable texts to their physically real and accountable authors and sources. The prettier AI summary presumes to cut out the middle man and connect up the ideas directly, as though they were physically real and identifiable.

    Good stuff.Banno

    Cheers - I wrote it myself :wink:
  • Baden
    16.6k
    I've mentioned this in the mod forum, so I'll mention it here too. I disagree with diluting the guidelines. I think we have an opportunity to be exceptional on the web in keeping this place as clean of AI-written content as possible. And given that the culture is veering more and more towards letting AI do everything, we are likely over time to be drowned in this stuff unless we assertively and straightforwardly set enforceable limitations. That is, I don't see any reward from being less strict that balances the risk of throwing away what makes us special and what, in the future, will be even rarer than it is now, i.e. a purely human online community.

    The idea that we should keep up with the times to keep up with the times isn't convincing. Technocapitalism is definitive of the times we're in now, and it's a system that is not particularly friendly to human creativity and freedom. But you don't even have to agree with that to agree with me, only recognize that if we don't draw a clear line, there will effectively be no line.
  • Baden
    16.6k
    This anecdote might help my case: At another department of the university where I work, the department heads in their efforts to "keep up with the times" are now allowing Master's students to use AI to directly write up to 40% of their theses. The arguments are similar to what I see here. "AI is inevitable and therefore" etc. Some teachers---the good ones---are appalled. Students who want to write their own theses will now be punished by being out-competed by AI-using peers. There will be an incentive to do less, think less, research less, and develop less.

    Worst of all, it's to the point of being almost unenforceable. How does one know a student has only written 40% using AI? Some students are likely to get away with writing the majority of their theses using it, and at that point it becomes almost a hopeless task to prevent a downward spiral.

    The department thinks it's very clever, "keeping up with the times", "acknowledging the new reality" etc etc. I don't. I think they are tying a noose for themselves and academic progress as a human quality. The proposal here is not so obviously dramatic but it will at least in my opinion push us in the wrong direction. And I see no pressing need for it.
  • Pierre-Normand
    2.8k
    This anecdote might help my case: At another department of the university where I work, the department heads in their efforts to "keep up with the times" are now allowing Master's students to use AI to directly write up to 40% of their theses.Baden

    On an optimistic note, those department heads may soon be laid off and replaced with AI administrators who will have the good sense to reverse this airheaded policy.
  • Baden
    16.6k
    (None of the above should be taken to mean that I am anti-AI tout court. AI has been exceptionally helpful to me in my own research. What I am against is anything that would lessen our ability to detect content that is directly AI written. The extent to which users are employing AI in the background and paraphrasing things is beyond our control, and, at least, in paraphrasing, some of the user's own personality is injected into the process. That is not so dissimilar from reading a book and using the knowledge from it. But copying directly from a book without citation is plagiarism, and copy-pasting posts whole or in part from AI without highlighting that is also plagiarism.)
  • Jamal
    11k


    I agree. The risk is that you or we do appear to be anti-LLM tout court, because that will lead people to hide their use of it generally.
  • Baden
    16.6k


    Yes, I see the danger of giving that impression.
  • frank
    18.1k
    This anecdote might help my case: At another department of the university where I work, the department heads in their efforts to "keep up with the times" are now allowing Master's students to use AI to directly write up to 40% of their theses.Baden

    How do they police that?
  • Outlander
    2.8k
    How do they police that?frank

    Sounds like a modern for-profit college. Daddy's money buys anything these days. At the cost of the future of society. But oh well, when Country XYZ who actually punishes fraud and rewards honest intellectualism invents the Super Nuke(TM), everyone I know or care about will be dead by then. So, why worry? Logic 101. Good luck, great grandchildren.
  • unenlightened
    9.9k
    How to use AI effectively to do philosophy.

    I have nothing to say about this topic.
  • bongo fury
    1.8k
    It didn't occur to me that anyone would interpret those guidelines as suggesting that posts written by people who are using AI tools are generally superior to those written by people who don't use AI,Jamal

    It hadn't occurred to me that I was ranting, until you showed that to be a plausible reading. I humbly (well, I hope not smugly) ask you to consider that many of your comments do indeed characterise the more proper use of AI as due diligence?
  • Jamal
    11k


    I'm eager to agree, but I'm not sure what you're asking.
  • bongo fury
    1.8k


    Due as in obligatory?
  • Baden
    16.6k
    How do they police that?frank

    I don't know. It's kind of like saying that you can steal 40% of the bank's money, but no more. At that point, the concept of policing has already sort of gone out the window.



    Surprisingly, it's part of a public government-funded university. Which makes it worse.
  • frank
    18.1k
    I don't know. It's kind of like saying that you can steal 40% of the bank's money, but no more.Baden

    I guess in an educational setting that makes sense, but if AI cures cancer, we'll all own AI as an extension of human creativity.

    Once it becomes that kind of tool, won't universities embrace it?
  • Outlander
    2.8k
    Surprisingly, it's part of a public government-funded university. Which makes it worse.Baden

    In the dark, a lit tea light is better than nothing I suppose. :grin:
  • Baden
    16.6k
    Once it becomes that kind of tool, won't universities embrace it?frank

    Well, it's already embraced for research and rightly so. But plagiarism generally isn't and shouldn't be.

    We ought not conflate the two things. I personally embrace AI for research and have had conversations amounting to hundreds of thousands of words with it, which have been very helpful. That's different from letting it write my posts for me.

    And the only thing that we can practically control here is what shows up on our site. If it looks AI generated, we ought to investigate and delete as necessary. Our goal imo should be that a hypothetical AI checker sweeping our site should come up with the result "written by humans". AI content ought ideally to be zero.
  • frank
    18.1k
    We ought not conflate the two things. I personally embrace AI for research and have had conversations amounting to hundreds of thousands of words with it, which have been very helpful. That's different from letting it write my posts for me.Baden

    I suppose so. I don't have any opinion one way or the other. :smile:
  • Baden
    16.6k


    Well, you sound like you, gratifyingly. AI don't make them typos. :party:
  • Joshs
    6.4k


    ↪Joshs Ok, but what was the prompt used? That's the topic here.Banno

    “the analytic tradition has historically responded, roughly from the 1920s onward, to heidegger’s phrase from what is metaphysics: the nothing nothings. I want you to produce a strong counter-critique of the analytic tradition’s critique of this phrase, mentioning the lack of an effective conceptual ground on which to situate the thinking behind it.”
  • Joshs
    6.4k
    ↪Ciceronianus, ↪Joshs next, consider this synthesis, from the following prompt:

    Take these two responses: https://chatgpt.com/share/68f2fce5-d428-800f-9eda-49adc3103d07 https://chatgpt.com/share/68f2e398-3fb8-800f-87cc-7a8e94ba48cc Wittgenstein is usually considered to be in the analytic tradition, but it would be a fairly direct task to synthesise these two links in a Wittgenstein fashion.

    Now that is perhaps something I think we could all work with
    Banno

    I think the most intellectually honest way of working with a.i. in interpreting philosophical texts is to strive to produce prompts which cover as wide a variety of readings as possible. That’s certainly something I could live with. But, of course, that means each of us will prefer certain reading soccer others. Some of us might be in modes to reject some readings as out and out false. But if we do that, our search for the ‘true’ interpretation may incline us to shape our prompts away from variety of readings and toward tunnel vision.

    Apart from our biases, our lack of exposure to certain influences on a philosopher can limit the range of prompts we can think of. For instance, your own prompts on the Nothing include poetic thinking and Gadamerian hermeneutics, but say nothing about the Nietzsche-Deleuze connection. They write about the historical connection between nothingness, negation, and lack. Whereas metaphysics up through Hegel, Marx, and Freud has treated negation and absence in a way which prioritizes presence and unification, post-Nietzschean thought treats negation affirmatively, as creative.
  • baker
    5.8k
    But, of course, that means each of us will prefer certain reading soccer others.Joshs

    How did this come to be?
    Are you using a voice-to-text app?

    Hold on. Are you an AI?