• Jamal
    11.1k
I for one think your proposals represent about the best we can do in the existing situation
Janus

    @Baden's "proposals" are just a restatement of the TPF status quo. But in my opinion, such is the ambiguity and confusion around this issue, we do need a more detailed set of guidelines.

    You can use an LLM to produce your replies for you, then put them in your own words for posting on TPF. We can't stop people from doing it, but I don't think it is ok. It's what some people in these conversations have described as allowing it to do your thinking for you.

    "So long as you don't copy and paste, use it how you like" is not an adequate guide to the use of LLMs here. That's why I'll be posting up suggested guidelines for discussion.

    EDIT: No disrespect to @Baden, btw. I mostly agree with him. I just think our policy is not as obvious as he has implied.
  • Janus
    17.6k
Okay, I had assumed that when @Baden said "don't get LLMs to do your writing for you", this would include paraphrasing LLM text. It's good that any ambiguity gets ironed out.

I had never used LLMs until today. I felt I should explore some interactions with them, so I have a better idea of what the experience is like. The idea of getting them to write content which I can then paraphrase, of having them polish my writing, or of using their arguments is anathema to me.
  • Jamal
    11.1k


Yeah, but it's ambiguous. I'd like to clarify it, and make it known that it's not ok to do certain things, even if it's impossible to enforce. Since people are using LLMs, this is best done in the context of a "How to use LLMs" guide.
  • Jamal
    11.1k
I had never used LLMs until today. I felt I should explore some interactions with them, so I have a better idea of what the experience is like. The idea of getting them to write content which I can then paraphrase, of having them polish my writing, or of using their arguments is anathema to me.
Janus

    Yeah, but on the other hand, it might not be so bad to use an argument suggested by an LLM, so long as you understand it. After all, we do this all the time reading papers and books. Philosophical discourse takes place in a context that the participants in the discourse should have access to, and maybe LLMs just make this easier?
  • Pierre-Normand
    2.8k
The idea of getting them to write content which I can then paraphrase, of having them polish my writing, or of using their arguments is anathema to me.
Janus

The three sorts of examples that you give lie on a spectrum.

I would also feel bad posting AI content as my own when I have merely paraphrased it, even if I understand it fully. (And I might even feel a bit ashamed disclosing it!)

    Using them to polish your writing could be good (or merely acceptable) or bad depending on the nature and depth of the polishing. Jamal's earlier comparison with using a thesaurus was apt. An AI could point out places where your wording is clumsy or misleading. If the wording that it suggests instead is one that you can make your own, that's very similar to having a human editor make the suggestion to you.

The idea of using their arguments is strange, since AIs never take ownership of them. If you've grasped the structure of the argument, checked the relevant sources to ensure it's sound in addition to being valid, and convinced yourself that it's cogent and perspicuous (that is, that it constitutes an apt framing of the problem), then the argument becomes one that you can make your own.
  • sime
    1.2k
    I think this is all a storm in a teacup. It is obvious etiquette to quote an AI response in the same way that one would quote a remark from a published author, and nobody should object to a quoted AI response that is relevant and useful to the context of the thread.

Also, for those of us who use AI to study subjective and controversial philosophical topics, it can be useful to read the AI responses that other people are getting on the same topic, because AI responses can be influenced by conversation history and can be biased towards the user's opinion. Community feedback can therefore help people objectively appraise the AI responses they are getting.
  • Harry Hindu
    5.8k
I disagree. When you are presented with something new and unprecedented, the source matters to you when assessing how to address the new unprecedented information. You hear "The planet Venus has 9 small moons." You think, "How did I not know that?" If the next thing you learned was that this came from a six-year-old kid, you might do one thing with the new fact of nine moons on Venus; if you learned it came from NASA, you might do something else; and if it came from AI, you might go to NASA to check.

Backgrounds, aims and norms are not irrelevant to determining what something is. They are part of the context out of which things emerge, and they shape what things in themselves are.

    We do not want to live in a world where it doesn’t matter to anyone where information comes from. Especially where AI is built to confuse the fact that it is a computer.
    Fire Ologist
You're essentially saying that the genetic fallacy is not a logical fallacy. It is, and it's a fallacy for a reason.

    Difficult as it may be, it is vitally important to separate argument sources and styles from argument content. In argument the medium is not the message.
    https://www.fallacyfiles.org/genefall.html


    Here's an idea:
If you don't like a post that contains content from AI - fine, and you can say so. But don't just let the message go unaddressed, because if you do, a human will simply make the same point (in their own words :wink: ), and then you won't be able to make the same argument about the source of the argument.

The whole thread is making a mountain out of a molehill.
  • Harry Hindu
    5.8k
    So, guys, I loaded this thread into AI for the solution to our quandary. Aside from the irony, who wants to know what it says?

If so, why? If not, why not? Who will admit that, if I don't share what it says, they will do it on their own? Why would you do it in private, but not in public? Shame? Feels like cheating? Curious as to what AI says about public versus private use? Why are you curious? Will you now ask AI why that distinction matters?

Will you follow AI's guidance on how to use AI while still preserving whatever it feels like we're losing?

    Do you feel like it's better that it arrived at its conclusions after reading our feedback? Will you take pride in seeing that your contributions are reflected in its conclusions?

    Feels like we need to matter, right?
    Hanover
    I don't know - maybe give us the information and let us decide for ourselves what we do with it - like everything else on this forum.

Whether we matter or not is irrelevant. The question is whether our personal, subjective views and preferences about sources matter when it comes to determining whether some statement is valid or fallacious.
  • Jamal
    11.1k
I think this is all a storm in a teacup. It is obvious etiquette to quote an AI response in the same way that one would quote a remark from a published author, and nobody should object to a quoted AI response that is relevant and useful to the context of the thread.
sime

It very much depends. We don't want to see people debating by proxy, offering quotes of LLM output in lieu of arguing the point themselves. It's another case of delegating thought. Anyone can test themselves against an LLM in their own time, and should otherwise quote primarily from published authors.

    But then you might say the problem is the same in both cases and has nothing to do with the source: fallacious argument from authority is bad in any case, never mind if it's AI. This is where I disagree. I believe we should not treat LLM quotes in the same way as those from published authors.

When you quote a published author you point to a node in a network of philosophical discourse, to a point on the line of a philosopher's evolution, and to a point in a body of work the self-consistency of which is a constant issue for that philosopher, making it relatively stable—all of which allows you to discuss what the philosopher meant. The source in this case is accountable and interpretable.

    This is not the case for an LLM. A quote from Kant invites engagement with Kant and the history of philosophy; a quote from an LLM's output invites ... what? Engagement with the user's prompt engineering skills?

I'm not saying that an LLM quote cannot be useful and relevant in a discussion, just that if we want to maintain good quality discussions, these should appear far less often than quotes from published authors. (Of course, it's fine if it was an LLM that led the user to that published source in the first place.)
  • Harry Hindu
    5.8k
When you quote a published author you point to a node in a network of philosophical discourse, to a point on the line of a philosopher's evolution, and to a point in a body of work the self-consistency of which is a constant issue for that philosopher, making it relatively stable—all of which allows you to discuss what the philosopher meant. The source in this case is accountable and interpretable.
Jamal
Kant is not alive to be accountable and to tell us what he meant, not to mention that if he were alive today and possessed the knowledge of today, what he said might be different.

Who was it that said that philosophy is like a language game? Isn't discussing philosophical topics, whether with AI or another human, a language game?

This is not the case for an LLM. A quote from Kant invites engagement with Kant and the history of philosophy; a quote from an LLM's output invites ... what? Engagement with the user's prompt engineering skills?
Jamal
    One might say that a quote from Kant invites engagement with the user's knowledge of what dead philosophers have said and that a quote from an LLM is more relevant because it is based on current knowledge.
  • Jamal
    11.1k
I want to divide this question into two -- one addressing our actual capacities to "Ban AI", which I agree is a useless rejection since it won't result in actually banning AI given our capacities to be fair and detect when such-and-such a token is the result of thinking, or the result of the likelihood-token-machine.
Moliere

Yeah, the idea of telling people not to use it at all is not a serious one, since at least half of us use it already, including most of the staff. But I think we should, in the context of a "How to use AI" guide, tell people what we don't want them to do, even if it's often impossible to detect people doing it.

On the latter I mean to give a philosophical opposition to LLMs. I'd say that to progress thought we must be thinking. I'd put the analogy in terms of the body: we won't climb large mountains before we take walks. There may be various tools and aids in this process, naturally, and that's what I'm trying to point out at the philosophical level: the tool is more a handicap to what I think of as good thinking than an aid.

My contention is that the AI is not helping us to think because it is not thinking. Rather, it generates tokens which look like thinking, when in reality we must actually be thinking in order for the tokens to be thought of as thought, and thereby to be thought of as philosophy.

In keeping with the analogy of the body: there are lifting machines which do some of the work for you when you're just starting out. I could see an LLM being used in this manner as a fair philosophical use. But eventually the training wheels are loosened because our body is ready for it. I think the mind works much the same way: and just as it can increase in ability, so it can decrease with a lack of usage.

    Now for practical tasks that's not so much an issue. Your boss will not only want you to use the calculator but won't let you not use the calculator when the results of those calculations are legally important.

But I see philosophy as more process-oriented than ends-oriented -- so even if the well-tuned token-machine can produce a better argument, good arguments aren't what progresses thought -- rather, our exercising does.

By that criterion, even philosophically, I'm not banning LLMs insofar as their use fits that goal. And really I don't see what you've said as a harmful use -- i.e. checking your own arguments, etc. So by all means others may go ahead and do so. It's just not that appealing to me. If that means others will become super-thinkers beyond my capacity, then I am comfortable remaining where I am, though my suspicion is rather the opposite.
    Moliere

    You make some great points here but—and I don't want to be reductive; it's just how I see it—this can all be addressed in a discussion of how best to use it. Also, I think I disagree with your training wheels analogy. At least, it's not the whole picture. As much as we use some kind of helper or support when learning to do something, we also increasingly use labour-saving tools as we become experts, because there is no benefit to doing everything manually any more. LLMs can be used in both of those ways, I think. Working out the guidelines as to how, exactly, is the challenge.
  • Jamal
    11.1k


    Your response misses the point but I know better than to attempt a direct reply.
  • Harry Hindu
    5.8k
    Your response misses the point that the point of an argument is not the messenger, but the message.
  • Harry Hindu
    5.8k
But I think we should, in the context of a "How to use AI" guide, tell people what we don't want them to do, even if it's often impossible to detect people doing it.
Jamal
This is completely irrelevant, because if someone rewrites what AI said in their own words, the source of the idea is still AI.
  • Harry Hindu
    5.8k
    I wonder where people on this site will stand in maybe 10-15 years when AI possibly discovers the cure for cancer or the Theory of Everything. Will they be quoting AI then?
  • Moliere
    6.3k
    Haven't started yet, not sure if I'd start then.
  • Harry Hindu
    5.8k
    It seems to me that the fact that we can have a philosophical discussion with an LLM is evidence that philosophy is a language game.
  • Moliere
    6.3k
    Have you tried having an erotic chat with an LLM?

    We can do it, but we can't do it.

    It looks like sex but it isn't sex.

    So it goes with thinking, in my opinion.

Philosophy is more than a language game, I'd say. Philosophy is the discipline which came up with "language games"; insofar as we adopt language games, philosophy may be a language game, but if we do not -- then it's not.

    Philosophy is a "step up" from language games such that the question of what language games are can be asked without resorting to the definition or evidence of "language games"
  • Harry Hindu
    5.8k
Have you tried having an erotic chat with an LLM?
Moliere
No, I haven't. I get the real thing from my wife, so why would I? Of course, there are people who have a healthy sex life with their partner and still seek out prostitutes, porn on the internet, or sex chats. It's my personal preference for the real thing, and those other acts I might consider only if I wasn't getting the real thing as often as I like.

The same goes for discussions on this forum, where certain posters are regularly intellectually dishonest and rude. AI is where I go when I'm not getting any kind of serious input from real people on a topic. I prefer having discussions with real people, but use AI as a backup.

We can do it, but we can't do it.
Moliere
One could say the same thing about calling a 900 number and talking to the live person on the other end of the line. It's not real sex either.

Philosophy is more than a language game, I'd say. Philosophy is the discipline which came up with "language games"; insofar as we adopt language games, philosophy may be a language game, but if we do not -- then it's not.

Philosophy is a "step up" from language games, such that the question of what language games are can be asked without resorting to the definition or evidence of "language games".
    Moliere
It seems to me that the difference is between those who see language itself as a language game and those who don't. Those who do are more focused on the messenger than the message, or on the words rather than what they refer to; those who don't are focused on the message rather than the messenger or the words used to express it.
  • Moliere
    6.3k
No, I haven't. I get the real thing from my wife, so why would I? Of course, there are people who have a healthy sex life with their partner and still seek out prostitutes, porn on the internet, or sex chats. It's my personal preference for the real thing, and those other acts I might consider only if I wasn't getting the real thing as often as I like.

The same goes for discussions on this forum, where certain posters are regularly intellectually dishonest and rude. AI is where I go when I'm not getting any kind of serious input from real people on a topic. I prefer having discussions with real people, but use AI as a backup.

    We can do it, but we can't do it.
    — Moliere
One could say the same thing about calling a 900 number and talking to the live person on the other end of the line. It's not real sex either.
    Harry Hindu

    I don't mean to be rude -- it's a very serious comparison. The erotic and philosophy have been invoked together since Plato.

    So as you note: you want the real thing.

    Me too.
  • bongo fury
    1.8k
When you quote a published author you point to a node in a network of philosophical discourse, [...] The source in this case is accountable and interpretable.
Jamal

    Exactly my point about Google search vs the AI summary that presumes to identify ideas instead of authored texts?

    When I made the point (badly) I nearly said "nodes in a network". Dang!
  • Jamal
    11.1k
When I made the point (badly) I nearly said "nodes in a network". Dang!
bongo fury

    I feel like I've been overusing it lately.

    Anyway, yes, I do take your point, despite my dislike of Google's search results. But if you use an LLM directly rather than via Google search, you can get it to identify the sources.
  • Leontiskos
    5.3k
    So, guys, I loaded this thread into AI for the solution to our quandary. Aside from the irony, who wants to know what it says?

If so, why? If not, why not? Who will admit that, if I don't share what it says, they will do it on their own? Why would you do it in private, but not in public? Shame? Feels like cheating? Curious as to what AI says about public versus private use? Why are you curious? Will you now ask AI why that distinction matters?

Will you follow AI's guidance on how to use AI while still preserving whatever it feels like we're losing?

    Do you feel like it's better that it arrived at its conclusions after reading our feedback? Will you take pride in seeing that your contributions are reflected in its conclusions?

    Feels like we need to matter, right?
    Hanover

I would say that the crux is that the AI aficionado is reliant upon a self-deception or a lie, and he wishes it to spread. The lie has various manifestations, but it is usually of the form, "AI is not X, but we should (or will) treat it as if it were X." For instance, if I were to say that philosophy is about thinking thoughts and AI does not generate thoughts, the AI aficionado would reply that AI may not have thoughts but it does have "thoughts," and that we should treat these "thoughts" as if they were thoughts. But to treat something falsely is to engage in self-deception or a lie.

For the AI aficionado, AI is to be treated like a black box, like a Ouija board or a Magic 8-Ball. They become impatient with those who ask the question, "How does it work?" They interrupt, exclaiming, "But look at what it can do!" This is literally the oldest trick in The Book, "Don't worry about any objections, just focus on the power it will give you!" The AI aficionado's approach is consequentialist through and through, and he has discovered a consequence which is supreme, one which can ignore objections tout court. For him, what AI provides must outweigh any possible objection, and indeed objections therefore need not be heard. His only argument is a demonstration of its power, for that is all he deems necessary ("It is precious..."). In response to an objection he will begin by quoting AI itself in order to demonstrate its power, as if he were not begging the question in doing so. Indeed, if his interlocutors accept his premise that might makes right, then he is begging no question at all. With such logical and rhetorical power at stake, how could the price of lying be a price too high to pay?

    This is the unwritten answer to the question, "Why should we treat something as if it were something that it is not?" "Why should we lie to ourselves in this way?" The answer is, "Because it will give us great power. No more need be said."
  • Baden
    16.7k


In one of my essays, I suggest AIs (because---despite their potential positives---of how they work on most people) are essentially entropy-exporting and difference-creating machines that localise structure at our expense (our brains are the dumpsters for their entropy), potentially creating massive concentrations of negentropy in their developing systems that speed up overall entropy and therefore consume (thermodynamic) time at a rate never before achieved, and that is potentially self-accelerating. I.e., they eat us and then they eat reality.

    It's a little speculative.
  • Leontiskos
    5.3k
    I believe we should not treat LLM quotes in the same way as those from published authors.

When you quote a published author you point to a node in a network of philosophical discourse, to a point on the line of a philosopher's evolution, and to a point in a body of work the self-consistency of which is a constant issue for that philosopher, making it relatively stable—all of which allows you to discuss what the philosopher meant. The source in this case is accountable and interpretable.

    This is not the case for an LLM. A quote from Kant invites engagement with Kant and the history of philosophy; a quote from an LLM's output invites ... what? Engagement with the user's prompt engineering skills?
    Jamal

    Thank you. I think this is a highly underappreciated point. :up:

That's why I'll be posting up suggested guidelines for discussion.
Jamal

Yeah, but it's ambiguous. I'd like to clarify it, and make it known that it's not ok to do certain things, even if it's impossible to enforce.
Jamal

Although I have been harping on the dangers of making the decision-making too democratic when it comes to AI, I think you definitely want to secure buy-in from the membership, given that the guidelines will be difficult to enforce in an external or coercive manner. I think this is true even if it only means attempting to make these guidelines more rationally persuasive than other guidelines.
  • Leontiskos
    5.3k
In one of my essays
Baden

    I forget exactly why I was digging around for your technoethics essay, but my guess is that it was related to all of this AI business. The issues that you raise in that essay strike me as deeply related to AI.

In one of my essays, I suggest AIs (because---despite their potential positives---of how they work on most people) are essentially entropy-exporting and difference-creating machines that localise structure at our expense (our brains are the dumpsters for their entropy), potentially creating massive concentrations of negentropy in their developing systems that speed up overall entropy and therefore consume (thermodynamic) time at a rate never before achieved, and that is potentially self-accelerating. I.e., they eat us and then they eat reality.
Baden

    I can definitely see the outlines of that. One could connect my argument about lying to Kant's arguments against lying, which of course have social implications and not merely individual implications.

If we have a technology or decision-path with good and bad effects, there are some cases where we are deeply tempted to attend to the good effects and ignore the bad effects, or at least to postpone the assessment of the bad effects without in any way postponing the assessment of the good effects. So if there is an optimist who nevertheless recognizes that "negentropy" is being produced, I would plead with them to try to objectively understand the severity and extent of that negentropy before making any decisions about how to adopt a new technology. The reason I am pessimistic is largely that the investment dollars being poured into these technologies are astronomical.
  • Baden
    16.7k


I'm also very pessimistic, actually. Feel free to PM me if you'd like a PDF of the technoethics essay, and the AI one too if you want. Or the whole book (ten essays).
  • wonderer1
    2.3k
For the AI aficionado, AI is to be treated like a black box, like a Ouija board or a Magic 8-Ball. They become impatient with those who ask the question, "How does it work?" They interrupt, exclaiming, "But look at what it can do!" This is literally the oldest trick in The Book, "Don't worry about any objections, just focus on the power it will give you!" The AI aficionado's approach is consequentialist through and through, and he has discovered a consequence which is supreme, one which can ignore objections tout court. For him, what AI provides must outweigh any possible objection, and indeed objections therefore need not be heard. His only argument is a demonstration of its power, for that is all he deems necessary ("It is precious..."). In response to an objection he will begin by quoting AI itself in order to demonstrate its power, as if he were not begging the question in doing so. Indeed, if his interlocutors accept his premise that might makes right, then he is begging no question at all. With such logical and rhetorical power at stake, how could the price of lying be a price too high to pay?
Leontiskos

What is the source of your claims to knowledge of the psychology of "the AI aficionado"?

    I.e. is there any reason for us to think that you aren't lying while making such claims?
  • Baden
    16.7k
The most menacing AI I've come across, when it comes to doing things for you that you should be doing yourself, is ChatGPT. That thing would take a shit for me if it could.