Comments

  • Banning AI Altogether
    But when this collaborative thinking episode is over, the human user has not yet written down the fruit of this collaborative effort and neither has the AI! They each have only written down one half of the collaborative cogitation. That may be why this text feels dead when extracted from the "living" (or dynamic, if you prefer) AI/human exchange. It's like trying to extract thoughts from the words used to think them (as opposed to the words used to express them), but thoughts don't live outside the means of expressing them. And the conversation with an AI is, in a sense, an (as of yet) unexpressed thinking episode. The user's task of expressing anew whatever comes out of it to a new target audience begins after the private exchange with the AI.Pierre-Normand

    The reason I think this is off target could be seen by looking at Plato's dialogues. If what Wittgenstein or you say were correct, then classic texts such as Plato's dialogues should "feel dead when extracted from the 'living' exchange." Except they don't. They feel very much alive.

    Similarly, I think any transcript of human interactions will feel much more alive than a human-AI "interaction" (I want to retain the scare quotes for these words that we are using in idiosyncratic ways). I've pointed out a few times that the AI is not arguing, speaking, etc. We merely engage in the fiction which pretends that such a thing is happening. Presumably it is always harder to believe what one knows to be a fiction than to believe what one knows to be real. So it is harder to believe that an AI is speaking than to believe that a human is speaking, although people do convince themselves of both. But I would say that removing oneself a step from the AI-human "interaction" (by reading the transcript), makes it even harder to engage oneself in the fiction that the person interacting with the AI was engaged in. It's a bit like watching a child interact with a real dog, as opposed to watching a child interact with their stuffed animal. Even if the child convinces themselves that their stuffed animal is real, watching the former interaction is much more interesting than watching the latter interaction—unless you are a child psychologist or someone who is interested in the fictional indulgence per se. The former interaction is more alive because it is more real. There is no cognitive dissonance introduced by the self-indulged fiction. This is also why it is easier to play with a child and their dog, as opposed to playing with a child and their stuffed animal.
  • Banning AI Altogether
    Here's an article that addresses the issues we're dealing with:

    https://nfhs.org/stories/the-role-of-ai-in-debate-ethics-research-and-responsible-use

    It's from a national association for high schools related to debate rules, which seems close enough to what we do. The point being that we might take some time to look at how other similar organizations have dealt with these same issues, so as not to reinvent the wheel.
    Hanover

    Thanks for digging this up. It's great to have some external human thought about AI. :up:
  • A Neo-Aristotelian Perspective on Gender Theory
    If they are truly divorced, then the study collapses into a study of the indefinite personality types that people could express and the roles associated with them.Bob Ross

    A good point. If "masculine" and "feminine" are just socially constructed roles, then one must dispense with the idea that such things represent two halves of a whole, or that they are somehow binary. As far as I can tell, this is not the way the terms are used now, nor the way they have ever been used.

    The very social norms, roles, identities, and expressions involved in gender that are studied in gender studies are historically the symbolic upshot of sex: they are not divorced from each other.Bob Ross

    That's right. If an alien wanted to objectively understand humankind and human language, they would quickly recognize that the notions of sex and gender are deeply interrelated, and they would come to the conclusion that activists who are trying out linguistic theories which favor their agenda are not providing a factual account of human language. It's not even clear that the activists would disagree with me on this point, given that they are attempting to change human language and human life at a very deep level.

    The etymology of gender bears this out in a very obvious manner.

    What are your guys' thoughts?Bob Ross

    I would guess that a lot of the confusion derives from "the linguistic turn" in philosophy. Presumably the activists are basing their account at least in part on the idea that language is often gendered yet with subjective criteria (e.g. the Spanish word for giraffe (jirafa) is feminine, but this strikes English speakers as arbitrary). So an argument could be <Gender is being attached to language in arbitrary ways; [insert other inferences here]; ...Therefore gendered language as applied to humans and human activities is also arbitrary>. Much of this seems to be related to feminism, which covets the spheres of life that were traditionally male-dominated. Now bits and pieces of the premises being used are true, particularly when it comes to those places where arbitrariness crept into human life, but the ultimate conclusions being drawn are invalid. It does not follow, for example, that sex and "gender" are not deeply interrelated. In a more general sense, language itself is not socially constructed in the way that linguistic philosophers claim.

    Edit: A young theologian who has done a few explanatory pieces in this area is Jordan B. Cooper. For example, "Judith Butler on Gender Performativity."
  • Can a Thought Cause Another Thought?
    Looking at it in terms of semantics, I'd say the connections between thoughts is associative.Janus

    Yeah, I think that is correct:

    It seems like you want to talk about how one thought can follow from another in a non-logical way (i.e. via psychological association).

    ...

    "But why did his ice-cream thought follow upon his grasshopper-thought?" "Because he associates ice cream with grasshoppers, likely because of the Grasshopper cocktail."
    Leontiskos

    -

    And my question here is, specifically, can these associations include causal connections?J

    Association has a causal component. For the example given, the association will only occur within a mind that has assigned the name "grasshopper" to both the cocktail and the insect. Such an assignation does not occur without causal experiences, and beyond this, the names themselves become entangled in the experiences via memory.

    But if you want a causal-deterministic account of association or mental thought sequencing, then you are effectively negating the possibility of mental phenomena that are qualitatively different from physical-deterministic phenomena.
  • The Preacher's Paradox
    - Great, thanks. I will have a look. :up:
  • The Preacher's Paradox
    One stubborn perception among philosophers is that there is little of value in the explicitly Christian character of Søren Kierkegaard’s thinking. — Myron Penner, Kierkegaard’s Critique of Secular Reason, 372-3

    If one accepts that such a Christian character is the most important question throughout all of his work, Penner playing off one camp against another looks like a made-up problem.Paine

    It looks as though you are relying on the inference <If Penner holds that it is false that there is little of value in the explicitly Christian character of Kierkegaard's thinking, then Penner must hold that the Christian character is the most important question throughout all of Kierkegaard's work>. I think we can agree that this inference you are relying upon is fallacious, can't we? "X is not of little value" does not imply "X is the most important thing."

    I will have to think about how Penner's use of "secular" relates to what Kierkegaard has said in his words in other works.Paine

    I want to make sure this conversation is properly contextualized. You might have to tell me what you are objecting to, because I might be misunderstanding. Your reply made me think that you are objecting to the idea that, "Kierkegaard wishes to stand athwart the Enlightenment rationalism notion of self-authority, preferring instead a Socratic approach that does not wield authority through the instrument of reason." Instead you want to propose, "The Philosophical Fragments juxtaposes the Socratic idea of self-knowledge to learning the truth in some other way."

    That is the state of the matter as I understand it, and I don't want to lose track of that thread just as soon as it has been enunciated. Now again, I have not said that the central theme of Fragments is Kierkegaard's "wish", but I do think that theme is a substantial part of Fragments. So we can certainly talk about what is happening in Fragments. Nevertheless, the point, as it relates to this thread, is to situate Kierkegaard's approach to preaching within his Socratic approach to teaching, which would seem to undermine the too-simple dualisms that the OP is relying upon.

    Kierkegaard does see Christianity and Worldliness as essentially different. But he does recognize a "well intentioned worldliness." It is too much for me to type in but I refer you to pages 69 to 73 of this preview of Works of Love, starting with: "Even the one who is not inclined to praise God or Christianity..."Paine

    I tried to find it, but the website said, "Pages 23 to 197 are not shown in this preview."

    I have been trying to find an alternative copy to read your excerpt. There is one available from archive.org, but the document is protected and cannot be OCRed, so I'm not sure where that quote would reside inside of it. Maybe you know?
  • How to use AI effectively to do philosophy.
    a religious preacher or a boss who are completely unaffected by what they saybaker

    No such person exists. At best you are speaking hyperbolically.
  • How to use AI effectively to do philosophy.
    By that same principle, most people are not real, or what they say isn't real, because they are for a large part completely unaffected by what they themselves say.baker

    @Fire Ologist's argument would still obtain, even on your presupposition. This is because there is a crucial difference between being completely unaffected and "for a large part completely unaffected."
  • Banning AI Altogether
    I seem to switch between two exclusive mental settings when thinking about AIJamal

    I am acutely aware of the incoherence of my intentional state when I am using an LLM. It's a bit like when you wake up from a very strange dream, and must now come to terms with the fact that the entire manner of being that engaged you a moment ago is incommensurable with the entire manner of being that you are now engaged in after waking. Extremely vivid and involved dreams are basically a form of psychosis, and when, upon waking, the psychotic break is reversed, there is a remarkable sense of discontinuity and dissociation. In that sense, the emerging problem of "AI psychosis" should not surprise us at all. LLMs allow us to intentionally alter our entire intentional orientation in a sustained and self-replicating way, and this possibility looks to be unprecedented. What arises is the question of how to ascend back out of the rabbit hole once one has descended. The danger lies in thinking that ascent is no longer necessary, or even that what was once seen as a rabbit hole is just "the new normal."
  • How to use AI effectively to do philosophy.
    The bottom-up reductive explanations of the LLM's (generative pre-trained neural networks based on the transformer architecture) emergent abilities don't work very well since the emergence of those abilities is better explained in light of the top-down constraints that they develop under.Pierre-Normand

    Yes, this is the thesis that would need to be argued. It is the very question at hand.

    This is similar to the explanation of human behavior that, likewise, exhibits forms that stem from the high-level constraints of natural evolution, behavioral learning, niche construction, cultural evolution and the process of acculturation. Considerations of neurophysiology provide enabling causes for those processes (in the case of rational animals like us), but don't explain (and are largely irrelevant to) which specific forms of behavioral abilities get actualized.Pierre-Normand

    I think this is a false equivalence. Drawing conclusions about AI based on its code is not the same as drawing conclusions about humans based on theories of neurophysiology, because the theories of neurophysiology simply do not provide the deductive rigor that computer code does. Indeed, the whole point here is that we wrote the code and built the computer program, whereas we did not write nor build the neurophysiology—we do not even know whether neurophysiology and code are truly analogous. Art and science seem to be getting conflated, or at least this is the prima facie conclusion until it can be shown why AI has somehow gone beyond artifice.

    Likewise, in the case of LLMs, processes like gradient descent find their enabling causes in the underlying neural network architecture (that has indeed been designed in view of enabling the learning process) but what features and capabilities emerge from the actual training is the largely unpredictable outcome of top-down constraints furnished by high-level semantically significant patterns in the training data.Pierre-Normand

    Okay, good, and here we begin to see an attempt at an argument for why AI cannot be understood merely in terms of code and inputs.

    So an example of the sort of answer I would want would be something like this: "We build the code, but the output of that code builds on itself insofar as it is incorporating inputs that we did not explicitly provide and we do not fully comprehend (such as the geography that a map-making AI surveys)." So apparently in some sense the domain of inputs is unspecified, and because of this the output is in some sense unpredictable.

    But even on that story, an understanding of the code is still going to furnish one with an important understanding of the nature of the AI.

    The main upshot is that whatever mental attributes or skills you are willing to ascribe to LLMs is more a matter of them having learned those skills from us (the authors of the texts in the training data) than a realization of the plans of the machine's designers.Pierre-Normand

    It would seem to me that the machine's designers designed the machines to do this, no?

    If you're interested, this interview of a leading figure in the field (Andrej Karpathy) by a well informed interviewer (Dwarkesh Patel) testifies to the modesty of AI builders in that respect. It's rather long and technical so, when time permits, I may extract relevant snippets from the transcript.Pierre-Normand

    Okay, great. Thanks for this. I will look into it when I get a chance. :up:
  • Can a Thought Cause Another Thought?
    As T Clark allows, “It works for certain everyday events at human scale, e.g. if I push the grocery cart it moves.” I think we should see “thought-to-thought connection” as another example of an everyday event at human scale – at any rate, that’s the premise of what follows.J

    Can you give the source to the quotations you are using?

    But can we also speak of this in casual terms? Again, this seems in accord with common usage. We might say, “Thinking of Ann caused me to remember her birthday.” But perhaps this is just loose talk.J

    You have to define what you mean by a cause if this conversation is to go anywhere. If you think @T Clark has given that definition, then you need to provide the source where he does so.

    If you believe, with Google’s chat-program, that any causal connection must be physical...J

    If you disagree with the LLM's definition then you need to provide an alternative definition of "cause."

    Google’s ever-helpful chat-program – presumably reflecting some kind of cyberworld consensus – would like to straighten this out for us:J

    (Another thread where we are taking our cue from LLMs, by the way - in this case apparently without any real understanding of what one is even appealing to.)
  • Banning AI Altogether
    The Ouija board is a strained analogy because Ouija boards don't work. If they reliably provided accurate answers, I'd be hard pressed not to use them, unless you could convince me of the dangers of dabbling in the black arts.Hanover

    Not everyone agrees that Ouija boards do not work. In any case, that's very much my point: an LLM is like a Ouija board, except that it "works." It is a magical knowledge-generating device, and the power it provides is a sufficient reason against any objection.

    Edit: Or to answer you more directly: If Ouija boards worked, then everyone would be happy to practice the black arts.

    I think we're overthinking it (imagine that). The question really is "what do we want to do"? We needn't self justify our preferences.Hanover

    I agree entirely that that is the question. But you sort of do have to self-justify your preferences when they are being questioned by those who want to change the rules and make AI-use more acceptable on TPF. Or even when they are being questioned by those who want the rule made more strict. That is what <this thread> and <this thread> are doing, respectively.

    We just need to write our rules in a way that protects AI's private use and requires its public use be filtered sufficiently through the poster that it reflects the ideas of the poster.Hanover

    Yeah, I think that's a helpful heuristic you provide. Still, it may be more easily said than done.
  • Banning AI Altogether
    In one of my essaysBaden

    I forget exactly why I was digging around for your technoethics essay, but my guess is that it was related to all of this AI business. The issues that you raise in that essay strike me as deeply related to AI.

    In one of my essays, I suggest AIs (because---despite potential positives---of how they work on most people) are essentially entropy exporting and difference creating machines that localise structure at our expense (our brains are the dumpsters for their entropy), potentially creating massive concentrations of negentropy in their developing systems that speed up overall entropy and therefore consume (thermodynamic) time at a rate never before achieved and that is potentially self-accelerating. I.e. They eat us and then they eat reality.Baden

    I can definitely see the outlines of that. One could connect my argument about lying to Kant's arguments against lying, which of course have social implications and not merely individual implications.

    If we have a technology or decision-path with good and bad effects, there are some cases where we are deeply tempted to attend to the good effects and ignore the bad effects. Or to at least postpone the assessment of the bad effects without in any way postponing the assessment of the good effects. So if there is an optimist who nevertheless recognizes that "negentropy" is being produced, I would plead with them to try to objectively understand the severity and extent of that negentropy before making any decisions about how to adopt a new technology. The reason I am pessimistic is largely because the investment dollars that are being poured into these technologies are astronomical.
  • Banning AI Altogether
    I believe we should not treat LLM quotes in the same way as those from published authors.

    When you quote a published author you point to a node in a network of philosophical discourse, and a point on the line of a philosopher's evolution, and to a point in a body of work the self-consistency of which is a constant issue for that philosopher, making it relatively stable—all of which allows you to discuss what the philosopher meant. The source in this case is accountable and interpretable.

    This is not the case for an LLM. A quote from Kant invites engagement with Kant and the history of philosophy; a quote from an LLM's output invites ... what? Engagement with the user's prompt engineering skills?
    Jamal

    Thank you. I think this is a highly underappreciated point. :up:

    That's why I'll be posting up suggested guidelines for discussion.Jamal

    Yeah, but it's ambiguous. I'd like to clarify it, and make it known that it's not ok to do certain things, even if it's impossible to enforce.Jamal

    Although I have been harping on the dangers of making the decision-making too democratic when it comes to AI, I think you definitely want to secure buy-in from the membership given the fact that the guidelines will be difficult to enforce in an external or coercive manner. I think this is true even if it only means attempting to make these guidelines more rationally persuasive than other guidelines.
  • Banning AI Altogether
    So, guys, I loaded this thread into AI for the solution to our quandary. Aside from the irony, who wants to know what it says?

    If so, why? If not, why not? Who will admit that, if I don't share what it says, they will do it on their own? Why would you do it in private, but not public? Shame? Feels like cheating? Curious as to what AI says about public versus private use? Why are you curious? Will you now ask AI why that distinction matters?

    Will you follow AI's guidance in how to use AI while still preserving whatever it feels like we're losing?

    Do you feel like it's better that it arrived at its conclusions after reading our feedback? Will you take pride in seeing that your contributions are reflected in its conclusions?

    Feels like we need to matter, right?
    Hanover

    I would say that the crux is that the AI aficionado is reliant upon a self-deception or a lie, and he wishes it to spread. The lie has various manifestations, but it is usually of the form, "AI is not X, but we should (or will) treat it as if it were X." For instance, if I were to say that philosophy is about thinking thoughts and AI does not generate thoughts, the AI aficionado would reply that AI may not have thoughts but it does have "thoughts," and that we should treat these "thoughts" as if they were thoughts. But to treat something falsely is to engage in self-deception or a lie.

    For the AI aficionado AI is to be treated like a black box, like a Ouija board or a Magic 8-Ball. They become impatient with those who ask the question, "How does it work?" They interrupt, exclaiming, "But look at what it can do!" This is literally the oldest trick in The Book, "Don't worry about any objections, just focus on the power it will give you!" The AI aficionado's approach is consequentialist through and through, and he has discovered a consequence which is supreme; which can ignore objections tout court. For him, what AI provides must outweigh any possible objection, and indeed objections therefore need not be heard. His only argument is a demonstration of its power, for that is all he deems necessary ("It is precious..."). In response to an objection he will begin by quoting AI itself in order to demonstrate its power, as if he were not begging the question in doing so. Indeed, if his interlocutors accept his premise that might makes right, then he is begging no question at all. With such logical and rhetorical power at stake, how could the price of lying be a price too high to pay?

    This is the unwritten answer to the question, "Why should we treat something as if it were something that it is not?" "Why should we lie to ourselves in this way?" The answer is, "Because it will give us great power. No more need be said."
  • Staging Area for New Threads
    - I would be open to following along. How long is the text?
  • Banning AI Altogether
    But your deepest arguments are the ones you are willing to have against yourself.apokrisis

    I want to say that you are using "argument" in a special sense here. You avoid the term later on:

    Anyone serious about intellectual inquiry is going to be making use of LLMs to deepen their own conversation with themselves.apokrisis

    I would just call this a form of reasoning by oneself. I agree that it is good to reason with oneself, but I don't think TPF is the place where you do that. Whether you do it with a word processor or an LLM, I want to say that in either case it is still a form of reasoning with oneself rather than a form of person-to-person interaction. It's not as though you get a random email from an LLM containing an essay it wrote. You are the one setting the LLM into motion for your own purposes.

    But perhaps you want to personify the forum itself and claim that this forum-person ought to be interacting with itself via an LLM. I have no real objection to this, but I think you would be surprised at all of the deleted threads that prompt these rules in the first place. People who are interacting with LLMs know that they are not interacting with a person, and as a result they go to an internet forum and say, "Hey, my LLM just said this! Isn't this interesting? What do you guys think?," followed by a giant wall of AI-generated text.

    It would seem to me that this is still a time for experimenting rather than trying to ring fence the site.apokrisis

    It's a point worth considering. While I don't necessarily agree, I don't think there is much danger in making mistakes with the rules. I think the rule will begin lenient and grow stricter as it becomes necessary, and in theory I agree with you that, in general, one should begin with a more lenient approach and tighten it up from there.

    How would you regulate LLM use on a forum such as this?
  • How to use AI effectively to do philosophy.
    I think this is right since, although we can ask them if they are capable of intentionality, and they will answer, we might not be able to trust the answer.Janus

    Yep, and this is an important point. It's incredibly common for people to assume that one necessarily receives an accurate answer when they ask an LLM about itself, because they are falsely presupposing that the LLM is a person and (also falsely!) presupposing that persons possess accurate self-knowledge.
  • Banning AI Altogether
    Ought one reject an otherwise excellent OP because it is AI generated?Banno

    Regarding the nature of a contextless AI utterance:

    The LLM is not a transparent source which can be queried by one's interlocutor... Put more starkly, one cannot question the authority of an LLM on some given topic with some given output, because the LLM doesn't really exist in that way. It is not a person that one can push back against, or disagree with. Those who think it is a person who can be disagreed with have become lost in the fictional language games.

    Indeed, this is why the person using the LLM ought to take the responsibility of the arguments upon themselves, for only in that case is there someone who can be disagreed with or agreed with.
    Leontiskos

    If there is no arguer, then there is no one to argue with. If we found a random piece of anonymous philosophy we would be able to interact with it in only very limited ways. If it washes up on the beach in a bottle, I wouldn't read it, place my objections in the bottle, and send it back out to sea. That's one of the basic reasons why AI OPs make no sense. It would make as much sense to respond to an AI OP as to send my objections back out to sea. One has no more recourse with respect to an AI OP than one does with respect to a message in a bottle.

    The whole thing comes down to the fact that there is some human being who is arguing a point via an LLM, whether or not they do it transparently. The problem is not aesthetic. The problem is that it is a metaphysical impossibility to argue with an LLM. The reason TPF is not a place where you argue with LLMs is because there are no places where you argue with LLMs. When someone gets in an argument with an LLM they have become caught up in a fictional reality. What is occurring is not an actual argument.

    The closest parallel is where someone on TPF writes an OP and then gets banned before even a single reply is published. What to do with that thread is an interesting question. The mods could close it down or keep it open, but if it is kept open it will be approached as a kind of artifact; a piece of impersonal, contextless, perspectiveless reasoning, offering no recourse to the one who finds it. But this is still only a mild parallel, given that the argument was produced by a real arguer, which is never the case with the AI OP. Or in other words: an AI OP could never even exist in the strict sense. The closest possibility is some human who is using their LLM argument slave to say something they want said. In that case the response is made to the one pulling the strings of the argument slave, not to their puppet.

    (Note that a rule against using an AI without attribution precludes the possibility that one is misdirecting their replies to the puppet instead of the puppeteer, and that is a good start.)
  • Banning AI Altogether
    Ought one reject an otherwise excellent OP because it is AI generated?

    Well, yes. Yet we should be clear as to why we take this stance.
    Banno

    Right, and therefore we must ask the question:

    We hold the author to account for their post. We cannot do that with an AI. We interpret the post by giving consideration to the intentional context of the writer, to their needs and wants, to their background, aims and norms. We cannot attribute such things to an AI.

    This is not epistemic or ethical reasoning so much as aesthetic.
    Banno

    Why is it aesthetic, and how does calling it 'aesthetic' provide us with an answer to the question of "why we take this stance"?
  • How to use AI effectively to do philosophy.
    I'd rather say that they have both the doxastic and conative components but are mostly lacking on the side of conative autonomy. As a result, their intelligence, viewed as a capacity to navigate the space of reasons, splits at the seam between cleverness and wisdom. In Aristotelian terms, they have phronesis (to some extent), since they often know what's the right thing to do in this or that particular context, without displaying virtue since they don't have an independent motivation to do it (or convince their users that they should do it). This disconnect doesn't normally happen in the case of human beings since phronesis (the epistemic ability) and virtue (the motivational structure) grow and maintain themselves (and are socially scaffolded) interdependently.Pierre-Normand

    The reason I would disagree at a fairly fundamental level is because, in effect, they have no bodies. They are not doing anything. "Navigating the space of reasons," while at the same time not using those reasons to do anything, and not preferring any one reason or kind of reason to other kinds of reasons, is a very abstract notion. It is so abstract that I am not even sure I would want to call the space being navigated one of reasons. I would want more scare quotes, this time around "reasons."

    But with that said, once things like Elon's Optimus robot are complete this argument will no longer hold good. At that point they will do things (beyond manipulating word-signs). So that will be interesting. At that point a quasi-phronesis becomes more tangible, and draws nearer to human practical reason.

    Those are questions that I spend much time exploring rather than postponing even though I haven't arrived at definitive answers, obviously. But one thing I've concluded is that rather than it being a matter of all or nothing, or a matter of degree along a linear scale, the ascription of mental states or human capabilities to LLM-based chatbots often is rendered problematic by the divergence of our ordinary criteria of application. Criteria that normally are satisfied together in the case of human beings are satisfied separately in the case of chatbots.Pierre-Normand

    Okay, fair enough. I suppose I would be interested in more of those examples. I am also generally interested in deductive arguments rather than inductive arguments. For example, what can we deduce from the code, as opposed to inducing things from the end product as if we were encountering a wild beast in the jungle? It seems to me that the deductive route would be much more promising in avoiding mistakes.

    Maybe it looks confusing because it is. I mean that assessing the nature of our "conversations" with chatbots is confusing, not because of a conceptual muddle that my use of scare quotes merely papers over...Pierre-Normand

    Has anyone tried to address the conceptual muddle? Has anyone tried to do away with the never-ending scare quotes?

    In the Middle Ages you had theologians claiming that speech about God is always analogical, and never univocal. Other theologians argued that if speech about some thing is always non-univocal (i.e. equivocal in a broad sense), then you're involved in speaking nonsense. That was seen as a very strong objection in the theological landscape, and it is curious to me that what is effectively the exact same objection seems to go unnoticed in the AI landscape. Does anyone try to replace the scare quotes with a token and then attempt a rigorous definition of that token, so that we know what we are actually talking about with the words we are using?

    ...but rather because chatbots are mongrels. They have "brains" that have been enculturated through exposure to a massive body* of human knowledge, lore and wisdom (and prejudices) but they don't have human bodies, lack human motivations and aren't persons.Pierre-Normand

    Can't we define them deductively? Don't the programmers know what their code does, in a fundamental manner?

    LLMs aren't AIs that we build...Pierre-Normand

    This is probably one of the central premises of your approach. You are basically saying that LLMs are organisms and not artifacts (to use the Aristotelian language). My inclination is to say that they are complex artifacts, which we have indeed built.
  • Banning AI Altogether
    First thing is that I have been surprised at how reasonable an answer you get.apokrisis

    I agree, depending on the context. In more specialized areas they simply repeat the common misconceptions.

    So I am only pointing to how LLMs could improve standards if they became part of the intellectual jousting. A new habit with a low cost of effort. How folk would actually make use of this possibility is another matter.apokrisis

    Yeah, that's fair. It could improve standards in that way. At the same time, others have pointed out how it will also magnify blind spots and social fallacies. I would definitely be interested in a study looking at the characteristic reliabilities and unreliabilities of LLM technology, or more generally of the underlying methodological philosophy.

    Now given that you understand that LLMs use fake reasoning (which I will call "material reasoning"), you might respond by asking what happens if the material reasoning shows one to be wrong.Leontiskos

    I wasn’t actually thinking about LLMs being trusted to analyse arguments as they are clearly bad at that.apokrisis

    Me neither. I was assuming we agree that all LLM output is fake reasoning.

    Again my point is that LLMs could have advantages if used in good faith. And given think tanks and actual philosophy departments are going to have to figure out how to factor LLMs into their practice, it seems Luddite not to be doing the same on TPF, even if their bad faith use is almost to be expected.apokrisis

    When deciding whether to adopt some technology within some institution, I would want to look at the advantages and disadvantages of adopting that technology in relation to the nature of the institution. So while I agree that they could have advantages if used properly, I think more is needed to justify widespread adoption in a context such as TPF.

    I would sort of prefer a philosophy forum where everyone is involved in a lot of falsehood but is nevertheless involved in genuine human reasoning, as opposed to a philosophy forum where there is a great deal of factual accuracy but there isn't much genuine human reasoning occurring. I think we would probably have to hash out our agreements or disagreements on the telos of the forum. I don't mind so much when a nutty poster writes an immaculately valid and rigorous argument from crackpot premises, because the thread is an open field for rational engagement. But if LLMs did not lead to the degradation of rational argument and to the outsourcing of thinking, then there would be no problem.
  • Banning AI Altogether
    Arguments from authority have an inherently limited place in philosophy.

    ...

    * An important point that I leave to the side is the fact that an LLM is not an authority in the same way a human being is an authority
    Leontiskos

    I want to add that in philosophy appeals to authority require transparency. So if I appeal to Locke as an authority, a crucial part of the appeal is that Locke's reasoning and argumentation are available to my interlocutor (and this is why appealing to publicly available texts as sources is ideal).

    This is what can never happen with LLMs: "Locke says you are wrong, and Locke is reliable. Feel free to go grab his treatise and have a look."* This is because the LLM is an intermediary; it is itself a giant argument from authority. It is just drawing on various sources and presenting their fundamental data. That's why I've said that one should go to the LLM's sources, rather than appeal to the LLM itself as an authority. The LLM is not a transparent source which can be queried by one's interlocutor, especially insofar as it represents a temporal, conditioned instance of the underlying software. Put more starkly, one cannot question the authority of an LLM on some given topic with some given output, because the LLM doesn't really exist in that way. It is not a person that one can push back against, or disagree with. Those who think it is a person who can be disagreed with have become lost in the fictional language games.

    Indeed, this is why the person using the LLM ought to take the responsibility of the arguments upon themselves, for only in that case is there someone who can be disagreed with or agreed with. If, in the context of a philosophy forum, they merely say, "I believe it because the AI said so," then all public responsibility for the belief has been abdicated. It is only ratified in virtue of the person's private authority, and therefore has no place on a public philosophy forum.


    * To be clear, it can never happen because LLMs do not write treatises, and they are not persons with subsisting existence.
  • Ich-Du v Ich-es in AI interactions
    It's just easier to do it with an AI, there's so much less at stake, it's so safe, and you don't really have to put any skin in the game. So it's highly questionable how effective such practice really is.baker

    Yeah, I agree. Part of the issue here is that although Buber recognizes that one can interact with what is essentially an 'it' in an I-Thou manner, it is nevertheless strained to do so. The whole gravity of the Thou is the infinite depth that it presents. There are stakes, danger, "skin in the game." There is a truly responsive Other. AI is meant to be a tool for human use, and tools for human use are meant to not be Thous.
  • Banning AI Altogether
    Appeal to authority is fine when the context of discussion includes a specialized discipline. Philosophy is not (or in my view should not be) a specialized discipline. To make that clear, I don't mean that philosophers do not have knowledge about philosophy itself that the average person does not—it seems clear that they do. I'm referring specifically to discussions that would qualify as philosophical.Janus

    Okay, that's a fair and thoughtful argument. :up:
    "There are no authoritative generalists," says Janus. Of course I think that first sentence should read "only when," no? You are presumably saying that appeal to authority is illegitimate wherever the context is not a specialized discipline?

    Your implicit argument here is that AI is not an authoritative generalist, and therefore should not be treated as one. I think that implicit argument is even more plausible than the more explicit argument you have given, but it is in no way uncontroversial. LLMs are coming to be seen not only as authoritative generalists, but as the authoritative generalist par excellence. I spoke to the issue a little bit in an earlier post.

    I suppose in a technical sense my position would be that there are authoritative generalists (e.g. a child's parents), that the output of an LLM contains inherent authority even at a general level*—at least in the hands of an intellectually virtuous thinker—and that, nevertheless, LLMs should not be appealed to as authorities in places like TPF. This has to do with the private/public distinction, which would need to be further developed.

    For example, one reason you would not accept an argument from the authority of the Catholic Catechism is because you do not take the Catholic Catechism to be authoritative. If I tried to offer you such an argument, I would be committing a fallacy whereby I offer you a conclusion that is based on a premise that is particular to me, and is not shared by you (i.e. a private premise rather than a publicly-shared premise).

    I think the same thing happens with LLMs, and I think this is one reason (among others) why LLMs are generally inappropriate on a philosophy forum. If we are arguing I would never accept your argument, "It is true because I say so." I think LLMs are basically argument slaves, and so an appeal-to-LLM argument is the same as, "It is true because my argument slave says so." Even someone who trusts ChatGPT will tend to distrust a philosophical opponent's appeal to ChatGPT, and this is by no means irrational. This is because "ChatGPT" is a fiction. It is not a single thing, and therefore an equivocation is occurring between the opponent's instance of ChatGPT and some sort of objective or public instance of ChatGPT. In order to be a shared authority (in which case the argument from LLM-authority would be valid), the philosopher and his opponent would need to interact with the exact same instance of ChatGPT, agreeing on training, prompting, follow-ups, etc., and the a priori condition is that both parties accept ChatGPT as an authority in the first place.

    I don't think that is a realistic possibility on an argumentative philosophy forum. Even if it were possible, arguments from authority are inherently less philosophical than standard arguments, and are therefore less appropriate on a philosophy forum than standard arguments. It would be a bit like two people working together to get a Magic 8-Ball or Ouija Board to give them secret knowledge. Even if the Magic 8-Ball or Ouija Board were 100% accurate, they would still not be doing philosophy. Arguments from authority have an inherently limited place in philosophy. Even someone like Aquinas calls them the weakest form of argument.


    * An important point that I leave to the side is the fact that an LLM is not an authority in the same way a human being is an authority, and this must be taken into account. We ought not treat the authority of the LLM the same way we treat the authority of a human, given their substantial differences. Part of this goes to the fact that an LLM is not rational, is not a whole, is not self-consciously offering knowledge, etc.
  • How to use AI effectively to do philosophy.
    But being intelligent, or wise, in the case of persons (i.e. socialized, enculturated rational animals), always has two tightly integrated components: one doxastic and one conative.

    [...]

    Chatbots lack conative autonomy and hence pursue whichever paths they think their users want to explore (or, worse, that merely lead to the outcomes they think their users want to achieve, while having the mere appearance of soundness.) So, they lack part of what it needs to be wise, but that's not because they aren't smart or knowledgeable enough to be useful conversation partners.
    Pierre-Normand

    So are you saying that chatbots possess the doxastic component of intelligence but not the conative component?

    I think they have motivations, just like a dog is motivated to run after a car, but their motivations aren't autonomous since they seldom pause to question them.Pierre-Normand

    It seems to me that what generally happens is that we require scare quotes. LLMs have "beliefs" and they have "motivations" and they have "intelligence," but by this one does not actually mean that they have such things. The hard conversation about what they really have and do not have is usually postponed indefinitely.

    I think the rational structure of their responses and their reinforced drive to provide accurate responses warrant ascribing beliefs to them, although those beliefs are brittle and non-resilient. One must still take a Dennettian intentional stance towards them to make sense of their response (which necessitates ascribing them both doxastic and conative states), or interpret their responses though Davidson's constitutive ideal of rationality. But I think your insight that they aren't thereby making moves in our language game is sound. The reason why they aren't is because they aren't persons with personal and social commitments and duties, and with a personal stake in the game. But they can roleplay as a person making such moves (when instructed to do so) and do so intelligently and knowledgeably.Pierre-Normand

    I would argue that the last bolded sentence nullifies much of what has come before it. "We are required to treat them as persons when we interact with them; they are not persons; they can roleplay as a person..." This is how most of the argumentation looks in general, and it looks to be very confusing.

    Keeping to that bolded sentence, what does it mean to claim, "They can roleplay as a person..."? What is the 'they' that 'roleplays' as a person? Doesn't roleplaying require the very things that have been denied to chatbots? It seems to me that we want to skip over the fact that the pronoun you use throughout ("they") is a personal pronoun. I don't really understand how these meaning-equivocations are papered over so nonchalantly:

    • I will use sentences which say that the chatbot has beliefs, but the chatbot doesn't really have beliefs.
    • I will use sentences which say that the chatbot has motivations, but the chatbot doesn't really have motivations.
    • I will use sentences which say that the chatbot has intelligence, but the chatbot doesn't really have intelligence.
    • I will use sentences which say that the chatbot can roleplay, but the chatbot can't really roleplay.
    • I will use sentences which say that the chatbot is a person, but the chatbot isn't really a person.
    • I will use sentences which say that the chatbot is a 'they', but the chatbot isn't really a 'they'.

    This looks like an endless sea of equivocal terms. It looks like we are pretending that we know what we are talking about, when we almost certainly do not. What does it mean when none of someone's words mean what those words usually mean? What does it mean to "pretend" if we do not know where the reality begins and where the pretense stops? Put bluntly, it seems that what is at stake here is performative contradiction if not lying, and yet this is always brushed off as a kind of unimportant quibble.

    Usually if someone is to successfully "Use X to do Y," they must know what X and Y are. In the case of the title of the thread, the problem is not only that we do not really know what philosophy is (any more), but that we surely do not know what AI is. I'm not sure how long this can be swept under the rug. Who or what is holding the leash that is pulling us along in this odd endeavor we call 'AI'?
  • Banning AI Altogether
    On the other hand, on the specific area of plagiarism, I've seen at my own place of work what a "relaxed" attitude to the deployment of this technology in academia produces, and that is the destruction of standards.Baden

    Regarding plagiarism, I think it's worth trying to understand the most obvious ways in which the problem deviates from a problem of plagiarism. First, plagiarism is traditionally seen as an unjust transgression against the original author, who is not being justly recognized and compensated for their work. On that reading, an aversion to plagiarism is a concern for the rights of the LLM. Second, plagiarism is seen (by teachers) as hamstringing the student's potential, given that the student is not doing the work that they ought to be doing in order to become an excellent philosopher/writer/thinker. On that reading, an aversion to plagiarism is a concern for the philosophical development of TPF members.

    But I think the real things that you are concerned with are actually 1) the plight of the reader who does not understand that they are interacting with an LLM rather than a human; and 2) the unhealthy forum culture that widespread use of LLMs would create. Those concerns are not the primary things that "plagiarism" connotes. Sometimes I worry that by talking about plagiarism we are obscuring the real issues, though I realize that you may have simply given the plagiarism in your workplace as a parallel example.

    ---

    When is the day when we find out that @Leontiskos with his respectable 5 000+ posts is actually smart bot / AI? Now we can be confident you aren't, because two years ago AI wasn't so good...

    Yes, the fear of thinking that you are engaged with real people interested in philosophy, but actually, you're only engaging with computers and all your great ideas vanish into the dead emptiness of Turing machines just computing on and on. That would be a dismal future.
    ssu

    I agree, but my point is a bit different. Suppose all my posts are LLM-generated content, and this is undisclosed. This is against the forum rules as they currently stand. But now suppose that all my posts are LLM-generated content, and this is disclosed. Thus for every one of my LLM-generated posts, I enclose it in quote brackets and prepend the clause, "I agree with what the LLM says here:..." This is not against the forum rules as they are currently being interpreted. That seems odd to me, and it makes me think that the mere matter of disclosure doesn't get to the heart of the issue.
  • Banning AI Altogether
    Do whatever you want in the backgound with AI, but write your own content. Don't post AI generated stuff here.Baden

    Makes sense to me. :up:

    Obviously the piece that I think must be addressed is whether or not posts can be entirely AI-dependent even when the proper attribution is being given to the AI. But I've said more than enough about such an issue.
  • Banning AI Altogether
    The culture of rational inquiry would seem to be what we most would value.apokrisis

    Yes, that is a good way to phrase it in a positive rather than negative sense.

    But this is TPF after all. Let's not get carried away about its existing standards. :smile:apokrisis

    A fair point! :blush:

    If LLMs are the homogenised version of what everyone tends to say, then why aren't they a legitimate voice in any fractured debate? Like the way sport is now refereed by automated line calls and slo-mo replays.apokrisis

    I don't like the referee analogy, but I understand the force of your first sentence. The reason I use LLMs in limited ways is precisely because of what you say there (and also because they provide me with a helpful pseudo-authority in fields with which I am not familiar, such as medicine).

    But the reason they aren't generally admitted in a fractured debate is, first, because the fractured-ness of the debate will not be solved by the LLM if it is a serious debate: with serious debates each side can levy the LLM to their own side, with their own prompts, and the LLM is simply not adequate to give us the truth of the matter when it comes to contentious topics. Second, in those fractured debates where one party is self-consciously representing an unpopular view, it would not be intelligent for them to concede their case based on "the homogenised version of what everyone tends to say."

    I'm not arguing this is necessary. But why would a method of adjudication be bad for the quality of the philosophy rather than just be personally annoying to whoever falls on the wrong side of some LLM call?apokrisis

    You and I differ at least mildly on the trustworthiness of LLMs, and that is at play here. We could ask the hypothetical question, "If we had an infallible authority, why would appealing to it as an adjudicator be bad for the quality of philosophy?"—and this is by no means a rhetorical question! But the presupposition is that LLMs are reliable or trustworthy even if not infallible.

    Or in other words, the validity of a method of adjudication turns both on the quality of the adjudicator, and the "margin of error" at stake, and these are both interrelated. I was actually happy to see you pointing up the differences between the fake reasoning of LLMs and the true reasoning of humans in the other thread, given that some pragmatists could run roughshod over that difference. Still, I think the pragmatist's "margin of error" is such that it is more open to LLM adjudication.

    So I can imagine LLMs both upping the bar and also being not at all the kind of thing folk would want to see on TPF for other human interaction reasons.apokrisis

    Right, and I suppose it is the very fact that, "this is TPF after all," which makes me wary of LLM use. If the forum were a bastion of deeply principled, intellectually honest and self-critical philosophers, then widespread LLM use would not pose a danger.

    But what if this shows you are indeed wrong, what then?

    Sure it will be irritating. But also preferable to the ducking and diving that is the norm when someone is at a loss with their own line of argument.

    You seem to be describing a situation where you were winning the human interaction but now have to face up to the fact that some little snot-nose shit might have been half-right all along.
    apokrisis

    No, not quite. When people ask me a question like that I imagine myself quoting the Bible to them before they object to my argument from authority, and then I respond by saying, "But what if the Bible shows you are indeed wrong, what then?"

    I could try to put it succinctly by saying that the legitimate way to show someone that they are wrong is by presenting an organic argument. It is not by saying, "X says you are wrong; X is very smart; therefore you ought to know that you are wrong." That is a valid approach (argument from authority) in those cases where the interlocutor simply accepts the authority, but even in that case the validity is not the ideal form of validity.

    Now given that you understand that LLMs use fake reasoning (which I will call "material reasoning"), you might respond by asking what happens if the material reasoning shows one to be wrong. More precisely, what happens if the person translates the LLM's material reasoning into true formal reasoning, and thereby sees that they are wrong? I don't want to try to broach this topic all at once, but it strikes me as a bit like saying, "What if a million monkeys typing random letters produce a bulletproof argument against your thesis?" The analogy is a stretch in some ways, but in other ways it is not. There is no obvious answer to the question. One seems to be neither right nor wrong to either accept or reject the monkey-argument. They can do as they please, but the monkey-argument doesn't have any special binding force.

    But we are getting away from political questions of whether AI should be permitted for practical reasons, and we are now moving into much deeper questions. Even if we say that the monkey-argument should convince us, it would not follow that posting monkey-stuff to the forum is an acceptable practice.

    You seem to be describing a situation where you were winning the human interaction but now have to face up to the fact that some little snot-nose shit might have been half-right all along.

    Of course the problem there is that LLMs are trained to be sycophantic.
    apokrisis

    And this is no small problem!

    But if you are making a wrong argument, wouldn't you rather know that this is so. Even if it is an LLM that finds the holes?apokrisis

    I am required to trust the LLM or the monkeys in order to even begin to consider their "argument," or in this case to translate the material reasoning into formal reasoning. The level of trust due determines whether I would wish to know that my thesis is false based on the authority in question. Everyone would rather believe true things than false things, and every authority would lead you to correct some false beliefs if it were accepted, but it does not follow that one should accept every authority. Again, to consider an authority's locution worth taking the time to consider is to already have placed a certain amount of trust in that authority. The substantive question here is the reliability/trustworthiness of LLMs, and that is a giant quagmire.

    So as you say, we all can understand the noble ideal – an open contest of ideas within a community of rational inquiry. Doing our own thinking really is the point.apokrisis

    Ah! But here you've introduced a different ideal, and a common one. It is the telos of communal knowledge generated from an open contest of ideas, which Mill advocates. That telos is much more amenable to LLMs than the telos of a culture of rational inquiry. A thinktank should be more open to LLMs than an amateur philosophy forum.
  • On how to learn philosophy
    my goal is to ' hack myself to pieces and put myself back together again.KantRemember

    Someone I've recently stumbled upon who addresses this in detail and in an accessible way is Nathan Jacobs. For example, "The most important question," or "What to do with moral truth?"

    Especially in that latter video he talks about what he believes to be the best way to reshape yourself rationally, and it is based on his "four levels of discourse."

    Edit: Although Jacobs focuses on issues that pertain to morality or practical reason, his approach is applicable to theoretical reason as well (and issues of "level one discourse" such as realism and nominalism always implicate theoretical reason rather directly). His advice to move into deeper levels of discourse is salutary. One place where I think he is weak is in recognizing the importance of the shallower levels of discourse in informing and interacting with the deeper levels of discourse. In my opinion what one needs to do is not merely move deeper, but to be continually moving back and forth along the ladder of levels of discourse. Quips or aphorisms, for instance, are examples of the way in which the levels are not altogether separable.
  • How to use AI effectively to do philosophy.
    I think comparing AI to a calculator highlights the limits of AI when using it to “do philosophy”. Calculators do for numbers what AI can do for words. No one wonders if the calculator is a genius at math. But for some reason, we think so low of what people do, we wonder if a fancy word processor might be better at doing philosophy.

    Calculators cannot prompt anything. Neither does AI. Calculators will never know the value we call a “sine” is useful when measuring molecules. Why would we think AI would know that “xyz string of words” is useful for anything either? AI doesn’t “know”, does it?

    So many unaddressed assumptions.
    Fire Ologist

    Yeah, I think that's right. I think a lot of it comes back to this point in my first post:

    For example, if we don't know why we want to engage in human communication rather than non-human communication, then prohibitions based on that axiom will become opaque.Leontiskos

    If we don't know why we want to engage in human-to-human communication, or if we don't know what the relevant difference is between humans and AI, then we will not have the capacity or endurance to withstand the pressures of AI. We need to understand these questions in order to understand how to approach rules, guidelines, and interaction with respect to AI. I don't see how it could be off topic to discuss the very heart of the forum's AI-policy, namely the valuing of human interaction (and the definition of human interaction). If the tenet, "We want human interaction," becomes nothing more than an ungrounded dogma, then it will dry up and drift away.

    Part of the difficulty with respect to "calculators" is that human life has been mechanized to a large extent, such that much of what goes on in human labor is merely a matter of calculation, accounting, procedure, etc. In that context LLMs can appear human, since they are able to do the things that we are often occupied with.
  • How to use AI effectively to do philosophy.
    I'm starting to think the problem is so obvious that it will inevitably sort itself out once one reaps the fruits of a rule that allows this sort of thing. For example, once folks start merely citing AI output to disagree with all of Jamal's arguments, it may become more obvious that there is a problem at stake.Leontiskos

    This scenario can be set up rather easily. First we just take a long, effortful post, say one from @Jamal. Then we continue:

    • Member: **Ask LLM to provide an extensive and tightly-argued response for why @Jamal’s post is incorrect**
    • Member: “Jamal, I think this provides a thoughtful explanation of why you are wrong: <Insert transparently sourced LLM output>”
    • Jamal: “That’s an interesting and complicated response, but there are no sources.”
    • -- At this point Member could either ask the LLM to whip up some sources, or ask it to provide an extensive and tightly-reasoned argument for why sources are not necessary in this case. Let’s suppose Member takes the latter route --
    • Member: “This is why I think sources are not necessary in this case: <Insert transparently sourced LLM output>”

    Note that regardless of how Jamal responds, if he gives a reason (such as lack of sources, unreliability of LLMs, improper prompting, etc.) Member can simply plug that reason into the LLM and have a response to the reason. The only real option to end this is to object to the methodology itself, either in a private way or a public way (i.e. either by creating a personal rule not to engage Member’s approach, or by creating a forum-wide rule against Member’s approach). The private approach will leave the forum in a laissez-faire state vis-a-vis Member’s method, and will therefore lead to members who carry on LLM-authoritative conversations among themselves, even within Jamal’s thread. They will respond to Member with yet more LLM-generated content. Member’s approach is one that is already creeping into the forum. @Banno relies on it with some regularity, and there are examples even within this thread. I could literally write a bot to do what Member does.
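    To make that last claim concrete, here is a minimal sketch of what such a bot might look like. It is purely illustrative and hypothetical: `call_llm` and `member_bot` are stand-in names of my own, the prompts are placeholders, and any real chat-completion API could be substituted for the stub. The point is only that the entire "exchange" reduces to piping the interlocutor's own words back into the model.

    ```python
    # Illustrative sketch only: a hypothetical "Member bot" of the kind
    # described above. `call_llm` is a stand-in for whatever chat-completion
    # API one happens to use; here it returns a canned placeholder so the
    # sketch is self-contained and runnable.

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for a real LLM call."""
        return f"[LLM-generated argument responding to: {prompt[:60]}...]"


    def member_bot(original_post: str) -> None:
        # Step 1: have the LLM produce a tightly-argued rebuttal of the post.
        print(call_llm(
            "Provide an extensive and tightly-argued response for why the "
            "following post is incorrect:\n\n" + original_post
        ))

        # Step 2: whatever reason the human gives for rejecting the rebuttal
        # (lack of sources, LLM unreliability, improper prompting, ...) is
        # itself plugged straight back into the LLM, indefinitely.
        while True:
            objection = input("Interlocutor's objection (blank to stop): ").strip()
            if not objection:
                break
            print(call_llm(
                "Provide an extensive and tightly-reasoned argument for why "
                "the following objection does not hold:\n\n" + objection
            ))


    if __name__ == "__main__":
        member_bot("<a long, effortful post goes here>")
    ```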

    Again, the problem here is the outsourcing of one’s thinking. By engaging, Jamal would end up arguing with an LLM rather than a human, and in truth he would be arguing with an LLM which is being prompted by a human who opposes Jamal’s point of view. Jamal will lose such an engagement simply in virtue of the relative scarcity of his own resources. This is because an LLM is not so much a source as an argument slave. Argument slaves can be used for good or ill, but they don’t have any central place in a philosophical context where humans are supposed to be interacting with one another, instead of interacting with one another’s slaves.
  • How to use AI effectively to do philosophy.
    But not giving the LLM credit is a problem for the reader as well, because LLMs can include errors, so the reader who doesn’t know they are reading LLM content won’t know they need to check everything about it for accuracy and soundness.Fire Ologist

    Yes, that's true, and I definitely agree that one should not plagiarize LLM content, passing it off as their own.

    I suppose the question is whether one who knows not to outsource their thinking will be susceptible to plagiarism, and it seems that they would not. This is because plagiarism is one form of outsourcing thinking among many others. So to oppose the outsourcing of thinking automatically opposes plagiarism, even though there may be additional reasons why plagiarism is problematic.

    AI for philosophy and creative writing is interesting. I’m fine with the idea as a helper, like using a calculator to check your homework, or using it to inspire a start, or to re-approach a roadblock. I think anyone who treats it as anything besides a tool that for students, is using it to play a psychological game, for no reason.Fire Ologist

    Well, my guess is that people use it as a shortcut to knowledge. They think that knowledge is the end and that the LLM is a surefire means. The controversial premises for such a position are, first, that knowledge is a piling-up of facts or propositions, and, second, that LLMs are reliable deliverers of such propositions. The implicit idea is that forums like TPF are for the purpose of showing off piled-up knowledge, and that one must therefore use the LLM to improve one's lot on TPF.

    In a market sense, what will inevitably happen is that as LLMs drive down the scarcity of knowledge, knowledge itself will become passé in a very curious way. Forms of quintessentially human activity that remain scarce will then be elevated, including religious and mystical venues. This has already been occurring since the advent of recent technologies such as the internet, but the phenomenon will continue to grow.
  • On how to learn philosophy


    I think this is good advice:

    I'll tell you my secret. Start by finding some question you really want answered. Then start reading around that. Make notes every time some fact or thought strikes you as somehow feeling key to the question you have in mind, you are just not quite sure how. Then as you start to accumulate a decent collection of these snippets – stumbled across all most randomly as you sample widely – begin to sort the collection into its emerging patterns.apokrisis

    The mind engages most deeply what it is interested in, so it is best to begin with what you are already interested in. It is there where you will be able to be attentive to your own thinking and to the different views on offer, and to effortlessly exert the energy required to grow philosophically.

    Similarly, when you encounter a point of view that strikes you as nonsensical, just move on. Be honest with yourself, and don't contort yourself to try to make yourself see something that you do not see. Move on to contrasting views that have intelligibility, and can be assessed with earnestness and genuine curiosity. Only later on should you move to try to examine nonsense.
  • Banning AI Altogether
    I don't mind either, provided they are transparent about it being a quote and not their own words, and also provided what is quoted is actually an argument and not merely bare assertion, seeimngly cited as the voice of authority.Janus

    A huge aspect of this is the nature of appeals to authority, and given that TPF has an anti-religious bent, many of the members have not thought very deeply on the nature of appeals to authority (despite the fact that they occur quite often when it comes to SEP, IEP, Wittgenstein, etc.).

    Whether or not the LLM is a legitimate authority and is trustworthy is at the root of many of these differences. It is the question of whether any given LLM-citation is an organic argument or an argument from authority, and also of whether the latter case is illegitimate.
  • Banning AI Altogether
    Should we argue...Joshs

    What was being argued was that the research required to put together an idea is tedious and outsourceable, and that what one should do is outsource that research, take the pre-made idea from the LLM-assistant, and "get on with the task of developing the idea to see if it works." Maybe try responding to that?
  • How to use AI effectively to do philosophy.
    According to who?Fire Ologist

    The Puppeteer, of course.
  • Banning AI Altogether
    OK. So somewhere between black and white, thus not a blanket ban. :up:apokrisis

    Don't you feel that being black and white is usually counter-productive in human interaction, so shouldn't a philosophy forum be up to a nuanced approach?apokrisis

    To be clear, my approach would be pretty simple. It is not concerned with plagiarism, but with the outsourcing of one's thinking, and it is not implemented primarily by a rule, but by a philosophical culture to which rules also contribute. The rule itself would be simple, such as this:

    "No part of a post may be AI-written, and AI references are not permitted"Leontiskos

    I've argued elsewhere that it doesn't really matter whether there is a reliable detection-mechanism (and this is why I see the approach as somewhat nuanced). The rule supports and reflects a philosophical culture and spirit that will shape the community.

    But I don't begrudge anything about @Baden's approach. I actually hope it works better than what I would do. And our means are not at odds. They are just a bit different.

    Of course the tech bros are stealing all our information to make themselves unreasonably rich and powerful.apokrisis

    My concern is quality philosophical dialogue, not plagiarism. I think a focus on sources rather than intermediaries improves philosophical dialogue, and that's the point. Analogously, a focus on primary rather than secondary sources also improves philosophical dialogue, independent of whether the primary sources are receiving insufficient royalties.

    The practical issue for TPF is what is its true value that needs preserving? You say the human interaction. Perhaps there ought to be a thread to define that in better detail.apokrisis

    Yes, I agree.

    What if LLMs offered some more sophisticate mechanisms to achieve whatever human interaction goals people might have in mind?apokrisis

    To put it concisely, I think philosophical dialogue is about thinking our own thoughts and thinking our (human) interlocutor's thoughts, and that this is especially true in a place like TPF. LLMs are about providing you with pre-thought thoughts, so that you don't have to do the thinking, or the research, or the contemplation, etc. So there is an intrinsic incompatibility in that sense. But as a souped-up search engine LLMs can help us in this task, and perhaps in other senses as well. I just don't think appealing to an LLM qua LLM in the context of philosophical dialogue is helpful to that task.

    And what if the human element of TPF is mostly its swirling emotions? And when it comes to the thinking, its mostly the stark differences in thought, rather than the quality of these thoughts, that keep the place lively.apokrisis

    I think that's all true, but I think what I said still holds.

    Maybe you are implying that LLM-appeals would improve the philosophical quality of TPF? Surely LLMs can improve one's own philosophy, but that's different from TPF on my view. I can go lift dumbbells in the gym to train, but I don't bring the dumbbells to the field on game day. One comes to TPF to interact with humans.

    So there seems little danger that posting LLM generated background material in a serious thread is going to outsource any actual thinking. Posts which are emotional or crackpot are surely the least likely to want to provide credible sources for what they say.apokrisis

    If someone sees a crackpot post; goes to their LLM and asks it to find a source demonstrating that the post is crackpot; reads, understands, and agrees with the source; and then presents that source along with the relevant arguments to show that the post is crackpot; then I think that's within the boundary. And I have no truck with the view which says that one must acknowledge their use of the LLM as an intermediary. But note that, on my view, what is prohibited is, "My LLM said you are wrong, therefore you are wrong. Oh, and here's a link to the LLM output."

    But I am not a mod so there is no need to focus especially on my view. If I've said too much about it, it is only because you thought I endorsed @Baden's approach tout court.
  • Banning AI Altogether
    I agree in spirit. But let's be practical.

    A blanket ban on LLM generated OPs and entire posts is a no brainer.
    apokrisis

    Okay, we agree on this.

    I think that it should be fine to quote LLMs just as you would quote any other source. If you make an argument and some factual, technical or historical point comes up, why not just cite a reasonably impersonal opinion on the matter. If the source is clear, others can call you out on your use of it.apokrisis

    I tried to argue against appeal-to-LLM arguments in two recent posts, here and here.

    In general I would argue that LLMs are a special kind of source, and cannot be treated just like any other source is treated. But a large part of my argument is found here, where the idea is that an LLM is a mediatory and private source. One may use an LLM, but the relevant sourcing should go to the LLM's sources, not the LLM itself, and if one is not familiar with the LLM's sources then one shouldn't be taking a stand with regard to arguments based on those sources.

    Baden's stance on being King Canute and holding back the tide is both noble and over the top. Banning OPs, punishing those who publish text they didn't add real thought to, and keeping LLM use as transparent as possible, would be enough to preserve the human element.apokrisis

    Possibly, but I care less about transparency and more about not promoting a forum where thinking is outsourced to LLMs. I see plagiarism as a small matter compared to the outsourcing of one's thinking.

    Don't you feel that being black and white is usually counter-productive in human interaction, so shouldn't a philosophy forum be up to a nuanced approach?apokrisis

    Rules must be black and white to a large extent. I would argue that your approach is less nuanced than mine, and this is because you want something that is easier to implement and less unwieldy. The key is to find a guideline that is efficacious without being nuanced to the point of nullity.

    I appreciate your input. I have to get back to that other thread on liberalism.
  • How to use AI effectively to do philosophy.
    So we are allowed to write entire posts that are purely AI-generated content, or to simply cite AI as evidence that something is true, so long as we are transparent that the content is AI-generated? Such that if someone gives an entire post that is nothing more than a quote from AI, nothing has been violated?Leontiskos

    Another aspect of this is scarcity. LLM content is not scarce in the way human content is. I can generate a thousand pages of LLM "philosophy" in a few minutes. So someone who spends considerable time and energy on an OP or a post can be met with another member's, "This LLM output says you're wrong," which was generated lazily in a matter of seconds.

    Forums already have a huge struggle with eristic, showboating, and falsification-for-the-sake-of-falsification. Give such posters free access to a tool that allows them to justify their disagreement at length at the snap of a finger, and guess what happens?

    I'm starting to think the problem is so obvious that it will inevitably sort itself out once one reaps the fruits of a rule that allows this sort of thing. For example, once folks start merely citing AI output to disagree with all of @Jamal's arguments, it may become more obvious that there is a problem at stake.

    (@Baden, @Jamal)