Comments

  • Banning AI Altogether
    Ought one reject an otherwise excellent OP because it is AI generated?Banno

    Regarding the nature of a contextless AI utterance:

    The LLM is not a transparent source which can be queried by one's interlocutor... Put more starkly, one cannot question the authority of an LLM on some given topic with some given output, because the LLM doesn't really exist in that way. It is not a person that one can push back against, or disagree with. Those who think it is a person who can be disagreed with have become lost in the fictional language games.

    Indeed, this is why the person using the LLM ought to take the responsibility of the arguments upon themselves, for only in that case is there someone who can be disagreed with or agreed with.
    Leontiskos

    If there is no arguer, then there is no one to argue with. If we found a random piece of anonymous philosophy we would be able to interact with it in only very limited ways. If it washes up on the beach in a bottle, I wouldn't read it, place my objections in the bottle, and send it back out to sea. That's one of the basic reasons why AI OPs make no sense. It would make as much sense to respond to an AI OP as to send my objections back out to sea. One has no more recourse with respect to an AI OP than one does with respect to a message in a bottle.

    The whole thing comes down to the fact that there is some human being who is arguing a point via an LLM, whether or not they do it transparently. The problem is not aesthetic. The problem is that it is a metaphysical impossibility to argue with an LLM. The reason TPF is not a place where you argue with LLMs is because there are no places where you argue with LLMs. When someone gets in an argument with an LLM they have become caught up in a fictional reality. What is occurring is not an actual argument.

    The closest parallel is where someone on TPF writes an OP and then gets banned before even a single reply is published. What to do with that thread is an interesting question. The mods could close it down or keep it open, but if it is kept open it will be approached as a kind of artifact; a piece of impersonal, contextless, perspectiveless reasoning, offering no recourse to the one who finds it. But this is still only a mild parallel, given that the argument was produced by a real arguer, which is never the case with the AI OP. Or in other words: an AI OP could never even exist in the strict sense. The closest possibility is some human who is using their LLM argument slave to say something they want said. In that case the response is made to the one pulling the strings of the argument slave, not to their puppet.

    (Note that a rule against using an AI without attribution precludes the possibility that one is misdirecting their replies to the puppet instead of the puppeteer, and that is a good start.)
  • Banning AI Altogether
    Ought one reject an otherwise excellent OP because it is AI generated?

    Well, yes. Yet we should be clear as to why we take this stance.
    Banno

    Right, and therefore we must ask the question:

    We hold the author to account for their post. We cannot do that with an AI. We interpret the post by giving consideration to the intentional context of the writer, to their needs and wants, to their background, aims and norms. We cannot attribute such things to an AI.

    This is not epistemic or ethical reasoning so much as aesthetic.
    Banno

    Why is it aesthetic, and how does calling it 'aesthetic' provide us with an answer to the question of "why we take this stance"?
  • How to use AI effectively to do philosophy.
    I'd rather say that they have both the doxastic and conative components but are mostly lacking on the side of conative autonomy. As a result, their intelligence, viewed as a capacity to navigate the space of reasons, splits at the seam between cleverness and wisdom. In Aristotelian terms, they have phronesis (to some extent), since they often know what's the right thing to do in this or that particular context, without displaying virtue since they don't have an independent motivation to do it (or convince their users that they should do it). This disconnect doesn't normally happen in the case of human beings since phronesis (the epistemic ability) and virtue (the motivational structure) grow and maintain themselves (and are socially scaffolded) interdependently.Pierre-Normand

    The reason I would disagree at a fairly fundamental level is that, in effect, they have no bodies. They are not doing anything. "Navigating the space of reasons," while at the same time not using those reasons to do anything, and not preferring any one reason or kind of reason to other kinds of reasons, is a very abstract notion. It is so abstract that I am not even sure I would want to call the space being navigated one of reasons. I would want more scare quotes, this time around "reasons."

    But with that said, once things like Elon's Optimus robot are complete this argument will no longer hold good. At that point they will do things (beyond manipulating word-signs). So that will be interesting. At that point a quasi-phronesis becomes more tangible, and draws nearer to human practical reason.

    Those are questions that I spend much time exploring rather than postponing even though I haven't arrived at definitive answers, obviously. But one thing I've concluded is that rather than it being a matter of all or nothing, or a matter of degree along a linear scale, the ascription of mental states or human capabilities to LLM-based chatbots often is rendered problematic by the divergence of our ordinary criteria of application. Criteria that normally are satisfied together in the case of human beings are satisfied separately in the case of chatbots.Pierre-Normand

    Okay, fair enough. I suppose I would be interested in more of those examples. I am also generally interested in deductive arguments rather than inductive arguments. For example, what can we deduce from the code, as opposed to inducing things from the end product as if we were encountering a wild beast in the jungle? It seems to me that the deductive route would be much more promising in avoiding mistakes.

    Maybe it looks confusing because it is. I mean that assessing the nature of our "conversations" with chatbots is confusing, not because of a conceptual muddle that my use of scare quotes merely papers over...Pierre-Normand

    Has anyone tried to address the conceptual muddle? Has anyone tried to do away with the never-ending scare quotes?

    In the Middle Ages you had theologians claiming that speech about God is always analogical, and never univocal. Other theologians argued that if speech about some thing is always non-univocal (i.e. equivocal in a broad sense), then you're involved in speaking nonsense. That was seen as a very strong objection in the theological landscape, and it is curious to me that what is effectively the exact same objection seems to go unnoticed in the AI landscape. Does anyone try to replace the scare quotes with a token and then attempt a rigorous definition of that token, so that we know what we are actually talking about with the words we are using?

    ...but rather because chatbots are mongrels. They have "brains" that have been enculturated through exposure to a massive body* of human knowledge, lore and wisdom (and prejudices) but they don't have human bodies, lack human motivations and aren't persons.Pierre-Normand

    Can't we define them deductively? Don't the programmers know what their code does, in a fundamental manner?

    LLMs aren't AIs that we build...Pierre-Normand

    This is probably one of the central premises of your approach. You are basically saying that LLMs are organisms and not artifacts (to use the Aristotelian language). My inclination is to say that they are complex artifacts, which we have indeed built.
  • Banning AI Altogether
    First thing is that I have been surprised at how reasonable an answer you get.apokrisis

    I agree, depending on the context. In more specialized areas they simply repeat the common misconceptions.

    So I am only pointing to how LLMs could improve standards if they became part of the intellectual jousting. A new habit with a low cost of effort. How folk would actually make use of this possibility is another matter.apokrisis

    Yeah, that's fair. It could improve standards in that way. At the same time, others have pointed out how it will also magnify blind spots and social fallacies. I would definitely be interested in a study looking at the characteristic reliabilities and unreliabilities of LLM technology, or more generally of the underlying methodological philosophy.

    Now given that you understand that LLMs use fake reasoning (which I will call "material reasoning"), you might respond by asking what happens if the material reasoning shows one to be wrong.Leontiskos

    I wasn’t actually thinking about LLMs being trusted to analyse arguments as they are clearly bad at that.apokrisis

    Me neither. I was assuming we agree that all LLM output is fake reasoning.

    Again my point is that LLMs could have advantages if used in good faith. And given think tanks and actual philosophy departments are going to have to figure out how to factor LLMs into their practice, it seems Luddite not to be doing the same on TPF, even if their bad faith use is almost to be expected.apokrisis

    When deciding whether to adopt some technology within some institution, I would want to look at the advantages and disadvantages of adopting that technology in relation to the nature of the institution. So while I agree that they could have advantages if used properly, I think more is needed to justify widespread adoption in a context such as TPF.

    I would sort of prefer a philosophy forum where everyone is involved in a lot of falsehood but is nevertheless involved in genuine human reasoning, as opposed to a philosophy forum where there is a great deal of factual accuracy but there isn't much genuine human reasoning occurring. I think we would probably have to hash out our agreements or disagreements on the telos of the forum. I don't mind so much when a nutty poster writes an immaculately valid and rigorous argument from crackpot premises, because the thread is an open field for rational engagement. But if LLMs did not lead to the degradation of rational argument and to the outsourcing of thinking, then there would be no problem.
  • Banning AI Altogether
    Arguments from authority have an inherently limited place in philosophy.

    ...

    * An important point that I leave to the side is the fact that an LLM is not an authority in the same way a human being is an authority
    Leontiskos

    I want to add that in philosophy appeals to authority require transparency. So if I appeal to Locke as an authority, a crucial part of the appeal is that Locke's reasoning and argumentation are available to my interlocutor (and this is why appealing to publicly available texts as sources is ideal).

    This is what can never happen with LLMs: "Locke says you are wrong, and Locke is reliable. Feel free to go grab his treatise and have a look."* This is because the LLM is an intermediary; it is itself a giant argument from authority. It is just drawing on various sources and presenting their fundamental data. That's why I've said that one should go to the LLM's sources, rather than appeal to the LLM itself as an authority. The LLM is not a transparent source which can be queried by one's interlocutor, especially insofar as it represents a temporal, conditioned instance of the underlying software. Put more starkly, one cannot question the authority of an LLM on some given topic with some given output, because the LLM doesn't really exist in that way. It is not a person that one can push back against, or disagree with. Those who think it is a person who can be disagreed with have become lost in the fictional language games.

    Indeed, this is why the person using the LLM ought to take the responsibility of the arguments upon themselves, for only in that case is there someone who can be disagreed with or agreed with. If, in the context of a philosophy forum, they merely say, "I believe it because the AI said so," then all public responsibility for the belief has been abdicated. It is only ratified in virtue of the person's private authority, and therefore has no place on a public philosophy forum.


    * To be clear, it can never happen because LLMs do not write treatises, and they are not persons with subsisting existence.
  • Ich-Du v Ich-es in AI interactions
    It's just easier to do it with an AI, there's so much less at stake, it's so safe, and you don't really have to put any skin in the game. So it's highly questionable how effective such practice really is.baker

    Yeah, I agree. Part of the issue here is that although Buber recognizes that one can interact with what is essentially an 'it' in an I-Thou manner, it is nevertheless strained to do so. The whole gravity of the Thou is the infinite depth that it presents. There are stakes, danger, "skin in the game." There is a truly responsive Other. AI is meant to be a tool for human use, and tools for human use are meant to not be Thous.
  • Banning AI Altogether
    Appeal to authority is fine when the context of discussion includes a specialized discipline. Philosophy is not (or in my view should not be) a specialized discipline. To make that clear, I don't mean that philosophers do not have knowledge about philosophy itself that the average person does not—it seems clear that they do. I'm referring specifically to discussions that would qualify as philosophical.Janus

    Okay, that's a fair and thoughtful argument. :up:
    "There are no authoritative generalists," says Janus. Of course I think that first sentence should read "only when," no? You are presumably saying that appeal to authority is illegitimate wherever the context is not a specialized discipline?

    Your implicit argument here is that AI is not an authoritative generalist, and therefore should not be treated as one. I think that implicit argument is even more plausible than the more explicit argument you have given, but it is in no way uncontroversial. LLMs are coming to be seen not only as authoritative generalists, but as the authoritative generalist par excellence. I spoke to the issue a little bit in .

    I suppose in a technical sense my position would be that there are authoritative generalists (e.g. a child's parents), the output of an LLM contains inherent authority even at a general level*—at least in the hands of an intellectually virtuous thinker—and that, nevertheless, LLMs should not be appealed to as authorities in places like TPF. This has to do with the private/public distinction, which would need to be further developed.

    For example, one reason you would not accept an argument from the authority of the Catholic Catechism is because you do not take the Catholic Catechism to be authoritative. If I tried to offer you such an argument, I would be committing a fallacy whereby I offer you a conclusion that is based on a premise that is particular to me, and is not shared by you (i.e. a private premise rather than a publicly-shared premise).

    I think the same thing happens with LLMs, and I think this is one reason (among others) why LLMs are generally inappropriate on a philosophy forum. If we are arguing I would never accept your argument, "It is true because I say so." I think LLMs are basically argument slaves, and so an appeal-to-LLM argument is the same as, "It is true because my argument slave says so." Even someone who trusts ChatGPT will tend to distrust a philosophical opponent's appeal to ChatGPT, and this is by no means irrational. This is because "ChatGPT" is a fiction. It is not a single thing, and therefore an equivocation is occurring between the opponent's instance of ChatGPT and some sort of objective or public instance of ChatGPT. In order to be a shared authority (in which case the argument from LLM-authority would be valid), the philosopher and his opponent would need to interact with the exact same instance of ChatGPT, agreeing on training, prompting, follow-ups, etc., and the a priori condition is that both parties accept ChatGPT as an authority in the first place.

    I don't think that is a realistic possibility on an argumentative philosophy forum. Even if it were possible, arguments from authority are inherently less philosophical than standard arguments, and are therefore less appropriate on a philosophy forum than standard arguments. It would be a bit like two people working together to get a Magic 8-Ball or Ouija Board to give them secret knowledge. Even if the Magic 8-Ball or Ouija Board were 100% accurate, they would still not be doing philosophy. Arguments from authority have an inherently limited place in philosophy. Even someone like Aquinas calls them the weakest form of argument.


    * An important point that I leave to the side is the fact that an LLM is not an authority in the same way a human being is an authority, and this must be taken into account. We ought not treat the authority of the LLM the same way we treat the authority of a human, given their substantial differences. Part of this goes to the fact that an LLM is not rational, is not a whole, is not self-consciously offering knowledge, etc.
  • How to use AI effectively to do philosophy.
    But being intelligent, or wise, in the case of persons (i.e. socialized, enculturated rational animals), always has two tightly integrated components: one doxastic and one conative.

    [...]

    Chatbots lack conative autonomy and hence pursue whichever paths they think their users want to explore (or, worse, that merely lead to the outcomes they think their users want to achieve, while having the mere appearance of soundness.) So, they lack part of what it needs to be wise, but that's not because they aren't smart or knowledgeable enough to be useful conversation partners.
    Pierre-Normand

    So are you saying that chatbots possess the doxastic component of intelligence but not the conative component?

    I think they have motivations, just like a dog is motivated to run after a car, but their motivations aren't autonomous since they seldom pause to question them.Pierre-Normand

    It seems to me that what generally happens is that we require scare quotes. LLMs have "beliefs" and they have "motivations" and they have "intelligence," but by this one does not actually mean that they have such things. The hard conversation about what they really have and do not have is usually postponed indefinitely.

    I think the rational structure of their responses and their reinforced drive to provide accurate responses warrant ascribing beliefs to them, although those beliefs are brittle and non-resilient. One must still take a Dennettian intentional stance towards them to make sense of their response (which necessitates ascribing them both doxastic and conative states), or interpret their responses through Davidson's constitutive ideal of rationality. But I think your insight that they aren't thereby making moves in our language game is sound. The reason why they aren't is because they aren't persons with personal and social commitments and duties, and with a personal stake in the game. But they can roleplay as a person making such moves (when instructed to do so) and do so intelligently and knowledgeably.Pierre-Normand

    I would argue that the last bolded sentence nullifies much of what has come before it. "We are required to treat them as persons when we interact with them; they are not persons; they can roleplay as a person..." This is how most of the argumentation looks in general, and it looks to be very confusing.

    Keeping to that bolded sentence, what does it mean to claim, "They can roleplay as a person..."? What is the 'they' that 'roleplays' as a person? Doesn't roleplaying require the very things that have been denied to chatbots? It seems to me that we want to skip over the fact that the pronoun you use throughout ("they") is a personal pronoun. I don't really understand how these meaning-equivocations are papered over so nonchalantly:

    • I will use sentences which say that the chatbot has beliefs, but the chatbot doesn't really have beliefs.
    • I will use sentences which say that the chatbot has motivations, but the chatbot doesn't really have motivations.
    • I will use sentences which say that the chatbot has intelligence, but the chatbot doesn't really have intelligence.
    • I will use sentences which say that the chatbot can roleplay, but the chatbot can't really roleplay.
    • I will use sentences which say that the chatbot is a person, but the chatbot isn't really a person.
    • I will use sentences which say that the chatbot is a 'they', but the chatbot isn't really a 'they'.

    This looks like an endless sea of equivocal terms. It looks like we are pretending that we know what we are talking about, when we almost certainly do not. What does it mean when someone's words all do not mean what the words usually mean? What does it mean to "pretend" if we do not know where the reality begins and where the pretense stops? Put bluntly, it seems that what is at stake here is performative contradiction if not lying, and yet this is always brushed off as a kind of unimportant quibble.

    Usually if someone is to successfully "Use X to do Y," they must know what X and Y are. In the case of the title of the thread, the problem is not only that we do not really know what philosophy is (any more), but that we surely do not know what AI is. I'm not sure how long this can be swept under the rug. Who or what is holding the leash that is pulling us along in this odd endeavor we call 'AI'?
  • Banning AI Altogether
    On the other hand, on the specific area of plagiarism, I've seen at my own place of work what a "relaxed" attitude to the deployment of this technology in academia produces, and that is the destruction of standards.Baden

    Regarding plagiarism, I think it's worth trying to understand the most obvious ways in which the problem deviates from a problem of plagiarism. First, plagiarism is traditionally seen as an unjust transgression against the original author, who is not being justly recognized and compensated for their work. On that reading, an aversion to plagiarism is a concern for the rights of the LLM. Second, plagiarism is seen (by teachers) as hamstringing the student's potential, given that the student is not doing the work that they ought to be doing in order to become an excellent philosopher/writer/thinker. On that reading, an aversion to plagiarism is a concern for the philosophical development of TPF members.

    But I think the real things that you are concerned with are actually 1) the plight of the reader who does not understand that they are interacting with an LLM rather than a human; and 2) the unhealthy forum culture that widespread use of LLMs would create. Those concerns are not the primary things that "plagiarism" connotes. Sometimes I worry that by talking about plagiarism we are obscuring the real issues, though I realize that you may have simply given the plagiarism in your workplace as a parallel example.

    ---

    When is the day when we find out that @Leontiskos with his respectable 5 000+ posts is actually smart bot / AI? Now we can be confident you aren't, because two years ago AI wasn't so good...

    Yes, the fear of thinking that you are engaged with real people interested in philosophy, but actually, you're only engaging with computers and all your great ideas vanish into the dead emptiness of Turing machines just computing on and on. That would be a dismal future.
    ssu

    I agree, but my point is a bit different. Suppose all my posts are LLM-generated content, and this is undisclosed. This is against the forum rules as they currently stand. But now suppose that all my posts are LLM-generated content, and this is disclosed. Thus for every one of my LLM-generated posts, I enclose it in quote brackets and prepend the clause, "I agree with what the LLM says here:..." This is not against the forum rules as they are currently being interpreted. That seems odd to me, and it makes me think that the mere matter of disclosure doesn't get to the heart of the issue.
  • Banning AI Altogether
    Do whatever you want in the background with AI, but write your own content. Don't post AI generated stuff here.Baden

    Makes sense to me. :up:

    Obviously the piece that I think must be addressed is whether or not posts can be entirely AI-dependent even when the proper attribution is being given to the AI. But I've said more than enough about such an issue.
  • Banning AI Altogether
    The culture of rational inquiry would seem to be what we most would value.apokrisis

    Yes, that is a good way to phrase it in a positive rather than negative sense.

    But this is TPF after all. Let's not get carried away about its existing standards. :smile:apokrisis

    A fair point! :blush:

    If LLMs are the homogenised version of what everyone tends to say, then why aren't they a legitimate voice in any fractured debate? Like the way sport is now refereed by automated line calls and slo-mo replays.apokrisis

    I don't like the referee analogy, but I understand the force of your first sentence. The reason I use LLMs in limited ways is precisely because of what you say there (and also because they provide me with a helpful pseudo-authority in fields with which I am not familiar, such as medicine).

    But the reason they aren't generally admitted in a fractured debate is, first, that the fractured-ness of the debate will not be solved by the LLM if it is a serious debate (in serious debates each side can levy the LLM to its own side, with its own prompts); second, that the LLM is simply not adequate to give us the truth of the matter when it comes to contentious topics; and third, that in those fractured debates where one party is self-consciously representing an unpopular view, it would not be intelligent for them to concede their case based on "the homogenised version of what everyone tends to say."

    I'm not arguing this is necessary. But why would a method of adjudication be bad for the quality of the philosophy rather than just be personally annoying to whoever falls on the wrong side of some LLM call?apokrisis

    You and I differ at least mildly on the trustworthiness of LLMs, and that is at play here. We could ask the hypothetical question, "If we had an infallible authority, why would appealing to it as an adjudicator be bad for the quality of philosophy?"—and this is by no means a rhetorical question! But the presupposition is that LLMs are reliable or trustworthy even if not infallible.

    Or in other words, the validity of a method of adjudication turns both on the quality of the adjudicator, and the "margin of error" at stake, and these are both interrelated. I was actually happy to see you pointing up the differences between the fake reasoning of LLMs and the true reasoning of humans in the other thread, given that some pragmatists could run roughshod over that difference. Still, I think the pragmatist's "margin of error" is such that it is more open to LLM adjudication.

    So I can imagine LLMs both upping the bar and also being not at all the kind of thing folk would want to see on TPF for other human interaction reasons.apokrisis

    Right, and I suppose it is the very fact that, "this is TPF after all," which makes me wary of LLM use. If the forum were a bastion of deeply principled, intellectually honest and self-critical philosophers, then widespread LLM use would not pose a danger.

    But what if this shows you are indeed wrong, what then?

    Sure it will be irritating. But also preferable to the ducking and diving that is the norm when someone is at a loss with their own line of argument.

    You seem to be describing a situation where you were winning the human interaction but now have to face up to the fact that some little snot-nose shit might have been half-right all along.
    apokrisis

    No, not quite. When people ask me a question like that I imagine myself quoting the Bible to them before they object to my argument from authority, and then I respond by saying, "But what if the Bible shows you are indeed wrong, what then?"

    I could try to put it succinctly by saying that the legitimate way to show someone that they are wrong is by presenting an organic argument. It is not by saying, "X says you are wrong; X is very smart; therefore you ought to know that you are wrong." That is a valid approach (argument from authority) in those cases where the interlocutor simply accepts the authority, but even in that case the validity is not the ideal form of validity.

    Now given that you understand that LLMs use fake reasoning (which I will call "material reasoning"), you might respond by asking what happens if the material reasoning shows one to be wrong. More precisely, what happens if the person translates the LLM's material reasoning into true formal reasoning, and thereby sees that they are wrong? I don't want to try to broach this topic all at once, but it strikes me as a bit like saying, "What if a million monkeys typing random letters produce a bulletproof argument against your thesis?" The analogy is a stretch in some ways, but in other ways it is not. There is no obvious answer to the question. One seems to be neither right nor wrong to either accept or reject the monkey-argument. They can do as they please, but the monkey-argument doesn't have any special binding force.

    But we are getting away from political questions of whether AI should be permitted for practical reasons, and we are now moving into much deeper questions. Even if we say that the monkey-argument should convince us, it would not follow that posting monkey-stuff to the forum is an acceptable practice.

    You seem to be describing a situation where you were winning the human interaction but now have to face up to the fact that some little snot-nose shit might have been half-right all along.

    Of course the problem there is that LLMs are trained to be sycophantic.
    apokrisis

    And this is no small problem!

    But if you are making a wrong argument, wouldn't you rather know that this is so. Even if it is an LLM that finds the holes?apokrisis

    I am required to trust the LLM or the monkeys in order to even begin to consider their "argument," or in this case to translate the material reasoning into formal reasoning. The level of trust the authority is due determines whether I would wish to learn that my thesis is false on the basis of that authority. Everyone would rather believe true things than false things, and every authority would lead you to correct some false beliefs if it were accepted, but it does not follow that one should accept every authority. Again, to deem an authority's locution worth taking the time to consider is already to have placed a certain amount of trust in that authority. The substantive question here is the reliability/trustworthiness of LLMs, and that is a giant quagmire.

    So as you say, we all can understand the noble ideal – an open contest of ideas within a community of rational inquiry. Doing our own thinking really is the point.apokrisis

    Ah! But here you've introduced a different ideal, and a common one. It is the telos of communal knowledge generated from an open contest of ideas, which Mill advocates. That telos is much more amenable to LLMs than the telos of a culture of rational inquiry. A thinktank should be more open to LLMs than an amateur philosophy forum.
  • On how to learn philosophy
    my goal is to 'hack myself to pieces and put myself back together again.'KantRemember

    Someone I've recently stumbled upon who addresses this in detail and in an accessible way is Nathan Jacobs. For example, "The most important question," or "What to do with moral truth?"

    Especially in that latter video he talks about what he believes to be the best way to reshape yourself rationally, and it is based on his "four levels of discourse."
  • How to use AI effectively to do philosophy.
    I think comparing AI to a calculator highlights the limits of AI when using it to “do philosophy”. Calculators do for numbers what AI can do for words. No one wonders if the calculator is a genius at math. But for some reason, we think so low of what people do, we wonder if a fancy word processor might be better at doing philosophy.

    Calculators cannot prompt anything. Neither does AI. Calculators will never know the value we call a “sine” is useful when measuring molecules. Why would we think AI would know that “xyz string of words” is useful for anything either? AI doesn’t “know”, does it?

    So many unaddressed assumptions.
    Fire Ologist

    Yeah, I think that's right. I think a lot of it comes back to this point in my first post:

    For example, if we don't know why we want to engage in human communication rather than non-human communication, then prohibitions based on that axiom will become opaque.Leontiskos

    If we don't know why we want to engage in human-to-human communication, or if we don't know what the relevant difference is between humans and AI, then we will not have the capacity or endurance to withstand the pressures of AI. We need to understand these questions in order to understand how to approach rules, guidelines, and interaction with respect to AI. I don't see how it could be off topic to discuss the very heart of the forum's AI-policy, namely the valuing of human interaction (and the definition of human interaction). If the tenet, "We want human interaction," becomes nothing more than an ungrounded dogma, then it will dry up and drift away.

    Part of the difficulty with respect to "calculators" is that human life has been mechanized to a large extent, such that much of what goes on in human labor is merely a matter of calculation, accounting, procedure, etc. In that context LLMs can appear human, since they are able to do the things that we are often occupied with.
  • How to use AI effectively to do philosophy.
    I'm starting to think the problem is so obvious that it will inevitably sort itself out once one reaps the fruits of a rule that allows this sort of thing. For example, once folks start merely citing AI output to disagree with all of Jamal's arguments, it may become more obvious that there is a problem at stake.Leontiskos

    This scenario can be set up rather easily. First we just take a long, effortful post from or . Then we continue:

    • Member: **Ask LLM to provide an extensive and tightly-argued response for why @Jamal’s post is incorrect**
    • Member: “Jamal, I think this provides a thoughtful explanation of why you are wrong: <Insert transparently sourced LLM output>”
    • Jamal: “That’s an interesting and complicated response, but there are no sources.”
    • -- At this point Member could either ask the LLM to whip up some sources, or ask it to provide an extensive and tightly-reasoned argument for why sources are not necessary in this case. Let’s suppose Member takes the latter route --
    • Member: “This is why I think sources are not necessary in this case: <Insert transparently sourced LLM output>”

    Note that regardless of how Jamal responds, if he gives a reason (such as lack of sources, unreliability of LLMs, improper prompting, etc.) Member can simply plug that reason into the LLM and have a response to the reason. The only real option to end this is to object to the methodology itself, either in a private way or a public way (i.e. either by creating a personal rule not to engage Member’s approach, or by creating a forum-wide rule against Member’s approach). The private approach will leave the forum in a laissez-faire state vis-a-vis Member’s method, and will therefore lead to members who carry on LLM-authoritative conversations among themselves, even within Jamal’s thread. They will respond to Member with yet more LLM-generated content. Member’s approach is one that is already creeping into the forum. @Banno relies on it with some regularity, and there are examples even within this thread. I could literally write a bot to do what Member does.

    Again, the problem here is the outsourcing of one’s thinking. By engaging, Jamal would end up arguing with an LLM rather than a human, and in truth he would be arguing with an LLM which is being prompted by a human who opposes Jamal’s point of view. Jamal will lose such an engagement simply in virtue of the relative scarcity of his own resources. This is because an LLM is not so much a source as an argument slave. Argument slaves can be used for good or ill, but they don’t have any central place in a philosophical context where humans are supposed to be interacting with one another, instead of interacting with one another’s slaves.
  • How to use AI effectively to do philosophy.
    But not giving the LLM credit is a problem for the reader as well, because LLMs can include errors, so the reader who doesn’t know they are reading LLM content won’t know they need to check everything about it for accuracy and soundness.Fire Ologist

    Yes, that's true, and I definitely agree that one should not plagiarize LLM content, passing it off as their own.

    I suppose the question is whether one who knows not to outsource their thinking will be susceptible to plagiarism, and it seems that they would not. This is because plagiarism is one form of outsourcing thinking among many others. So to oppose the outsourcing of thinking automatically opposes plagiarism, even though there may be additional reasons why plagiarism is problematic.

    AI for philosophy and creative writing is interesting. I’m fine with the idea as a helper, like using a calculator to check your homework, or using it to inspire a start, or to re-approach a roadblock. I think anyone who treats it as anything besides a tool that for students, is using it to play a psychological game, for no reason.Fire Ologist

    Well, my guess is that people use it as a shortcut to knowledge. They think that knowledge is the end and that the LLM is a surefire means. The controversial premises for such a position are, first, that knowledge is a piling-up of facts or propositions, and second, that LLMs are reliable deliverers of such propositions. The implicit idea is that forums like TPF are for the purpose of showing off piled-up knowledge, and that one must therefore use the LLM to improve their lot on TPF.

    In a market sense, what will inevitably happen is that as LLMs drive down the scarcity of knowledge, knowledge itself will become passé in a very curious way. Forms of quintessentially human activity that remain scarce will then be elevated, including religious and mystical venues. This has already been occurring since the advent of recent technologies, such as the internet, but the phenomenon will continue to grow.
  • On how to learn philosophy


    I think this is good advice:

    I'll tell you my secret. Start by finding some question you really want answered. Then start reading around that. Make notes every time some fact or thought strikes you as somehow feeling key to the question you have in mind, you are just not quite sure how. Then as you start to accumulate a decent collection of these snippets – stumbled across almost randomly as you sample widely – begin to sort the collection into its emerging patterns.apokrisis

    The mind engages most deeply what it is interested in, so it is best to begin with what you are already interested in. It is there where you will be able to be attentive to your own thinking and to the different views on offer, and to effortlessly exert the energy required to grow philosophically.

    Similarly, when you encounter a point of view that strikes you as nonsensical, just move on. Be honest with yourself, and don't contort yourself to try to make yourself see something that you do not see. Move on to contrasting views that have intelligibility, and can be assessed with earnestness and genuine curiosity. Only later on should you move to try to examine nonsense.
  • Banning AI Altogether
    I don't mind either, provided they are transparent about it being a quote and not their own words, and also provided what is quoted is actually an argument and not merely bare assertion, seemingly cited as the voice of authority.Janus

    A huge aspect of this is the nature of appeals to authority, and given that TPF has an anti-religious bent, many of the members have not thought very deeply on the nature of appeals to authority (despite the fact that they occur quite often when it comes to SEP, IEP, Wittgenstein, etc.).

    Whether or not the LLM is a legitimate authority and is trustworthy is at the root of many of these differences. It is the question of whether any given LLM-citation is an organic argument or an argument from authority, and also of whether the latter case is illegitimate.
  • Banning AI Altogether
    Should we argue...Joshs

    What was being argued was that the research required to put together an idea is tedious and outsourceable, and that what one should do is outsource that research, take the pre-made idea from the LLM-assistant, and "get on with the task of developing the idea to see if it works." Maybe try responding to that?
  • How to use AI effectively to do philosophy.
    According to who?Fire Ologist

    The Puppeteer, of course.
  • Banning AI Altogether
    OK. So somewhere between black and white, thus not a blanket ban. :up:apokrisis

    Don't you feel that being black and white is usually counter-productive in human interaction, so shouldn't a philosophy forum be up to a nuanced approach?apokrisis

    To be clear, my approach would be pretty simple. It is not concerned with plagiarism, but with the outsourcing of one's thinking, and it is not implemented primarily by a rule, but by a philosophical culture to which rules also contribute. The rule itself would be simple, such as this:

    "No part of a post may be AI-written, and AI references are not permitted"Leontiskos

    I've argued elsewhere that it doesn't really matter whether there is a reliable detection-mechanism (and this is why I see the approach as somewhat nuanced). The rule is supporting and reflecting a philosophical culture and spirit that will shape the community.

    But I don't begrudge anything about @Baden's approach. I actually hope it works better than what I would do. And our means are not at odds. They are just a bit different.

    Of course the tech bros are stealing all our information to make themselves unreasonably rich and powerful.apokrisis

    My purpose is quality philosophical dialogue, not plagiarism. I think a focus on sources rather than intermediaries improves philosophical dialogue, and that's the point. Analogously, focus on primary rather than secondary sources also improves philosophical dialogue, independent of whether the primary sources are receiving insufficient royalties.

    The practical issue for TPF is what is its true value that needs preserving? You say the human interaction. Perhaps there ought to be a thread to define that in better detail.apokrisis

    Yes, I agree.

    What if LLMs offered some more sophisticate mechanisms to achieve whatever human interaction goals people might have in mind?apokrisis

    To put it concisely, I think philosophical dialogue is about thinking our own thoughts and thinking our (human) interlocutor's thoughts, and that this is especially true in a place like TPF. LLMs are about providing you with pre-thought thoughts, so that you don't have to do the thinking, or the research, or the contemplation, etc. So there is an intrinsic incompatibility in that sense. But as a souped-up search engine LLMs can help us in this task, and perhaps in other senses as well. I just don't think appealing to an LLM qua LLM in the context of philosophical dialogue is helpful to that task.

    And what if the human element of TPF is mostly its swirling emotions? And when it comes to the thinking, it's mostly the stark differences in thought, rather than the quality of these thoughts, that keep the place lively.apokrisis

    I think that's all true, but I think what I said still holds.

    Maybe you are implying that LLM-appeals would improve the philosophical quality of TPF? Surely LLMs can improve one's own philosophy, but that's different from TPF on my view. I can go lift dumbbells in the gym to train, but I don't bring the dumbbells to the field on game day. One comes to TPF to interact with humans.

    So there seems little danger that posting LLM generated background material in a serious thread is going to outsource any actual thinking. Posts which are emotional or crackpot are surely the least likely to want to provide credible sources for what they say.apokrisis

    If someone sees a crackpot post; goes to their LLM and asks it to find a source demonstrating that the post is crackpot; reads, understands, and agrees with the source; and then presents that source along with the relevant arguments to show that the post is crackpot; then I think that's within the boundary. And I have no truck with the view which says that one must acknowledge their use of the LLM as an intermediary. But note that, on my view, what is prohibited is, "My LLM said you are wrong, therefore you are wrong. Oh, and here's a link to the LLM output."

    But I am not a mod so there is no need to focus especially on my view. If I've said too much about it, it is only because you thought I endorsed @Baden's approach tout court.
  • Banning AI Altogether
    I agree in spirit. But let's be practical.

    A blanket ban on LLM generated OPs and entire posts is a no brainer.
    apokrisis

    Okay, we agree on this.

    I think that it should be fine to quote LLMs just as you would quote any other source. If you make an argument and some factual, technical or historical point comes up, why not just cite a reasonably impersonal opinion on the matter. If the source is clear, others can call you out on your use of it.apokrisis

    I tried to argue against appeal-to-LLM arguments in two recent posts, here and here.

    In general I would argue that LLMs are a special kind of source, and cannot be treated just like any other source is treated. But a large part of my argument is found here, where the idea is that an LLM is a mediatory and private source. One may use an LLM, but the relevant sourcing should go to the LLM's sources, not the LLM itself, and if one is not familiar with the LLM's sources then they shouldn't be taking a stand with regard to arguments based on those sources.

    Baden's stance on being King Canute and holding back the tide is both noble and over the top. Banning OPs, punishing those who publish text they didn't add real thought to, and keeping LLM use as transparent as possible, would be enough to preserve the human element.apokrisis

    Possibly, but I care less about transparency and more about not promoting a forum where thinking is outsourced to LLMs. I see plagiarism as a small matter compared to the outsourcing of one's thinking.

    Don't you feel that being black and white is usually counter-productive in human interaction, so shouldn't a philosophy forum be up to a nuanced approach?apokrisis

    Rules must be black and white to a large extent. I would argue that your approach is less nuanced than mine, and this is because you want something that is easier to implement and less unwieldy. The key is to find a guideline that is efficacious without being nuanced to the point of nullity.

    I appreciate your input. I have to get back to that other thread on liberalism.
  • How to use AI effectively to do philosophy.
    So we are allowed to write entire posts that are purely AI-generated content, or to simply cite AI as evidence that something is true, so long as we are transparent that the content is AI-generated? Such that if someone gives an entire post that is nothing more than a quote from AI, nothing has been violated?Leontiskos

    Another aspect of this is scarcity. LLM content is not scarce in the way human content is. I can generate a thousand pages of LLM "philosophy" in a few minutes. Someone who therefore spends considerable time and energy on an OP or a post can be met by another member's "This LLM output says you're wrong," which was generated lazily in a matter of seconds.

    Forums already have a huge struggle with eristic, showboating, and falsification-for-the-sake-of-falsification. Give such posters free access to a tool that will allow them to justify their disagreement at length at the snap of a finger, and guess what happens?

    I'm starting to think the problem is so obvious that it will inevitably sort itself out once one reaps the fruits of a rule that allows this sort of thing. For example, once folks start merely citing AI output to disagree with all of @Jamal's arguments, it may become more obvious that there is a problem at stake.

    (@Baden, @Jamal)
  • The Old Testament Evil
    - Some examples of accounts that have been given in the past are spiritual accounts and also genetic accounts. The basic idea is that humankind is more than just a number of irremediably separate individual parts; that there is a real interconnection. I am not exactly sure of the mechanism, but in fact this idea is quite common historically, and especially outside of strongly individualistic cultures like our own. In Christianity the idea is taken for granted when it is said that at the Incarnation God took on human nature, and thus elevated all humans in that event.
  • Banning AI Altogether
    - The context here is a philosophy forum where humans interact with other humans. The premise of this whole issue is that on a human philosophy forum you interact with humans. If you do not accept that premise, then you are interested in a much broader discussion.
  • Banning AI Altogether


    ...a similar argument could be given from a more analytic perspective, although I realize it is a bit hackneyed. It is as follows:

    --

    The communal danger from AI lies in the possibility that the community come to outsource its thinking as a matter of course, constantly appealing to the authority of AI instead of giving organic arguments. This danger is arguably epistemic, in the sense that someone who is interacting with an argument will be doing philosophy as long as they do not know that they are interacting with AI. For example, if Ben is using AI to write his posts and Morgan does not know this, then when Morgan engages Ben's posts he will be doing philosophy. He will be—at least to his knowledge—engaging in human-to-human philosophical dialogue. Ben hurts only himself, and Morgan is (mostly) unaffected.

    --

    There are subtle ways in which this argument fails, but it does point up the manner in which a rule need not "catch" every infraction. Ben can lie about his posts all he likes, and Morgan will not be harmed in any serious way. Indeed it is salutary that Ben his LLM-use, both for Morgan and the community, but also for Ben.
  • Why do many people belive the appeal to tradition is some inviolable trump card?
    Why do many people belive the appeal to tradition is some inviolable trump card?unimportant

    Tradition is not infallible; it's just better than most things. Humans are intelligent; they do things for reasons; the things they do over and over tend to have very sound or deep reasons; therefore tradition is a reliable norm. Most thinking is faddish, and therefore tradition is a good rule of thumb.
  • Banning AI Altogether
    We have nothing to lose by going in that direction, and I believe the posters with most integrity here will respect us for it.Baden

    Good stuff.

    And if the product is undetectable, our site will at least not look like an AI playground.Baden

    The "undetectability" argument turns back on itself in certain respects. Suppose AI-use is undetectable. Ex hypothesi, this means that AI-use is not detrimental, for if something cannot be detected then it cannot be detrimental (or at least it cannot be identified as the cause of any detriment). But this is absurd. The whole premise of a rule against AI-use is that excessive and inappropriate AI-use would be detrimental to the forum, and what is detrimental to the forum is obviously also detectable. There is an equivocation occurring between being able to detect every instance of AI-use, and AI-use being a detectable cause given certain undesirable effects.

    So I want to say that one should think about generating a philosophical culture that is averse to outsourcing thinking to AI, rather than merely thinking about a rule and its black-and-white enforcement. It shouldn't be too hard to generate that culture, given that it already exists in anyone remotely interested in philosophy. This is precisely why it is more important that the general membership heed such a rule, whether or not the rule could be enforced with some measure of infallibility. The rule is not heeded for mere fear of being found out and punished, but rather because it is in accord with the whole ethos of philosophical inquiry. This is in accord with Kant's idea of respect for a law, rather than obeying out of fear or self-interest.

    In order to be effective, a rule need not be infallibly enforceable. No rule achieves such a thing, and the rules are very rarely enforced in that manner. It only needs to track and shape the cultural sense of TPF with respect to AI. Of course it goes far beyond AI. The fellow who is mindlessly beholden to some particular philosopher, and cannot handle objections that question his philosopher's presuppositions, does not receive much respect in philosophical circles, and such a fellow does not tend to prosper in pluralistic philosophical settings. So too with the fellow who constantly appeals to AI. The TPF culture already opposes and resists the outsourcing of one's thinking, simply in virtue of the fact that the TPF culture is a philosophical culture. The rule against outsourcing one's thinking to AI is obvious to philosophers, and those who aspire towards philosophy certainly have the wherewithal to come to understand the basis for such a rule. But I should stress that a key point here is to avoid a democratization of the guidelines. On a democratic vote we will sell our thinking to AI for a bowl of pottage. The moderators and owners need to reserve this decision for themselves, and for this reason it seems fraught to have an AI write up a democratic set of guidelines, where everyone's input is equally weighed (or else weighed in virtue of their post-count).
  • The Old Testament Evil
    - I think the ontological reality would ground juridical judgments, such as those in question. In traditional, pre-Reformation Christianity God does not make juridical judgments if there is no ontological basis for the judgments.
  • How to use AI effectively to do philosophy.
    Reflectivity and expressivity, along with intuition and imagination are at the heart of what we do here, and at least my notion of what it means to be human.Baden

    I would agree. I would want to say that, for philosophy, thinking is an end in itself, and therefore cannot be outsourced as a means to some further end.

    And, while AIs can be a useful tool (like all technology, they are both a toxin and a cure), there is a point at which they become inimical to what TPF is and should be. The line for me is certainly crossed when posters begin to use them to directly write posts and particularly OPs, in full or in part. And this is something it is still currently possible to detect. The fact that it is more work for us mods is unfortunate. But I'm not for throwing in the towel.Baden

    I'm encouraged that you're willing to put in the work.

    As above, I don't see how the line can be drawn in such a way that mere appeals to AI authority—whether an implicit appeal as found in a post with nothing more than a quoted AI response, or an explicit appeal where one "argues" their position by mere reference to AI output—are not crossing the line. If one can cite AI as an authority that speaks for itself and requires no human comment or human conveyance, then it's not clear why the AI can't speak for itself tout court.

    We could envision a kind of limit case where someone queries AI and then studies the output extensively. They "make it their own" by agreeing with the arguments and the language to such an extent that they are committed to argue the exact points and words as their own points and words. They post the same words to TPF, which they have "baptized" as their own and are willing to defend in a fully human manner. Suppose for the sake of argument that such a thing would be formally permissible (even if, materially, it would be sanctioned or flagged). What then would be the difference when someone posts AI output to justify their claims? ...And let us suppose that in both cases the AI-sourcing is transparent.

    If one wants members to think in a manner that goes beyond AI regurgitation, then it would seem that quote-regurgitations of AI fall into the same category as first-person regurgitations of AI. Contrariwise, if I love Alasdair MacIntyre, imbibe his work, quote him, and begin to sound like him myself, there is no problem. There is no problem because MacIntyre is a human, and thus the thinking being emulated or even regurgitated is human thinking. Yet if someone imbibes AI, quotes it constantly, and themselves begins to sound like AI, then the "thinking" being emulated or regurgitated is non-human thinking. If I quote MacIntyre and appeal to his authority, I am appealing to the authority of a thinking human. When Banno quotes AI and appeals to its authority, he is appealing to the authority of a non-thinking language-piecing algorithm.

    The laissez-faire approach to sourcing leads to camps, such as the camp of people who take Wittgenstein as an authority and accept arguments from the authority of Wittgenstein, and those who don't. The laissez-faire approach to AI sourcing will lead to the same thing, where there will be groups of people who simply quote AI back and forth to each other in the same way that Wittgensteinians quote Wittgenstein back and forth to each other, and on the other hand those who do not accept such sources as authorities. One difference is that Wittgenstein and MacIntyre are humans whereas AI is not. Another difference is that reading and exegeting Wittgenstein requires philosophical effort and exertion, whereas LLMs were basically created to avoid that sort of effort and exertion. Hence there will be a much greater impetus to lean on LLMs than to lean on Wittgenstein.

    Isn't the problem that of letting LLMs do our thinking for us, whether or not we are giving the LLM credit for doing our thinking? If so, then it doesn't matter whether we provide the proper citation to the LLM source.* What matters is that we are letting the LLM do our thinking for us. "It's true because the LLM said so, and I have no need to read the LLM's sources or understand the underlying evidence."

    (Cf. The LLM is a private authority, not a public authority, and therefore arguments from authority based on LLMs are invalid arguments from authority.)


    * And in this case it is equally true that the "plagiarism" argument is separate and lesser, and should not be conflated with the deeper issue of outsourcing thinking. One need not plagiarize in order to outsource one's thinking.
  • How to use AI effectively to do philosophy.
    If you had asked the AI to write your reply in full or in part and had not disclosed that, we would be in the area I want to immediately address.Baden

    So we are allowed to write entire posts that are purely AI-generated content, or to simply cite AI as evidence that something is true, so long as we are transparent that the content is AI-generated? Such that if someone gives an entire post that is nothing more than a quote from AI, nothing has been violated?

    How is this in line with the human-to-human interaction that the rule is supposed to create?
  • How to use AI effectively to do philosophy.
    Arguably the most important part of the job is very often the "calculator" task, the most tedious task.Jamal

    The point is that you've outsourced the drafting of the guidelines to AI. Whether or not drafting forum guidelines is a tedious, sub-human task is a separate question.

    But I'll keep "encourage", since the point is to encourage some uses of LLMs over others. In "We encourage X," the X stands for "using LLMs as assistants for research, brainstorming, and editing," with the obvious emphasis on the "as".Jamal

    You are claiming that, "We encourage using LLMs as assistants for research, brainstorming, and editing," means, "If one wishes to use an LLM, we would encourage that they use the LLM in X way rather than in Y way." Do you understand that this is what you are claiming?

    It is very helpful when those who enforce the rules write the rules. When this does not happen, those who enforce the rules end up interpreting the rules contrary to their natural meaning.
  • The Preacher's Paradox
    Faith translates into Russian as "VERA."Astorre

    It's an interesting discrepancy: Etymologically, Latin "fides" means 'trust', but Slavic "vera" (related to Latin "verus") means 'truth'.baker

    This looks to be a false etymology. The Latin fides and the Slavic vera are both translations of the Greek pistis, and vera primarily means 'faith', not 'truth'. The two words do share a common ancestor (were-o), but vera is not derived from verus, and were-o does not exclude the sense of faith/trustworthiness.
  • The Preacher's Paradox
    I was surprised by the depiction of what is said to be "Socratic" in your account of the Penner article.Paine

    Well that sentence about "standing athwart" was meant to apply to Kierkegaard generally, but I think Fragments is a case in point. The very quote I gave from Fragments is supportive of the idea (i.e. the Socratic teacher is the teacher who sees himself as a vanishing occasion, and such a teacher does not wield authority through the instrument of reason).

    If I do try to reply, it would be good to know if you have studied Philosophical Fragments as a whole or only portions as references to other arguments.Paine

    I am working through it at the moment, and so have not finished it yet. I was taking my cue from the Penner article I cited, but his point is also being borne out in the text.

    Here is a relevant excerpt from Piety's introduction:

    The motto from Shakespeare at the start of the book, ‘Better well hanged than ill wed’, can be read as ‘I’d rather be hung on the cross than bed down with fast talkers selling flashy “truth” in a handful of proposition’. A ‘Propositio’ follows the preface, but it is not a ‘proposition to be defended’. It reveals the writer’s lack of self-certainty and direction: ‘The question [that motivates the book] is asked in ignorance by one who does not even know what can have led him to ask it.’ But this book is not a stumbling accident, so the author’s pose as a bungler may be only a pose. Underselling himself shows up brash, self-important writers who know exactly what they’re saying — who trumpet Truth and Themselves for all comers. — Repetition and Philosophical Crumbs, Piety, xvii-xviii

    He goes on to talk about Climacus in light of the early Archimedes and Diogenes images. All of this is in line with the characterization I've offered.

    I want to say that Penner's point is salutary:

    One stubborn perception among philosophers is that there is little of value in the explicitly Christian character of Søren Kierkegaard’s thinking. Those embarrassed by a Kierkegaardian view of Christian faith can be divided roughly into two camps: those who interpret him along irrationalist-existentialist lines as an emotivist or subjectivist, and those who see him as a sort of literary ironist whose goal is to defer endlessly the advancement of any positive philosophical position. The key to both readings of Kierkegaard depends upon viewing him as more a child of Enlightenment than its critic, as one who accepts the basic philosophical account of reason and faith in modernity and remains within it. More to the point, these readings tend to view him through the lens of secular modernity as a kind of hyper- or ultra-modernist, rather than as someone who offers a penetrating analysis of, and corrective to, the basic assumptions of modern secular philosophical culture. In this case, Kierkegaard, with all his talk of subjectivity as truth, inwardness, and passion, the objective uncertainty and absolute paradox of faith, and the teleological suspension of the ethical, along with his emphasis on indirect communication and the use of pseudonyms, is understood merely to perpetuate the modern dualisms between secular and sacred, public and private, object and subject, reason and faith—only as having opted out of the first half of each disjunction in favor of the second. Kierkegaard’s views on faith are seen as giving either too much or too little to secular modernity, and, in any case, Kierkegaard is dubbed a noncognitivist, irrationalist antiphilosopher.

    Against this position, I argue that it is precisely the failure to grasp Kierkegaard’s dialectical opposition to secular modernity that results in a distortion of, and failure to appreciate, the overtly Christian character of Kierkegaard’s thought and its resources for Christian theology. Kierkegaard’s critique of reason is at the same time, and even more importantly, a critique of secular modernity. To do full justice to Kierkegaard’s critique of reason, we must also see it as a critique of modernity’s secularity.
    — Myron Penner, Kierkegaard’s Critique of Secular Reason, 372-3

    I find the readings that Penner opposes very strange, but they are nevertheless very common. They seem to do violence to Kierkegaard's texts and life-setting, and to ignore his affinity with a figure like J. G. Hamann (who is also often mistaken as an irrationalist by secular minds). Such readings go hand in hand with the OP of this thread, which takes them for granted without offering any evidence that they actually come from Kierkegaard.
  • The Old Testament Evil
    I apologize for the incredibly belated response!Bob Ross

    No worries.

    I see what you are saying. The question arises: if God is not deploying a concept of group guilt, then why wouldn’t God simply restore that grace for those generations that came after (since they were individually innocent)?Bob Ross

    Yes, good. That is one of the questions that comes up.

    What do you think?Bob Ross

    That's an interesting theory, with a lot of different moving parts. I'm not sure how many of the details I would want to get into, especially in a thread devoted to Old Testament evil.

    My thought is that there must be some ontological reality binding humans one to another, i.e. that we are not merely individuals. Hence God, in creating humans, did not create a set of individuals, but actually also created a whole, and there is a concern for the whole qua whole (which does not deny a concern for the parts). If one buys into the Western notion of individualism too deeply, then traditional Christian doctrines such as Original Sin make little sense.

    annihilation is an act of willing the bad of something (by willing its non-existence)...Bob Ross

    That's an interesting argument, and it may well be correct. Annihilation is certainly unheard of in the Biblical context, and even the notion of non-being is something that develops relatively late.
  • How to use AI effectively to do philosophy.
    the difference between consulting a secondary source and consulting an llm is the following:
    After locating a secondary source one merely jots down the reference and that’s the end of it.
    Joshs

    Well, they could read the secondary source. That's what I would usually mean when I talk about consulting a secondary source.

    When one locates an argument from an llm...Joshs

    Okay, but remember that many imbibe LLM content without thinking of it as "arguments," so you are only presenting a subclass here.

    When one locates an argument from an llm that one finds valuable, one decides if the argument is something one can either defend in one’s own terms, or should be backed up with a direct quote from either a secondary or primary source. This can then be achieved with a quick prompt to the llm, and finding a reference for the quote.Joshs

    Right, and also reading the reference. If someone uses an LLM as a kind of search engine for primary or secondary sources, then there is no concern. If someone assents to the output of the LLM without consulting (i.e. reading) any of the human sources in question, or relies on the LLM to summarize human sources accurately, then the problems in question do come up, and I think this is what often occurs.

    The fact that proper use of a.i. leaves one with the choice between incorporating arguments one can easily defend and elaborate on based on one’s own understanding of the subject, and connecting those arguments back to verifiable quotes, means that the danger of falsehood doesn’t come up at all.Joshs

    What do you mean, "the danger of falsehood doesn't come up at all"?

    It seems to me that you use LLMs more responsibly than most people, so there's that. But I think there is a very large temptation to slip from responsible use to irresponsible use. LLMs were built for quick answers and the outsourcing of research. I don't find it plausible that the available shortcuts will be left untrodden.

    If the LLM is merely being used to find human sources, which are in turn consulted in their own right, then I have no more objection to an LLM than to a search engine. I have given an argument elsewhere to the effect that LLMs should not be directly used in philosophical dialogue (with other humans). I am wondering if you would disagree.
  • The Preacher's Paradox
    - Have you offered anything more than an appeal to your own authority? I can't see that there is anything more, but perhaps I am missing something.
  • How to use AI effectively to do philosophy.
    Again, you have not even attempted to show that the AI's summation was in any way inaccurate.Banno

    True, and that's because there is no such thing as an ad hominem fallacy against your AI authority. According to the TPF rules as I understand them, you are not allowed to present AI opinions as authoritative. The problem is that you have presented the AI opinion as authoritative, not that I have disregarded it as unauthoritative. One simply does not need some counterargument to oppose your appeal to AI. The appeal to AI is intrinsically impermissible. That you do not understand this underlines the confusion that AI is breeding.
  • How to use AI effectively to do philosophy.
    The AI is not being appealed to as an authorityBanno

    But it is, as I've shown. You drew a conclusion based on the AI's response, and not based on any cited document the AI provided. Therefore you appealed to the AI as an authority. The plausibility of the conclusion could have come from nowhere other than the AI, for the AI is the only thing you consulted.

    This goes back to what I've pointed out a number of times, namely that those who take the AI's content on faith are deceiving themselves when they do so, and are failing to see the way they are appealing to the AI as an authority.
  • How to use AI effectively to do philosophy.
    It's noticeable that you have not presented any evidence, one way or the other.

    If you think that what the AI said is wrong, then what you ought do is to present evidence, perhaps in the form of peer-reviewed articles that say that humans can reliably recognise AI generated text.

    But that is not what you have chosen to do. Instead, you cast aspersions.
    Banno

    I am pointing out that all you have done is appeal to the authority of AI, which is precisely something that most everyone recognizes as a danger (except for you!). Now you say that I am "casting aspersions" on the AI, or that I am engaging in ad hominem against the AI (!).

    The AI has no rights. The whole point is that blind appeals to AI authority are unphilosophical and irresponsible. That's part of why the rule you are trying to undermine exists. That you have constantly engaged in these blind appeals could be shown rather easily, and it is no coincidence that the one who uses AI in these irresponsible ways is the one attempting to undermine the rule against AI.
  • How to use AI effectively to do philosophy.
    No, I presented what the AI had to say, for critique. Go ahead and look at the papers it cites...Banno

    But you didn't read the papers it cited, and you yourself said, "So yes, I overstated my case. You may be able to recognise posts as AI generated at a slightly better rate than choosing at random."

    If you were better at logic you would recognize your reasoning process: "The AI said it, so it must be true." This is the sort of mindless use of AI that will become common if your attempt to undermine the LLM rule succeeds.