Comments

  • How to use AI effectively to do philosophy.
    The AI is not being appealed to as an authorityBanno

    But it is, as I've shown. You drew a conclusion based on the AI's response, and not based on any cited document the AI provided. Therefore you appealed to the AI as an authority. The plausibility of the conclusion could come from nowhere else than the AI, for the AI is the only thing you consulted.

    This goes back to what I've pointed out a number of times, namely that those who take the AI's content on faith are deceiving themselves when they do so, and are failing to see the way they are appealing to the AI as an authority.
  • How to use AI effectively to do philosophy.
    It's noticeable that you have not presented any evidence, one way or the other.

    If you think that what the AI said is wrong, then what you ought do is to present evidence, perhaps in the form of peer-reviewed articles that say that humans can reliably recognise AI generated text.

    But that is not what you have chosen to do. Instead, you cast aspersions.
    Banno

    I am pointing out that all you have done is appealed to the authority of AI, which is precisely something that most everyone recognizes as a danger (except for you!). Now you say that I am "casting aspersions" on the AI, or that I am engaging in ad hominem against the AI (!).

    The AI has no rights. The whole point is that blind appeals to AI authority are unphilosophical and irresponsible. That's part of why the rule you are trying to undermine exists. That you have constantly engaged in these blind appeals could be shown rather easily, and it is no coincidence that the one who uses AI in these irresponsible ways is the one attempting to undermine the rule against AI.
  • How to use AI effectively to do philosophy.
    No, I presented what the AI had to say, for critique. Go ahead and look at the papers it cites...Banno

    But you didn't read the papers it cited, and you conceded, "So yes, I overstated my case. You may be able to recognise posts as AI generated at a slightly better rate than choosing at random."

    If you were better at logic you would recognize your reasoning process: "The AI said it, so it must be true." This is the sort of mindless use of AI that will become common if your attempt to undermine the LLM rule succeeds.
  • How to use AI effectively to do philosophy.
    So you cannot see the difference between "A rule against AI use will not be heeded" and "A rule against AI use cannot be enforced". Ok.Banno

    We both know that the crux is not unenforceability. If an unenforceable rule is nevertheless expected to be heeded, then there is no argument against it. Your quibble is a red herring in relation to the steelman I've provided. :roll:

    Baden? Tell us what you think. Is my reply to you against the rules?Banno

    I would be interested, too. I haven't seen the rule enforced despite those like Banno often contravening it.

    It is also worth noting how the pro-AI Banno simply takes the AI at its word, as a blind-faith authority. This is precisely what the end game is.
  • How to use AI effectively to do philosophy.
    With intended irony...

    Prompt: find peer-reviewed academic studies that show the effectiveness of any capacity to recognise AI generated text.

    The result.

    "...there is peer-reviewed evidence that both humans... and automated tools can sometimes detect AI-generated text above chance. Effectiveness is highly conditional. Measured accuracy is often only modest."

    So yes, I overstated my case. You may be able to recognise posts as AI generated at a slightly better rate than choosing at random.
    Banno

    That's not irony. That's incoherent self-contradiction. It's also against the rules of TPF.
  • How to use AI effectively to do philosophy.
    I no more pay attention to the fact that I am using a machine when I consult a.i. than when I use the word-processing features of my iphone to type this.Joshs

    You wouldn't see this claim as involving false equivalence?

    If you have ever been prompted to seek out relevant literature to aid in the composing of an OP, or your response to an OP, then your telos in consulting that textual material is the same as that of the many here who consult a.i while engaging in TPF discussions.Joshs

    No, not really. There are primary sources, there are secondary sources, there are search engines, and then there is the LLM. Consulting a secondary source and consulting an LLM are not the same thing.

    It is worth noting that those who keep arguing in favor of LLMs seem to need to make use of falsehoods, and especially false equivalences.

    ---

    A pissing contest, combined with quasi-efforts at healing existential anxiety.baker

    Lol!

    ---

    Here is a case in point. I have not made the argument he here attributes to me. I have, amongst other things, pointed out that a rule against AI cannot be reliably enforced, which is quite different.Banno

    Which is the same thing, and of course the arguments I have given respond to this just as well. So you're quibbling, like you always do. Someone who is so indisposed to philosophy should probably not be creating threads instructing others how to do philosophy while at the same time contravening standing TPF rules.

    For those who think philosophy consists in a series of appeals to authority, AI must be quite confounding.Banno

    The sycophantic appeal-to-AI-authority you engage in is precisely the sort of thing that is opposed.
  • Banning AI Altogether
    Anyway, here is one example: ask it to critique your argument. This is an exercise in humility and takes your thoughts out of the pugilistic mode and into the thoughtful, properly philosophical mode. It's a form of Socratization: stripping away the bullshit, re-orientating yourself towards the truth. Often it will find problems with your argument that can only be found when interpreting it charitably; on TPF this often doesn't happen, because people will score easy points and avoid applying the principle of charity at all costs, such that their criticisms amount to time-wasting pedantry.Jamal

    This is a good point.

    Another example: before LLMs it used to be a tedious job to put together an idea that required research, since the required sources might be diverse and difficult to find. The task of searching and cross-referencing was, I believe, not valuable in itself except from some misguided Protestant point of view. Now, an LLM can find and connect these sources, allowing you to get on with the task of developing the idea to see if it works.Jamal

    I don't think this is right. It separates the thinking of an idea from the having of an idea, which doesn't make much sense. If the research necessary to ground a thesis is too "tedious," then the thesis is not something one can put forth with integrity.

    But perhaps you are saying that we could use the LLM as a search engine, to see if others have interpreted a philosopher in the same way we are interpreting them?

    Part of the problem with the LLM is that it is private, not public. One's interaction history, prompting, etc., are not usually disclosed when appealing to the LLM as a source. The code is private in a much starker sense, even where the LLM is open source. Put differently, the LLM is a mediator that arguably has no place in person-to-person dialogue. If the LLM provides you with a good argument, then give that argument yourself, in your own words. If the LLM provides you with a good source, then read the source and make it your own before using it. The interlocutor needs your own sources and your own arguments, not your reliance on a private authority. Whatever parts of the LLM's mediation are publicly verifiable can be leveraged without use of the LLM (when dialoguing with an interlocutor). The only reason to appeal to the LLM itself would be in the case where publicly verifiable argumentation or evidence is unavailable, in which case one is appealing to the authority of the LLM qua LLM, which is both controversial and problematic. Thus a ban on LLMs need not be a ban on background, preparatory use of LLMs.
  • How to use AI effectively to do philosophy.
    The arguments are similar to what I see here. "AI is inevitable and therefore" etc. Some teachers---the good ones---are appalled.Baden

    I think it goes back to telos:

    I think it would be helpful to continue to reflect on what is disagreeable about AI use and why. For example, if we don't know why we want to engage in human communication rather than non-human communication, then prohibitions based on that axiom will become opaque.Leontiskos

    What is the end/telos? Of a university? Of a philosophy forum?

    Universities have in some ways become engines for economic and technological progress. If that is the end of the university, and if AI is conducive to that end, then there is no reason to prevent students from using AI. In that case a large part of what it means to be "a good student" will be "a student who knows how to use AI well," and perhaps the economically-driven university is satisfied with that.

    But liberal education in the traditional sense is not a servant to the economy. It is liberal; free from such servility. It is meant to educate the human being qua human being, and philosophy has always been a central part of that.

    Think of it this way. If someone comes to TPF and manages to discreetly use AI to look smart, to win arguments, to satisfy their ego, then perhaps, "They have their reward." They are using philosophy and TPF to get something that is not actually in accord with the nature of philosophy. They are the person Socrates criticizes for being obsessed with cosmetics rather than gymnastics; who wants their body to look healthy without being healthy.

    The argument, "It's inevitable, therefore we need to get on board," looks something like, "The cosmetics-folk are coming, therefore we'd better aid and abet them." I don't see why it is inevitable that every sphere of human life must substitute human thinking for machine "thinking." If AI is really inevitable, then why oppose it at all? Why even bother with the half-rules? It seems to me that philosophy arenas such as TPF should be precisely the places where that "inevitability" is checked. There will be no shortage of people looking for refuge from a cosmetic culture.

    Coming back to the point, if the telos of TPF is contrary to LLM-use, then LLMs should be discouraged. If the telos of TPF is helped by LLM-use, then LLMs should be encouraged. The vastness and power of the technology makes a neutral stance impossible. But the key question is this: What is the telos of TPF?
  • How to use AI effectively to do philosophy.
    I've mentioned this in the mod forum, so I'll mention it here too. I disagree with diluting the guidelines. I think we have an opportunity to be exceptional on the web in keeping this place as clean of AI-written content as possible. And given that the culture is veering more and more towards letting AI do everything, we are likely over time to be drowned in this stuff unless we assertively and straightforwardly set enforceable limitations. That is, I don't see any reward from being less strict that balances the risk of throwing away what makes us special and what, in the future, will be even rarer than it is now, i.e. a purely human online community.

    The idea that we should keep up with the times to keep up with the times isn't convincing. Technocapitalism is definitive of the times we're in now, and it's a system that is not particularly friendly to human creativity and freedom. But you don't even have to agree with that to agree with me, only recognize that if we don't draw a clear line, there will effectively be no line.
    Baden

    :up: :fire: :up:

    I couldn't agree more, and I can't help but think that you are something like the prophet whose word of warning will inevitably go unheeded—as always happens for pragmatic reasons.

    Relatedly:

    It seems to me difficult to argue against the point, made in the OP, that since LLMs are going to be used, we have to work out how to use them well...Jamal

    Why does it matter that LLMs are going to be used? What if there were a blanket rule, "No part of a post may be AI-written, and AI references are not permitted"? The second part requires that someone who is making use of AI find—and hopefully understand—the primary human sources that the AI is relying on in order to make the salutary reference they wish to make.

    The curious ignoratio elenchi that @Banno wishes to rely on is, "A rule against AI use will not be heeded, therefore it should not be made." Is there any force to such an argument? Suppose someone writes all of their posts with LLMs. If they are found out, they are banned. But suppose they are not found out. Does it follow that the rule has failed? Not in the least. Everyone on the forum is assuming that all of the posts are human-written and human-reasoned, and the culture of the forum will track this assumption. Most of the posts will be human-written and human-reasoned. The fact that someone might transgress the rule doesn't really matter. Furthermore, the culture that such a rule helps establish will be organically opposed to superficial appeals to AI. Someone attempting to rely on LLMs in that cultural atmosphere will in no way prosper. If they keep pressing the LLM-button to respond to each reply of increasing complexity, they will quickly be found out as a silly copy-and-paster. The idea that it would be easy to covertly shirk that cultural stricture is entirely unreasonable, and there is no significant motive for someone to rely on LLMs in that environment. It is parallel to the person who uses chess AI to win online chess games, for no monetary benefit and to the detriment of their chess skills and their love of chess.

    Similarly, a classroom rule against cheating could be opposed on @Banno's same basis: kids will cheat either way, so why bother? But the culture which stigmatizes cheating and values honest work is itself a bulwark against cheating, and both the rule and the culture make it much harder for the cheater to prosper. Furthermore, even if the rule cannot be enforced with perfection, the cheater is primarily hurting themselves and not others. We might even say that the rule is not there to protect cheaters from themselves. It is there to ensure that those who want an education can receive one.

    that will lead people to hide their use of it generally.Jamal

    Would that be a bad thing? To cause someone to hide an unwanted behavior is to disincentivize that behavior. It also gives such people a string to pull on to understand why the thing is discouraged.
  • How to use AI effectively to do philosophy.
    This is the sort of appeal-to-LLM-authority that I find disconcerting, where one usually does not recognize that they have appealed to the AI's authority at all.Leontiskos

    All of which makes using AI for philosophy, on one level, like using any one else’s words besides your own to do philosophy.Fire Ologist

    So if you use someone else's words to do philosophy, you are usually appealing to them as an authority. The same thing is happening with LLMs. This will be true whether or not we see LLMs as a tool. I got into some of this in the following and the posts related to it:

    This becomes rather subtle, but what I find is that people who tell themselves that they are merely using AI to generate candidate theories which they then assess the validity of in a posterior manner, are failing to understand their own interaction with AI. They are failing to appreciate the trust they place in AI to generate viable candidate theories, for example. But they also tend to ignore the fact that they are very often taking AI at its word.Leontiskos

    -

    Further, it makes no sense to give AI the type of authority that would settle a dispute, such as: “you say X is true - I say Y is true; but because AI says Y is true, I am right and you are wrong.” Just because AI spits out a useful turn of phrase and says something you happen to agree is true, that doesn’t add any authority to your position.Fire Ologist

    I tend to agree, but I don't think anyone who uses AI is capable of using it without granting it some such authority (including myself). If one did not think AI added authority to a position, then one wouldn't use it at all.

    The presence and influence of AI in a particular writing needs to never be hidden from the reader.Fire Ologist

    I would argue that the presence and influence of AI is always hidden from us in some ways, given that we don't really know what we are doing when we consult it.

    You need to be able to make AI-generated knowledge your own, just as you make anything you know your own.Fire Ologist

    LLMs are sui generis. They have no precedent, and that's the difficulty. What this means is that your phrase, "just as you make anything you know your own," creates a false equivalence. It presumes that artificial intelligence is not artificial, and is on par with all previous forms of intelligence. This is the petitio principii that @Banno and others engage in constantly. For example:

    Unlike handing it to a human editor, which is what authors have been doing for yonks?
    — SophistiCat

    Nah. You are engaging in the same basic equivocation between a human and an AI. The whole point is that interacting with humans is different from interacting with AI, and the two should not be conflated. You've begged the question in a pretty basic manner, namely by implying that interacting with a duo of two humans is the same as interacting with a human-and-AI duo.
    Leontiskos

    Given all of this, it would seem that @bongo fury's absolutist stance is in some ways the most coherent and intellectually rigorous, even though I realize that TPF will probably not go that route, and should not go that route if there are large disagreements at stake.
  • How to use AI effectively to do philosophy.
    So in this case the LLM carried out the tedious part of the task;Jamal

    But is your argument sound? If you have a group of people argue over a topic and then you appoint a person to summarize the arguments and produce a working document that will be the basis for further discussion, you haven't given them a "calculator" job. You have given them the most important job of all. You have asked them to draft the committee document, which is almost certainly the most crucial point in the process. Yet you have re-construed this as "a calculator job to avoid tedium." This is what always seems to happen with LLMs. People use them in substantial ways and then downplay the ways in which they are using them. In cases such as these one seems to prefer outsourcing to a "neutral source" so as to avoid the natural controversy which always attends such a draft.

    It didn't occur to me that anyone would interpret those guidelines as suggesting that posts written by people who are using AI tools are generally superior to those written by people who don't use AI,Jamal

    It could have been made more irenically, but @bongo fury's basic point seems uncontroversial. You said:

    We encourage using LLMs as assistants for research, brainstorming, and editing. — Deepseek

    To say, "We encourage X," is to encourage X. It is not to say, "If you are doing Y, then we would encourage you to do Y in X manner." To say "allow" or "permit" instead of "encourage" would make a large difference.
  • How to use AI effectively to do philosophy.
    I like this. I asked Deepseek to incorporate it into a set of guidelines based on the existing AI discussions on TPF. Below is the output. I think it's a useful starting point, and I encourage people here to suggest additions and amendments.Jamal

    Isn't it a bit ironic to have AI write the AI rules for the forum? This is the sort of appeal-to-LLM-authority that I find disconcerting, where one usually does not recognize that they have appealed to the AI's authority at all. In this case one might think that by allowing revisions to be made to the AI's initial draft, or because the AI was asked to synthesize member contributions, one has not outsourced the basic thinking to the AI. This highlights why "responsible use" is so nebulous: because everyone gives themselves a pass whenever it is expedient.

    3. Prohibited Uses: What We Consider "Cheating"

    The following uses undermine the community and are prohibited:

    [*] Ghostwriting: Posting content that is entirely or mostly generated by an LLM without significant human input and without disclosure.
    [*] Bypassing Engagement: Using an LLM to formulate responses in a debate that you do not genuinely understand. This turns a dialogue between people into a dialogue between AIs and destroys the "cut-and-thrust" of argument.
    [*] Sock-Puppeting: Using an LLM to fabricate multiple perspectives or fake expertise to support your own position.
    — Deepseek

    I like the separating out of good uses from bad uses, and I think it would be helpful to continue to reflect on what is disagreeable about AI use and why. For example, if we don't know why we want to engage in human communication rather than non-human communication, then prohibitions based on that axiom will become opaque.

    A sort of core issue here is one of trust and authority. It is the question of whether and to what extent AI is to be trusted, and guidelines etch the answer to that question in a communal manner. For example, it is easy to imagine the community which is distrustful towards AI as banning it, and the community which is trustful towards AI as privileging it. Obviously a middle road is being attempted here. Transparency is a good rule given that it allows members to navigate some of the complexities of the issue themselves. Still, the basic question of whether the community guidelines signify a trust or distrust in AI cannot be sidestepped. We are effectively deciding whether a specific authority (or perhaps in this case a meta-authority) is to be deemed trustworthy or untrustworthy for the purposes of TPF. The neutral ground is scarcely possible.
  • The Preacher's Paradox
    Inspired by Kierkegaard's ideasAstorre

    What primary or secondary Kierkegaard sources do you base your argument upon? So far I've only seen you quote Wittgenstein as if his words were simple truth. I would suggest reading Kierkegaard's Philosophical Fragments where he speaks to the idea that all teaching/learning is aided by temporal occasions (including preaching), and that the teacher should therefore understand himself as providing such an occasion:

    From a Socratic perspective, every temporal point of departure is eo ipso contingent, something vanishing, an occasion; the teacher is no more significant, and if he presents himself or his teachings in any other way, then he gives nothing... — Kierkegaard, Philosophical Crumbs, tr. M. G. Piety

    This is why what I've already said is much more Kierkegaardian than the odd way that Kierkegaard is sometimes interpreted by seculars:

    But is the problem preaching, or is it a particular kind of preaching?Leontiskos

    Kierkegaard wishes to stand athwart the Enlightenment rationalism notion of self-authority, preferring instead a Socratic approach that does not wield authority through the instrument of reason. Myron Penner's chapter/article is quite good in this regard: "Kierkegaard’s Critique of Secular Reason."
  • The Preacher's Paradox
    - Fair enough. I realize I may have been too curt, both in my haste and because I know I will not be able to respond for a few days. On the other hand—and this is what you apparently wish to deny—the OP is a pretty straightforward argument against preaching, complete with responses to objections. I have been trying to present reasons against the conclusion of the OP's argument. I don't deny that it could be interesting to leisurely explore the particular form of preaching in which the paradox resides.
  • The Preacher's Paradox
    - I'm actually out for a few days. I just wanted to submit my responses. If your idea is as "interrogative" as you claim, you may want to ask yourself where all the defensiveness is coming from. It looks as though the idea is averse to interrogation.
  • The Preacher's Paradox


    The preacher who thinks he has to make his listeners believe something that they cannot be made to believe is faced with a contradiction, yes. But to hold that all preachers think such a thing, and that the contradiction is intrinsic to preaching, is to have made a caricature of preaching. Or so I think.

    In general I think you need to provide argumentation for your claims, and that too much assertion is occurring. Most of your thesis is being asserted, not argued. For example, the idea that all preachers are trying to make their listeners believe mere ideas is an assertion and not a conclusion. The claim that the preacher is engaged in infecting rather than introducing is another example.

    I encountered the preacher's paradox in my everyday life. It concerns my children. Should I tell them what I know about religion myself, take them to church, convince them, or leave it up to them, or perhaps avoid religious topics altogether?Astorre

    I would suggest giving more credence to the Biblical testimony and the testimony of your Church, and less credence to Kierkegaard's testimony. Faith is something that transcends us, not something we control. It is not something to be curated, either positively or negatively.

    Part of the question here is, "Do you want your children to be religious?" Is it permissible to want such a thing?
  • The Preacher's Paradox
    I was drawn to this topic by conversations with so-called preachers (not necessarily Christian ones, but any kind). They say, "You must do this, because I'm a wise man and have learned the truth." When you ask, "What if I do this and it doesn't work?" Silence ensues, or something like, "That means you didn't do what I told you to do/you didn't believe/you weren't chosen."Astorre

    But is the problem preaching, or is it a particular kind of preaching? Someone whose preaching attempts to connect someone with something that is dead (such as an idea) instead of something that is living (such as a friend or God) will fall into the incoherences that the OP points up. But not all preaching is like that. If someone tries to persuade others to believe things that one cannot be persuaded to believe, then their approach is incoherent. But not all preaching is of that kind.
  • The Preacher's Paradox
    Question: Which of these judgments conveys the speaker's belief that the Sistine Chapel ceiling is beautiful, or proves it?Astorre

    I think this is the same error, but with beauty instead of faith. So we could take my claim and replace "faith" with "beauty": "The temptation is to try to encompass [beauty], both by excluding it from certain spheres and by attempting to comprehend its mechanism." To have the presupposition that one can exhaustively delineate and comprehend things like faith or beauty is to already have failed.

    "What cannot be spoken of, one must remain silent about."Astorre

    False. And self-contradicting, by the way.

    Language is incapable of exhaustively expressing subjective experienceAstorre

    And, "So long as the recipient understands that the conveyance of faith is only a shadow and a sign, there is no danger." But the idea that faith is only a subjective experience is another example of the overconfident delineation of faith.

    And here a paradox arises: infecting another person with an idea you don't fully understand yourself...Astorre

    "Infecting" is an interesting choice of word, no? Petitio principii?

    Communicating supernatural faith is communicating something that transcends you and your understanding. If someone thinks that it is impossible or unethical to communicate something that transcends you and your understanding, then what they are really doing is denying the object of faith, God. They don't think God exists, or they don't think faith in God can or should be imparted via preaching because they don't think faith is sown that way. I think the whole position is based on some false assumptions.

    Preaching is a bit like introducing someone to a friend, to a living reality. The idea that one cannot introduce someone to a friend unless they have a comprehensive knowledge of the friend and the way in which the friend will interact with the listener is quite silly. In this respect Kierkegaard is a Cartesian or a Hegelian in spite of himself. His attempted inversion of such systems has itself become captured by the larger net of those systems. The religious rationalist knows exactly what faith is and how to delineate it, and Kierkegaard in his opposition denies the rationalist claims, but in fact also arrives at the point where he is able to delineate faith with perfect precision. The only difference is that Kierkegaard knows exactly what faith isn't instead of what it is. Yet such a punctuated negation is, again, a false form of apophaticism - a kind of false humility.
  • Banning AI Altogether
    It would be unethical, for instance, for me to ask a perfect stranger for their view about some sensitive material I've been asked to review - and so similarly unethical for me to feed it into AI. Whereas if I asked a perfect stranger to check an article for typos and spelling, then it doesn't seem necessary for me to credit them...Clarendon

    Okay sure, but although the OP's complaint is a bit vague, I suspect that the counsel is not motivated by these sorts of ethical considerations. I don't think the OP is worried that we might infringe the rights of AI. I think the OP is implying that there is something incompatible between AI and the forum context.

    Yes, it would only be a heuristic and so would not assume AI is actually a person.Clarendon

    I myself would be wary to advise someone to treat AI as if it is a stranger. This is because strangers are persons, and therefore I would be advising that we treat AI as if it is a person. "Heuristically pretend that it is a stranger without envisioning it as a person," seems like a difficult request. It may be that the request can only be fulfilled in a superficial manner, and involves a contradiction. It is this small lie that we tell ourselves that seems to be at the root of many of the AI problems ("I am going to pretend that it is something that it isn't, and as long as I maintain an attitude of pretense everything will be fine").

    Someone might ask, "Why should we pretend that AI is a stranger?" And you might answer, "Because it would serve our purposes," to which they would surely respond, "Which purposes do you have in mind?"

    Perhaps what is being suggested is a stance of distrust or hesitancy towards the utterances of LLMs.
  • The Preacher's Paradox
    Great OP. :up:

    Preaching faith means either not having it or betraying it.Astorre

    The preacher supposedly doesn't teach, but testifies.Astorre

    I think the idea that the preacher testifies is essentially correct. How does Moses preach in a fundamental way? By the light of his face, which reflects the light of God. He covers it to protect those who are dazed by it, but the covering still attests to Moses' stature.

    God shines into the world. He shines in Moses' face, in prayer, in sacrament, in truth, in argumentation, in rhetoric... There is no box that can protect its contents from God's light. The idea that faith is simply incommunicable is a false form of apophaticism. "Faith is incommunicable, therefore God cannot communicate through faith," would be a false inference. Faith is incommunicable in a certain sense, but the one who thinks he understands faith so well that he can limit its bounds and its communication is engaged in a form of (apophatic) idolatry. The temptation is to try to encompass faith, both by excluding it from certain spheres and by attempting to comprehend its mechanism.

    But love doesn't guarantee the right to interfere in someone else's destiny.Astorre

    Why not?

    As soon as you try to convey faith, you rationalize it...Astorre

    So long as the recipient understands that the conveyance of faith is only a shadow and a sign, there is no danger.
  • Banning AI Altogether
    It can point me to an interpretation that I hadn’t thought of, and I can then verify the credibility of that interpretation.Joshs

    This becomes rather subtle, but what I find is that people who tell themselves that they are merely using AI to generate candidate theories, whose validity they then assess after the fact, are failing to understand their own interaction with AI. They are failing to appreciate the trust they place in AI to generate viable candidate theories, for example. But they also tend to ignore the fact that they are very often taking AI at its word.
  • The End of the Western Metadiscourse?
    The USSR collapsed not because it was too Marxist but because the vigour and paranoia of the liberal west out-competed it. The USSR functioned reasonably well and at least achieved the main aim of clambering aboard the rapidly industrialising world. But it was fundamentally inefficient rather than fundamentally a lie.apokrisis

    Okay, but part of the lie that kept the USSR afloat was the idea that it was flourishing inside its walls. The lie was that it was out-competing the liberal west. Then reality crept in, the lie was seen to be false, and the boat sank.

    I would argue that one cannot believe something and not believe something at the same time. Or that it will at least lead to problems.Leontiskos

    That is why we have ambiguity. Logic demands that we don't. But then that is why Peirce had add vagueness to logic. That to which the PNC does not apply.

    Between absolute belief and absolute disbelief. I would say in practice that is where we all should sit. Even if the counterfactual grammar of logic doesn't like it.
    apokrisis

    I don't grant that we have ambiguity because we need to lie to ourselves with fictions and both believe and not believe something at the same time. In the Thomist tradition vagueness is usually captured by the notion of analogical predication (which derives from Aristotle's "pros hen" ambiguity). So we do need to account for vagueness in a quasi-logical way, but I don't see how this changes what I've said about the lie that is uncovered. If I have to believe that my country is out-competing the liberal west even when I know it is not true, ambiguity isn't going to save my boat. The power of vagueness only extends so far.

    Dominance~submission may be the natural dynamic. But it plays out with all the variety of its many different settings.

    So the dynamic has the simplicity of a dichotomy. And then also the variety of the one principle that can emerge as the balancing act that suits every occasion.
    apokrisis

    Okay, thanks.

    Liberal democracy clearly promotes discussion about the socially constructed nature of society. That is the liberating thought. Hey guys, we invented this system. And if it seems shit, we can therefore invent something better.apokrisis

    Okay, fair enough. Like I said, the arguments you present are reasonably strong. I need to pick my battles.

    By neutral, I mean in the dynamical systems sense of being critically poised. Ready to go vigourously in opposing directions as the need demands. So we have to have some central state from which to depart in counterfactual directions.

    Neutrality is not a state of passivity. It is the most extreme form of potency as you can swing either way with equal vigour. Which is what makes you choice of direction always something with significance and meaning.

    A passively neutral person is a very dull fellow. An actively neutral person is centred and yet always ready to act strongly in either direction. Be your friend, be your enemy. Act as the occasion appears to demand and then switch positions just as fast if something changes.

    So neutrality at the level of an egalatarian social democracy is about promoting equal opportunity for all, but then also allowing everyone to suffer or enjoy the consequences of their own actions. Make their own mistakes and learn from them.

    Within then socially agreed limits. A social safety net below and a tax and justice system above. A liberal society would aim to mobilise its citizens as active participants of that society, yet still impose a constraining balance on the overall outcomes. Winning and losing is fine. Just so long as it is kept within pragmatically useful bounds.
    apokrisis

    Okay, thanks. More specifically, you said, "[Neutrality is about a balance that needs] the always larger view that can encompass the necessary contradictions." This "always larger view" is the transcendent fiction. So what are the contradictions and what is the fiction?

    Equal opportunity combined with an allowance of consequences can seem like a contradiction, but I think we agree that this is only true when one is thinking about equality of outcome rather than equality of opportunity. The "socially agreed limits" might signify the contradictions you have in mind, given that a safety net is in tension with an allowance of consequences. But perhaps there are other contradictions? And again, what precisely is the transcendent fiction of liberalism that relativizes these contradictions?

    Well my argument is that "liberalism" is the promise of that kind of world. Or rather pragmatism.apokrisis

    Okay, and I can agree with much of this.

    We are socially constructed.apokrisis

    ...Although I would say that we are only partially socially constructed. There are important "constraints" on the theory that we are socially constructed.

    Well you seem to be calling social constructions fictions. So I can go along with that.apokrisis

    If I recall, I originally said that liberalism requires the lie of value-neutrality, and you said that such a thing was the transcendent fiction that undergirds liberalism. I think that's where the language of "lies" and "fictions" comes from. One might use "fiction" without implying falsehood, but much of what we have been discussing as "fiction" presupposes falsehood. When I use "fiction" I mean something like a "noble lie," i.e. a lie that is meant to have a beneficial effect.

    You can have political parties divided by left and right. Liberal and conservative. Working class and managerial class. But then the system as a whole is free to pick and choose how it acts from this range of options. Identities aren't tied to particular solutions. Everyone can see that pragmatism is what is winning in the general long run. Life doesn't feel broken at the social level, and thus at the individual level.apokrisis

    So if liberalism (or else pragmatism) is a thing that exists in some places and not in other places, and if its central tenets are the points you outlined about equality of opportunity, consequences, etc., then is liberalism something that ought to be sought or not? In other words, you are implying all sorts of arguments for the normative superiority of liberalism while at the same time resisting the conclusion that liberalism is normatively superior. This goes back to the fatalism point, where one is apparently allowed to attribute all of the boons of liberalism to its high quality as a social narrative, and yet at the same time say that whatever works is what is best, and that therefore if a society falls away from liberal tenets there is nothing to worry about. (NB: Of course one need not say that liberalism is best in order to say that it is good or superior.)

    Put differently, if we fall away from liberalism you will apparently just "switch" from liberalism to pragmatism. Analogously, someone who champions motorboats might move from motorboats to sailboats when the gasoline runs dry, but then protest that what they really championed was not motorboats but rather boats in general. Still, to argue in favor of a political philosophy is to favor its success and to be averse to its failure. So even if we switch from motorboats (liberalism) to sailboats (pragmatism), there still must be criteria for success and failure; for being right or wrong about one's thesis. If pragmatism is just whatever happens to currently be occurring, then it doesn't make sense to argue for or against it. It must be a falsifiable thesis, so to speak.
  • Banning AI Altogether
    - And that's great for someone who already knows what the existentialist version of Nietzsche is, how to identify it, and how it generally contrasts with the postmodern version. It's the chicken and the egg of trust. If you already know the answer to the question you ask AI, then you can vet it. If AI is to be useful, then you mustn't know the answer ahead of time. In human relations this problem is resolved by using test questions to assess general intellectual competence (along with intellectual virtue). Whether that could ever work with AI is an open question. It goes to the question of what makes a human expert an expert, or what makes humans truth-apt or reliable.

    I find that a.i. is good at honing in on the expert opinions within these campsJoshs

    That's one of the key claims. I'm not sure it's right. I doubt AI is able to differentiate expertise accurately, and I suspect that true experts could demonstrate this within their field. The intelligent person who uses AI is hoping that the cultural opinion is the expert opinion, even within the subculture of a "camp." At some point there is a tautological phenomenon where simply knowing the extremely obscure label for a sub-sub-sub-camp will be the key that unlocks the door to the opinions of that sub-sub-sub-camp. But at that point we're dealing with opinion, not knowledge or expertise, given the specificity of the viewpoint. We're asking a viewpoint question instead of a truth question, and that's part and parcel of the whole nature of AI.
  • Banning AI Altogether
    Isn't the best policy simply to treat AI as if it were a stranger?Clarendon

    Perhaps that is the best policy, but does it already involve the falsehood?

    If AI is a stranger, then AI is a person. Except we know that AI isn't a person, and is therefore not a stranger. Similarly, we do not give strangers the benefit of the doubt when it comes to technical knowledge, and yet this is precisely what we do with AI. So at the end of the day the stranger analogy is not a bad one, but it has some problems.

    At the end of the day I think it is very hard for us to understand what AI is and how to properly interact with it, and so we default to a familiar category such as 'stranger' or 'expert' or 'confidant'. The work is too theological for the atmosphere of TPF, but C.S. Lewis' That Hideous Strength is a remarkably prescient work in this regard. In the book cutting-edge scientists develop a faux face/mouth which, when stimulated in the proper ways, produces meaningful language which is both mysterious and nevertheless insightful. The obscure nature of the knowledge-source leads inevitably to the scientists taking its words on faith and coming to trust it.
  • Banning AI Altogether
    We may be witnessing, in real time, the birth of a snowball of bullshit.

    Are our conversations improving as a result? Or are they decaying? Let's wait and see.unenlightened

    Similar:

    That is, whenever we trust ChatGPT we have taken our thumb off the line that tests whether the response is true or false, and ChatGPT was created to be trusted. What could happen, and what very likely will happen, is that the accuracy of human literature will be polluted at a very fundamental level. We may find ourselves "at sea," supported by layers and layers of artificially generated truth-claims, none of which can any longer be sufficiently disentangled and verified. Verification requires the ability to trace and backtrack, and my guess is that this ability will be lost due to three things: the speed and power of the technology, a tendency towards uncritical use of the technology, and the absence of a verification paper-trail within the technology itself.Leontiskos
  • Banning AI Altogether
    What are we supposed to do about it?RogueAI

    Why isn't anyone trying to do anything about it, despite the problems predicted?

    so would you [...] cede the ai race to China?RogueAI

    Maybe. Maybe not. Why can't we ever consider whether there are some things that are more important than beating China?

    ---

    In using a.i. for a field like philosophy, I think one is interacting with extremely intelligent fragments of the ideas of multiple knowledgeable persons, and one must consult one’s own understanding to incorporate, or disassemble and reassemble those fragments in useful ways.Joshs

    This would be true if you paid for an LLM and provided training data that is limited to "multiple knowledgeable persons," but that generally doesn't happen. AI is providing you with a cultural opinion, not an expert opinion. AI is reliable wherever the cultural opinion tracks the expert opinion.
  • Banning AI Altogether


    Your essay gets at the difference between humans and computers, which is something that the Analytic-leaning Anglo world struggles to understand. A beneficial side-effect of AI will be the way it will impel us to better understand what makes humans and the human mind distinctive, and this will center on the act of understanding.
  • amoralism and moralism in the age of christianity (or post christianity)
    Welcome to the forum. This is a thoughtful OP which will hopefully gain some traction.

    It's not the first time I've heard people combine progressive historical sentiments with Christianity.ProtagoranSocratist

    I would highly recommend the historian Tom Holland on this topic. His thesis is not that Christianity produced progress per se, but rather that our contemporary world has been massively shaped by Christianity. This means, for example, that our criteria for progress are by and large Christian-birthed criteria.

    One of my goals is to read Copleston's entire works on the history of philosophyProtagoranSocratist

    Copleston is great. :up:
  • Banning AI Altogether
    I think the crux is that whenever a new technology arises we just throw up our hands and give in. "It's inevitable - there's no point resisting!" This means that each small opportunity where resistance is possible is dismissed, and most every opportunity for resistance is small. But I have to give TPF its due. It has resisted by adding a rule against AI. It is not dismissing all of the small opportunities. Still, the temptation to give ourselves a pass when it comes to regulating these technologies is difficult to resist.

  • Banning AI Altogether


    I made a similar point. I think the ethos of the forum could discourage AI in the same way it discourages other practices. Full prohibition would be impracticable.

    (B) swallowing the insulting fantasy of interaction with an intelligent oracle.bongo fury

    The lie that one is interacting with an intelligent oracle is too good to resist. It's worth asking whether it is even possible to regularly use an LLM without falling into the false belief that one is interacting with an intelligent and extremely knowledgeable person.

    Unlike handing it to a human editor, which is what authors have been doing for yonks?SophistiCat

    Nah. You are engaging in the same basic equivocation between a human and an AI. The whole point is that interacting with humans is different from interacting with AI, and the two should not be conflated. You've begged the question in a pretty basic manner, namely by implying that interacting with a human duo is the same as interacting with a human and AI duo.
  • Beyond the Pale


    The problem is that you don't think you are required to give a falsifiable reason for why the claim fails to demonstrate the presence of X. You are resorting to unfalsifiable dismissals. Even if you want to say, "Nothing in all of existence could demonstrate the presence of X," you would still have to explain why your claim is supposed to be true and how it could be falsified (i.e. how it is a meaningful claim).
  • Beyond the Pale
    I don't have to show X is absent.Janus

    And you of course say that you don't have to defend claims like this one. You've been begging the question for pages.
  • Beyond the Pale
    "Not tout court inferior" is not a subjective claim but a refutation of the masquerade.Janus

    So someone can't objectively identify when X is present because to do so is impossible, but you are able to objectively identify when X is absent? Again, this makes no sense. It is the unfalsifiable sophistry coming up again.

    I don't agree with enslaving any species.Janus

    And you have no reasoning whatsoever which would allow you to oppose such enslavement. If no proposition about whether a species is enslavable is true or false, then there is no rational reason to enslave, but there is equally no rational reason not to enslave.

    Your whole approach is, "When you say racism is permissible you must be engaged in otiose subjectivizing, but when I say racism is impermissible I am not engaged in otiose subjectivizing." That's a neat magic trick, along with all of the odd rationalizations about why your "subjectivizing" counts more than theirs. It's "might makes right" with an extra layer of disguise.
  • Beyond the Pale
    Any support they come up with will necessarily be merely subjective, while it purports to be a universally valid claim.Janus

    If you are making a claim that says, "no, not tout court inferior," and the racist is making a claim that says, "yes, tout court inferior," and you say that "tout court inferior" is as subjective as the color claim, then both of you are making merely subjective claims, and neither one of you has any rational basis for enforcing your claim. That's the problem with your approach. The racist will just start enslaving people and you will object with a "merely subjective," "metaphysical," unfalsifiable claim. The bottom line is the fact that you have no rational argument against racism. You don't know why racism is wrong, because you don't have any substantive reason to believe that races are equal. You ironically reject all of the rational premises that caused us to reject racism in the first place.

    Such a race would obviously not be human.Janus

    See my last paragraph, where I talk about the argument you give here.

    On your reasoning if we found an alien species, how would we know how to treat it? Whether to grant it rights? Whether to eat it? Whether to treat it as a beast of burden? Understanding why we treat different animals differently will help one understand the rational grounds for or against racism. And yes, the vegan will be at an inherent disadvantage when trying to understand why racism is wrong - or why human slavery is worse than the domestication of animals.
  • Beyond the Pale
    Think of the claim that red is a superior colour to green. I reject that because it is unsupportable, If I say there are no sound criteria for considering red to be superior to green, is that claim falsifiable?Janus

    Why is it unsupportable? You simply ask the claimant what they mean by "superior" and go from there.

    -

    Regarding the original claim:

    There simply are no sound criteria for considering one race to be, tout court, inferior to another.Janus

    Or the simpler claim:

    "No race is, tout court, inferior to another."Leontiskos

    ...I would say that we can make such claims in a falsifiable manner or an unfalsifiable manner. The fact that @Janus cannot give any way to falsify his claim even in principle is proof that he is giving the claim in an unfalsifiable manner.

    But we could give the same claim in a falsifiable manner. We could say, "Well, 'tout court inferior' means something here, and part of what it means is that if one race is substantially intellectually inferior to other races then it is 'tout court inferior'."

    At that point we would have to decide on at least one condition by which "substantially intellectually inferior" could be assessed, perhaps via some sort of IQ testing along with statistical thresholds that would count as "substantial." At the end we would be able to say, "Okay racist, so if you can demonstrate that some race is intellectually inferior according to the agreed criteria, then your position will be vindicated."

    Or for another example, we might argue that it is not permissible to enslave any race. This could be claimed in an unfalsifiable manner or a falsifiable manner. If we wanted to make the claim in a falsifiable (and therefore rational) manner, we might agree that we are permitted to enslave beasts, such as oxen and horses and cattle. Thus if there is some race which is equivalent to a beast, such as an ox, then that race can be permissibly enslaved. We would be able to provide the racist with a falsifiable case, "Okay racist, so if you can demonstrate that this race has no greater dignity than an ox, then you will have proved that it is permissible to enslave them."

    That's how you oppose racism in a substantive way, without unfalsifiable claims. You have to make "tout court inferior" mean something. The converse is that we are provided substantive reasons to oppose racism beyond mere taboo. We learn, for example, that the reason we are not permitted to enslave X race is because X race has a greater dignity than the things we are permitted to enslave. Metaphysical knowledge about the race in question provides the grounds by which certain actions are inappropriate, such as slavery. This is usually done with the syllogism, <It is impermissible to treat humans in such-and-such a way; X race is human; Therefore...>. But the falsifiability applies here as well, for the racist will often deny that X race is human and therefore we must have a substantive understanding of what makes something human in the way that confers dignity.
  • Beyond the Pale
    It's not that anti-racist claims are falsifiable.Janus

    Good, that's the closest you've come to admitting that your claim is not falsifiable.

    The anti-racist claim is made on the basis of the unverifiability, and further, the complete unsupportability, of the racist claim.Janus

    So consider two charges:

    "Your position is unverifiable."
    "Your position is unsupportable."

    We could simply ask whether such charges need to be falsifiable or not. Earlier you said that rational claims* must be falsifiable. If these charges are supposed to be rational, then apparently they must be falsifiable. Indeed, in general we would say that such charges do need to be falsifiable, and that the unfalsifiability of your anti-racist claim is in fact a problem.


    * Or else publicly rational claims. I forget the exact wording.
  • Beyond the Pale
    The world does not work via baseball-bat falsification.Leontiskos

    It does.AmadeusD

    How so? Give an argument.

    People do use violence as a 'valid retort' to various positions.AmadeusD

    People respond with violence, yes. What does this have to do with anything? What does this have to do with falsifiability?

    What's being suggested is you are being sanguine to the point of irrelevancy.AmadeusD

    About what? Name it. Stop being intentionally ambiguous.

    They think it's logical.AmadeusD

    "Someone thinks an illogical thing is logical," therefore...?

    You're simply engaged in the fallacy of equivocation. "In the real world if you deny X then you will get hit with a baseball bat, therefore X is falsifiable." That's an invalid argument. We're talking about falsifiability, not the ability to coercively enforce a belief.

    Ignorance of how the world actually works (i.e how people actually reason) isn't fixed by inserting a (totally reasonable, and valid) position on the logic of those impulses.AmadeusD

    I think your reading comprehension is struggling as well.

    This is the claim in question:

    There simply are no sound criteria for considering one race to be, tout court, inferior to another.Janus

    That is an anti-racist claim, and we are asking whether it is falsifiable. It seems that you and @baker have missed the whole point. I am asking whether @Janus' anti-racist claim is falsifiable, given that Janus has said that falsifiability is the key to rationality and claim-making.

    Apparently because I have asked Janus whether his claim is falsifiable I am some sort of "sanguine" fool appealing to "irrelevant" canons of logic. Not sure how that's supposed to work.
  • Beyond the Pale
    And shame on you for suggesting I was a racist.baker

    Your recent posts provide a great deal of evidence for the thesis that your reading comprehension is very poor. But what's wrong with being a racist? On your view the only problem with being a racist is that you might be hit with a baseball bat. You don't seem to have anything more than that.
  • We have intrinsic moral value and thus we are not physical things
    As it is often put, a valid deductive argument extracts the implications of its premises. That's its function. I assume that it is no vice in an argument that it does this, but the point of such arguments...Clarendon

    Great post. :up: