• Metaphysician Undercover
    14.3k
    In my view, information is everywhere you care to lookHarry Hindu

    I agree, information is everywhere. But I differentiate between information and knowledge. And in my view information is not the source of knowledge because no matter how long information may hang around for, knowledge will not simply emerge from it. So, knowledge has a source which is distinctly not information.

    AI can do the same thing ... when promptedHarry Hindu
    Obviously, it's not "the same thing" then.
  • baker
    5.8k
    It's not black and white overall because I agree that AIs can be used positively, and they've been very helpful to me, especially in long philosophical back and forths that aid in clarifying certain ideas etc. That has made me more productiveBaden
    More productive?
    What gets to me is that consulting online sources like LLMs takes so much time. Who has the time and the will to study thousands of words spat out by a machine? I'd rather think things through myself, even if this means spending the same amount of time, or even more. It will be time well spent, it will feel like quality time, a mind well used.


    By those criteria, even philosophically, I'm not banning LLMs insofar as it fits that goal. And really I don't see what you've said as a harmful use --

    i.e. checking your own arguments, etc.
    Moliere
    But this is what conversation is for. I think it's appealing to put oneself out there, understanding that one may have possible vulnerabilities, gaps, etc. That's when one can learn best.
  • Leontiskos
    5.2k
    On the other hand, on the specific area of plagiarism, I've seen at my own place of work what a "relaxed" attitude to the deployment of this technology in academia produces, and that is the destruction of standards.Baden

    Regarding plagiarism, I think it's worth trying to understand the most obvious ways in which the problem deviates from a problem of plagiarism. First, plagiarism is traditionally seen as an unjust transgression against the original author, who is not being justly recognized and compensated for their work. On that reading, an aversion to plagiarism is a concern for the rights of the LLM. Second, plagiarism is seen (by teachers) as hamstringing the student's potential, given that the student is not doing the work that they ought to be doing in order to become an excellent philosopher/writer/thinker. On that reading, an aversion to plagiarism is a concern for the philosophical development of TPF members.

    But I think the real things that you are concerned with are actually 1) the plight of the reader who does not understand that they are interacting with an LLM rather than a human; and 2) the unhealthy forum culture that widespread use of LLMs would create. Those concerns are not the primary things that "plagiarism" connotes. Sometimes I worry that by talking about plagiarism we are obscuring the real issues, though I realize that you may have simply given the plagiarism in your workplace as a parallel example.

    ---

    When is the day when we find out that @Leontiskos with his respectable 5 000+ posts is actually a smart bot / AI? Now we can be confident you aren't, because two years ago AI wasn't so good...

    Yes, the fear is that you think you are engaged with real people interested in philosophy, but actually you're only engaging with computers, and all your great ideas vanish into the dead emptiness of Turing machines just computing on and on. That would be a dismal future.
    ssu

    I agree, but my point is a bit different. Suppose all my posts are LLM-generated content, and this is undisclosed. This is against the forum rules as they currently stand. But now suppose that all my posts are LLM-generated content, and this is disclosed. Thus for every one of my LLM-generated posts, I enclose it in quote brackets and prepend the clause, "I agree with what the LLM says here:..." This is not against the forum rules as they are currently being interpreted. That seems odd to me, and it makes me think that the mere matter of disclosure doesn't get to the heart of the issue.
  • Janus
    17.6k
    A huge aspect of this is the nature of appeals to authority, and given that TPF has an anti-religious bent, many of the members have not thought very deeply on the nature of appeals to authority (despite the fact that they occur quite often when it comes to SEP, IEP, Wittgenstein, etc.).

    Whether or not the LLM is a legitimate authority and is trustworthy is at the root of many of these differences. It is the question of whether any given LLM-citation is an organic argument or an argument from authority, and also of whether the latter case is illegitimate.
    Leontiskos

    Appeal to authority is fine when the context of discussion includes a specialized discipline. Philosophy is not (or in my view should not be) a specialized discipline. To make that clear, I don't mean that philosophers do not have knowledge about philosophy itself that the average person does not have—it seems clear that they do. I'm referring specifically to discussions that would qualify as philosophical.

    You mention religion—I would not count it as a specialized discipline, in the sense of being an evolving body of knowledge and understanding like science, because although it is a space of ideas as philosophy is, in the case of religion the ideas take the form of dogma and are not to be questioned but are to be believed on the basis of authority.

    And likely written by Baden without AI, because backrground was misspelled.ssu

    And misspelled again!

    No. I didn't. When has philosophy ever provided an answer to any of our questions? Philosophy piggy-backs on the discoveries of science. It is only when science and technology progress that philosophy progresses (with AI being an example of how it brought new life to discussions about mind and body.)Harry Hindu

    So you think philosophy is always bad or poor, and therefore those words would be redundant? Philosophy is not entirely reliant on science, although I agree that a philosophy which does not take science into account would be poor or bad.
  • Joshs
    6.5k
    What was being argued was that the research required to put together an idea is tedious and outsourceable, and that what one should do is outsource that research, take the pre-made idea from the LLM-assistant, and "get on with the task of developing the idea to see if it works." Maybe try responding to that?Leontiskos

    By ‘getting on with developing the pre-made idea’, do you mean simple intellectual theft? That would indeed be nasty, but I’m trying to make a distinction between stealing and the proper use of an a.i. To use a pre-made idea properly, whether it comes from an a.i. or a primary or secondary human source, is to read it with the aim of interpreting and modifying its sense in the direction of one’s own developing thesis, not blindly plugging the text into one’s work. When one submits a draft to an editor, this is precisely what one does with the ‘pre-made’ reviewers’ recommendations and critiques. Ideas can only be outsourced when one does not filter them critically through one’s own perspective.
  • Leontiskos
    5.2k
    Appeal to authority is fine when the context of discussion includes a specialized discipline. Philosophy is not (or in my view should not be) a specialized discipline. To make that clear, I don't mean that philosophers do not have knowledge about philosophy itself that the average person does not have—it seems clear that they do. I'm referring specifically to discussions that would qualify as philosophical.Janus

    Okay, that's a fair and thoughtful argument. :up:
    "There are no authoritative generalists," says Janus. Of course I think that first sentence should read "only when," no? You are presumably saying that appeal to authority is illegitimate wherever the context is not a specialized discipline?

    Your implicit argument here is that AI is not an authoritative generalist, and therefore should not be treated as one. I think that implicit argument is even more plausible than the more explicit argument you have given, but it is in no way uncontroversial. LLMs are coming to be seen not only as authoritative generalists, but as the authoritative generalist par excellence. I spoke to the issue a little bit in .

    I suppose in a technical sense my position would be that there are authoritative generalists (e.g. a child's parents), that the output of an LLM contains inherent authority even at a general level*—at least in the hands of an intellectually virtuous thinker—and that, nevertheless, LLMs should not be appealed to as authorities in places like TPF. This has to do with the private/public distinction, which would need to be further developed.

    For example, one reason you would not accept an argument from the authority of the Catholic Catechism is because you do not take the Catholic Catechism to be authoritative. If I tried to offer you such an argument, I would be committing a fallacy whereby I offer you a conclusion that is based on a premise that is particular to me, and is not shared by you (i.e. a private premise rather than a publicly-shared premise).

    I think the same thing happens with LLMs, and I think this is one reason (among others) why LLMs are generally inappropriate on a philosophy forum. If we are arguing, I would never accept your argument, "It is true because I say so." I think LLMs are basically argument slaves, and so an appeal-to-LLM argument is the same as, "It is true because my argument slave says so." Even someone who trusts ChatGPT will tend to distrust a philosophical opponent's appeal to ChatGPT, and this is by no means irrational. This is because "ChatGPT" is a fiction. It is not a single thing, and therefore an equivocation is occurring between the opponent's instance of ChatGPT and some sort of objective or public instance of ChatGPT. In order to be a shared authority (in which case the argument from LLM-authority would be valid), the philosopher and his opponent would need to interact with the exact same instance of ChatGPT, agreeing on training, prompting, follow-ups, etc., and the a priori condition is that both parties accept ChatGPT as an authority in the first place.

    I don't think that is a realistic possibility on an argumentative philosophy forum. Even if it were possible, arguments from authority are inherently less philosophical than standard arguments, and are therefore less appropriate on a philosophy forum than standard arguments. It would be a bit like two people working together to get a Magic 8-Ball or Ouija Board to give them secret knowledge. Even if the Magic 8-Ball or Ouija Board were 100% accurate, they would still not be doing philosophy. Arguments from authority have an inherently limited place in philosophy. Even someone like Aquinas calls them the weakest form of argument.


    * An important point that I leave to the side is the fact that an LLM is not an authority in the same way a human being is an authority, and this must be taken into account. We ought not treat the authority of the LLM the same way we treat the authority of a human, given their substantial differences. Part of this goes to the fact that an LLM is not rational, is not a whole, is not self-consciously offering knowledge, etc.
  • Leontiskos
    5.2k
    Arguments from authority have an inherently limited place in philosophy.

    ...

    * An important point that I leave to the side is the fact that an LLM is not an authority in the same way a human being is an authority
    Leontiskos

    I want to add that in philosophy appeals to authority require transparency. So if I appeal to Locke as an authority, a crucial part of the appeal is that Locke's reasoning and argumentation are available to my interlocutor (and this is why appealing to publicly available texts as sources is ideal).

    This is what can never happen with LLMs: "Locke says you are wrong, and Locke is reliable. Feel free to go grab his treatise and have a look."* This is because the LLM is an intermediary; it is itself a giant argument from authority. It is just drawing on various sources and presenting their fundamental data. That's why I've said that one should go to the LLM's sources, rather than appeal to the LLM itself as an authority. The LLM is not a transparent source which can be queried by one's interlocutor, especially insofar as it represents a temporal, conditioned instance of the underlying software. Put more starkly, one cannot question the authority of an LLM on some given topic with some given output, because the LLM doesn't really exist in that way. It is not a person that one can push back against, or disagree with. Those who think it is a person who can be disagreed with have become lost in the fictional language games.

    Indeed, this is why the person using the LLM ought to take the responsibility of the arguments upon themselves, for only in that case is there someone who can be disagreed with or agreed with. If, in the context of a philosophy forum, they merely say, "I believe it because the AI said so," then all public responsibility for the belief has been abdicated. It is only ratified in virtue of the person's private authority, and therefore has no place on a public philosophy forum.


    * To be clear, it can never happen because LLMs do not write treatises, and they are not persons with subsisting existence.
  • BC
    14.1k
    I am cautiously in favor of closing down AI operations for two reasons:

    It's not just a crutch -- it's a motorized wheelchair. Orthopedists want injured patients to get up and walk ASAP, and the sooner they do so without crutches, the better. They certainly don't want modestly (even moderately) injured patients to resort to wheelchairs, powered or not.

    Asking AI for information is a far too easy solution. It pops back in a few seconds -- not with a list of links to look at, but a complete answer in text and more. Seems convenient, but it rapidly undermines one's willingness to look for answers oneself -- and to use search engines to find sources.

    We know that gadgets like smart phones and GPS navigation systems undermine one's memory of telephone numbers (and maybe names too), and that people who constantly use GPS have more difficulty navigating with a map or from memory. The "reptile brain" is good at finding its way around, if it is exercised regularly.

    That's one line of reasoning against AI.

    The other line is this: We do not have a good record of foreseeing adverse consequences of actions a few miles ahead; we do not have a good record of controlling technology (it isn't that it acts on its own -- rather we elect to use it more and more).

    We are prone to build nuclear reactors without having a plan to safely store waste. We don't save ahead for the expensive decommissioning of old plants. We built far, far more atomic bombs than were necessary to "win" a nuclear exchange, and plutonium doesn't compost very well.

    The automobile is an outstanding example of technology driving us.

    We are smart enough to invent a real artificial intelligence (not quite there yet) but we are clearly not smart enough to protect ourselves from it.

    So, what happens here on TPF is a drop in a favorite bucket, but still a good example of what happens.
  • Leontiskos
    5.2k
    First thing is that I have been surprised at how reasonable an answer you get.apokrisis

    I agree, depending on the context. In more specialized areas they simply repeat the common misconceptions.

    So I am only pointing to how LLMs could improve standards if they became part of the intellectual jousting. A new habit with a low cost of effort. How folk would actually make use of this possibility is another matter.apokrisis

    Yeah, that's fair. It could improve standards in that way. At the same time, others have pointed out how it will also magnify blind spots and social fallacies. I would definitely be interested in a study looking at the characteristic reliabilities and unreliabilities of LLM technology, or more generally of the underlying methodological philosophy.

    Now given that you understand that LLMs use fake reasoning (which I will call "material reasoning"), you might respond by asking what happens if the material reasoning shows one to be wrong.Leontiskos

    I wasn’t actually thinking about LLMs being trusted to analyse arguments as they are clearly bad at that.apokrisis

    Me neither. I was assuming we agree that all LLM output is fake reasoning.

    Again my point is that LLMs could have advantages if used in good faith. And given think tanks and actual philosophy departments are going to have to figure out how to factor LLMs into their practice, it seems Luddite not to be doing the same on TPF, even if their bad faith use is almost to be expected.apokrisis

    When deciding whether to adopt some technology within some institution, I would want to look at the advantages and disadvantages of adopting that technology in relation to the nature of the institution. So while I agree that they could have advantages if used properly, I think more is needed to justify widespread adoption in a context such as TPF.

    I would sort of prefer a philosophy forum where everyone is involved in a lot of falsehood but is nevertheless involved in genuine human reasoning, as opposed to a philosophy forum where there is a great deal of factual accuracy but there isn't much genuine human reasoning occurring. I think we would probably have to hash out our agreements or disagreements on the telos of the forum. I don't mind so much when a nutty poster writes an immaculately valid and rigorous argument from crackpot premises, because a thread is an open field for rational engagement. But if LLMs would not lead to the degradation of rational argument and to the outsourcing of thinking, then there would be no problem.
  • Banno
    29k
    Do we accept philosophical arguments because of their authority - literally, their authorship - or because of their content?

    Ought one reject an otherwise excellent OP because it is AI generated?

    Well, yes. Yet we should be clear as to why we take this stance.

    We hold the author to account for their post. We cannot do that with an AI. We interpret the post by giving consideration to the intentional context of the writer, to their needs and wants, to their background, aims and norms. We cannot attribute such things to an AI.

    This is not epistemic or ethical reasoning so much as aesthetic.
  • apokrisis
    7.7k
    I would sort of prefer a philosophy forum where everyone is involved in a lot of falsehood but is nevertheless involved in genuine human reasoning, as opposed to a philosophy forum where there is a great deal of factual accuracy but there isn't much genuine human reasoning occurring.Leontiskos

    Well now you are explaining the quirky appeal of TPF. And wanting to construct a preservation society around that.

    Which is fair enough. I agree that if you get enough of the highly constrained approach to speculation elsewhere, then it is fun to drop in on the bat-shit crazy stuff living alongside the po-faced academic stuff, all having to rub along and occasionally go up in flames.

    So if that is genuine human reasoning in the wild, that would be why TPF would have to be turned into @baden's game park. Save this little corner of unreason for posterity. Once the larger world has been blanded out by LLMs, folk can come visit and see how humans used to be. :grin:

    Certainly a valid argument in that.
  • apokrisis
    7.7k
    We hold the author to account for their post. ... This is not epistemic or ethical reasoning so much as aesthetic.Banno

    So the essence of TPF is that we have feelings about the authors of posts. And they must also respond with feeling. Sounds right. Now we are getting down to it. :up:
  • Leontiskos
    5.2k
    Ought one reject an otherwise excellent OP because it is AI generated?

    Well, yes. Yet we should be clear as to why we take this stance.
    Banno

    Right, and therefore we must ask the question:

    We hold the author to account for their post. We cannot do that with an AI. We interpret the post by giving consideration to the intentional context of the writer, to their needs and wants, to their background, aims and norms. We cannot attribute such things to an AI.

    This is not epistemic or ethical reasoning so much as aesthetic.
    Banno

    Why is it aesthetic, and how does calling it 'aesthetic' provide us with an answer to the question of "why we take this stance"?
  • Leontiskos
    5.2k
    Ought one reject an otherwise excellent OP because it is AI generated?Banno

    Regarding the nature of a contextless AI utterance:

    The LLM is not a transparent source which can be queried by one's interlocutor... Put more starkly, one cannot question the authority of an LLM on some given topic with some given output, because the LLM doesn't really exist in that way. It is not a person that one can push back against, or disagree with. Those who think it is a person who can be disagreed with have become lost in the fictional language games.

    Indeed, this is why the person using the LLM ought to take the responsibility of the arguments upon themselves, for only in that case is there someone who can be disagreed with or agreed with.
    Leontiskos

    If there is no arguer, then there is no one to argue with. If we found a random piece of anonymous philosophy we would be able to interact with it in only very limited ways. If it washed up on the beach in a bottle, I wouldn't read it, place my objections in the bottle, and send it back out to sea. That's one of the basic reasons why AI OPs make no sense. It would make as much sense to respond to an AI OP as to send my objections back out to sea. One has no more recourse with respect to an AI OP than one does with respect to a message in a bottle.

    The whole thing comes down to the fact that there is some human being who is arguing a point via an LLM, whether or not they do it transparently. The problem is not aesthetic. The problem is that it is a metaphysical impossibility to argue with an LLM. The reason TPF is not a place where you argue with LLMs is because there are no places where you argue with LLMs. When someone gets in an argument with an LLM they have become caught up in a fictional reality. What is occurring is not an actual argument.

    The closest parallel is where someone on TPF writes an OP and then gets banned before even a single reply is published. What to do with that thread is an interesting question. The mods could close it down or keep it open, but if it is kept open it will be approached as a kind of artifact; a piece of impersonal, contextless, perspectiveless reasoning, offering no recourse to the one who finds it. But this is still only a mild parallel, given that the argument was produced by a real arguer, which is never the case with the AI OP. Or in other words: an AI OP could never even exist in the strict sense. The closest possibility is some human who is using their LLM argument slave to say something they want said. In that case the response is made to the one pulling the strings of the argument slave, not to their puppet.

    (Note that a rule against using an AI without attribution precludes the possibility that one is misdirecting their replies to the puppet instead of the puppeteer, and that is a good start.)
  • apokrisis
    7.7k
    The reason TPF is not a place where you argue with LLMs is because there are no places where you argue with LLMs. When someone gets in an argument with an LLM they have become caught up in a fictional reality. What is occurring is not an actual argument.Leontiskos

    But your deepest arguments are the ones you are willing to have against yourself. Which is how I structured my own early practice once word processors made it practical to take a deeply recursive approach to note taking.

    And I think @Joshs example of his own conversation with an LLM quoted back on p6 - “What are we to make of the status of concepts like self and other, subject and object in Wittgenstein’s later work? Must they be relative to the grammar of a language game or form of life?” - is a great example of using LLMs in this same recursive and distilling fashion.

    So it feels like a fork in the road here. Anyone serious about intellectual inquiry is going to be making use of LLMs to deepen their own conversation with themselves.

    And then there is TPF as a fairly unserious place to learn about the huge variety of inner worlds that folk may construct for themselves.

    How does TPF respond to this new technology of LLM thought assistance and recursive inquiry? Does it aim to get sillier or smarter? More a social club/long running soap opera or more of an open university for all comers?

    It would seem to me that this is still a time for experimenting rather than trying to ring fence the site. TPF is basically an anarchy anyway. It may get better, it may get worse. But the basic dynamic is already locked in by priors such as the anonymity of the posters, the diversity of the internet and the back and forth haphazard nature of flinging posts into the ether with only a modest expectation of a helpful response.

    So for you, TPF might not be a place to do this or that. But if you have a clear vision about what it is indeed for, then LLMs are a thought amplifying technology. You could experiment and see what better thing might take.

    I mean it won’t. But you can have fun trying.
  • Leontiskos
    5.2k
    But your deepest arguments are the ones you are willing to have against yourself.apokrisis

    I want to say that you are using "argument" in a special sense here. You avoid the term later on:

    Anyone serious about intellectual inquiry is going to be making use of LLMs to deepen their own conversation with themselves.apokrisis

    I would just call this a form of reasoning by oneself. I agree that it is good to reason with oneself, but I don't think TPF is the place where you do that. Whether you do it with a word processor or an LLM, I want to say that in either case it is still a form of person-to-person interaction. It's not as though you get a random email from an LLM containing an essay it wrote. You are the one setting the LLM into motion for your own purposes.

    But perhaps you want to personify the forum itself and claim that this forum-person ought to be interacting with itself via an LLM. I have no real objection to this, but I think you would be surprised at all of the deleted threads that prompt these rules in the first place. People who are interacting with LLMs know that they are not interacting with a person, and as a result they go to an internet forum and say, "Hey, my LLM just said this! Isn't this interesting? What do you guys think?," followed by a giant wall of AI-generated text.

    It would seem to me that this is still a time for experimenting rather than trying to ring fence the site.apokrisis

    It's a point worth considering. While I don't necessarily agree, I don't think there is much danger in making mistakes with the rules. I expect the rule will begin lenient and grow stricter as it becomes necessary, and in theory I agree with you that, in general, one should begin with a more lenient approach and tighten it up over time.

    How would you regulate LLM use on a forum such as this?
  • ssu
    9.5k
    I agree, but my point is a bit different. Suppose all my posts are LLM-generated content, and this is undisclosed. This is against the forum rules as they currently stand. But now suppose that all my posts are LLM-generated content, and this is disclosed. Thus for every one of my LLM-generated posts, I enclose it in quote brackets and prepend the clause, "I agree with what the LLM says here:..." This is not against the forum rules as they are currently being interpreted. That seems odd to me, and it makes me think that the mere matter of disclosure doesn't get to the heart of the issue.Leontiskos
    If all of your posts are LLM-generated, what's the point?

    We aren't in a classroom and aren't getting any points or merit for the interaction on TPF. There's nothing for me to gain by getting over 10 000 posts here. Anyway, if someone is clueless, LLM-generated content won't help them. I assume that if someone uses LLM-generated content, he or she at least reads it first! And the vast majority of the time, people respond to others' comments, not just start threads.

    LLM-generated content is rather good at simple things like definitions. So you don't have to look them up on Wikipedia or some other online encyclopedia. Especially for someone like me, whose mother tongue isn't English, checking the meanings and definitions of words is important. If one can get a great, understandable definition and synopsis of Heidegger's Dasein, great! No problem.

    But using LLM-generated responses and OPs all the time? People will notice. It's similar to copy-pasting text from somebody else... if one doesn't even bother to rewrite the thing in one's own wording, then the accusation of plagiarism is justified. Hence if you get your answer/comment from an LLM and then change the wording, I think you have done what @Banno marked as "groundwork". Is it hypocritical? Nah. A lot of what we say as our own reasoning has been learnt from others anyway.

    In the end I think this is really on the level of using social media and the ban on sharing viral clips. Just posting some video etc. from social media isn't a worthy thing for TPF, yet naturally when the social media post adds something to the whole discussion, one can reference it. This is something similar.
  • apokrisis
    7.7k
    You are the one setting the LLM into motion for your own purposes.Leontiskos

    Well yes. Just like tossing a post into the TPF bear pit.

    But one is casting a very wide net. You can do some rapid prototyping without having to be too polished. Publish the roughest top-of-the-head draft.

    The other has the promise of accelerating the polishing part of some argument which you have just tossed out to see if even you still think it might fly. :wink:

    People who are interacting with LLMs know that they are not interacting with a person, and as a result they go to an internet forum and say, "Hey, my LLM just said this! Isn't this interesting? What do you guys think?," followed by a giant wall of AI-generated text.Leontiskos

    And I agree that there should be constraints on low-effort posting. It is standard practice for posters to simply assert your wrongness and scamper off without providing any argument. Just muttering excuses about it being lunchtime.

    So yes, if one makes an effort, then one wants others to return that effort. Perfectly reasonable.

    And cutting and pasting LLM cleverness is something to object to, even on a forum that seems remarkably tolerant of low effort OPs and responses.

    While I don't necessarily agree, I don't think there is much danger in making mistakes with the rules.Leontiskos

    OK. So that is part of the experimenting too. :up:

    How would you regulate LLM use on a forum such as this?Leontiskos

    I mentioned some ground rule ideas already. But I'm not really big on rules, being more of a constraints-based guy. And as I said, a public discussion board on philosophy is already going to wind up as a forum much like the one we see.

    So I say I am annoyed by low effort responses. But that just goes with the territory. Mandating high effort would be ridiculous.

    But banning LLM generated OPs, and clamping down on masquerading cut-and-paste brilliance, seems quite doable. The mods say this is the priority I think.

    Then if LLMs do turn low effort posters into folk who can focus well enough to at least sense some flaw in your argument and drum up an instant "but AI says..." riposte, then that seems a step forward to me.

    That could be the experiment to see how it goes. But you might have to add subclauses, such as: if you deploy the insta-LLM text, you then still have to defend it after that. You have to take the risk of being forced into a higher-effort mode as a result of being low effort.

    At the moment, there is no comeback at all on the insta-responses along the lines of "you're just wrong, I can't understand you, the lunch gong just rang".
  • Baden
    16.7k
    How does TPF respond to this new technology of LLM thought assistance and recursive inquiry? Does it aim to get sillier or smarter? More a social club/long running soap opera or more of an open university for all comers?apokrisis

    It gets sillier when people outsource their thinking and writing skills to AI. Although in your case it might be worthwhile to make an exception so we wouldn't have to listen to all the snide, badly thought-out criticisms of the mods and the site that you just can't help spitting out to make yourself feel superior.

    You consistently ignore posts that don't fit your narrative that we're backward anti-AI etc., so you can play your silly game. Get a new hobby. Start listening. Realize there are intelligent people here who can think and see through your twaddle. I mean just read what you've written above in the context of the conversation. Reflect a little on how transparent you are. Develop some self-awareness.
  • Baden
    16.7k
    I mean how hard is it to understand the following, which @apokrisis just really can't manage to get no matter how many times we repeat it:

    1) We're happy for people to experiment with AI outside the site, improve themselves with it, test their arguments, sharpen their mind. [Positive use of AI / Positive for site]

    2) We're not happy for people to be so lazy they don't write their own posts and then fill our site with bland homogenised content. [Negative use of AI / Negative for site]

    3) This approach is exactly the right one to encourage intellectual effort and integrity as well as to maintain diversity of content. The idea that it will turn us into a "soap opera" rather than apo's imaginary open university / AI utopia is utter nonsense.

    I cannot make it any more ABC for APO. But nonetheless, I'm sure he has not exhausted his reservoir of self-inflating B.S.
  • Harry Hindu
    5.8k
    Asking AI for information is a far too easy solution. It pops back in a few seconds -- not with a list of links to look at, but a complete answer in text and more. Seems convenient, but it rapidly undermines one's willingness to look for answers oneself -- and to use search engines to find sources.BC
    Well, yeah. The problem isn't AI. It is using AI, or any source, as your only source.
  • Harry Hindu
    5.8k
    We hold the author to account for their post. We cannot do that with an AI. We interpret the post by giving consideration to the intentional context of the writer, to their needs and wants, to their background, aims and norms. We cannot attribute such things to an AI.Banno
    Most of us are not aware of other members' backgrounds, aims and norms. These things are irrelevant to the discussion. The focus on the source rather than the content is a known logical fallacy - the genetic fallacy.
  • Harry Hindu
    5.8k
    So you think philosophy is always bad or poor, and therefore those words would be redundant? Philosophy is not entirely reliant on science, although I agree that a philosophy which does not take science into account would be poor or bad.Janus
    "Bad" and "poor" were your words, not mine. All I am saying is that any progress in philosophy is dependent upon progress in science and technology. The last sentence sounds like we agree except for your injection of "bad" and "poor" into it.
  • Fire Ologist
    1.7k
    backgrounds, aims and norms. These things are irrelevant to the discussion. The focus on the source rather than the content is a known logical fallacy - the genetic fallacy.Harry Hindu

    I disagree. When you are presented with something new and unprecedented, the source matters to you when assessing how to address the new, unprecedented information. You hear “The planet Venus has 9 small moons.” You think, “how did I not know that?” If the next thing you learned was that this came from a six-year-old kid, you might do one thing with the new fact of nine moons around Venus; if you learned it came from NASA, you might do something else; and if it came from AI, you might go to NASA to check.

    Backgrounds, aims and norms are not irrelevant to determining what something is. They are part of the context out of which things emerge, and that shapes what things in themselves are.

    We do not want to live in a world where it doesn’t matter to anyone where information comes from. Especially where AI is built to confuse the fact that it is a computer.