• Metaphysician Undercover
    14.3k
In my view, information is everywhere you care to lookHarry Hindu

    I agree, information is everywhere. But I differentiate between information and knowledge. And in my view information is not the source of knowledge because no matter how long information may hang around for, knowledge will not simply emerge from it. So, knowledge has a source which is distinctly not information.

    AI can do the same thing ... when promptedHarry Hindu
    Obviously, it's not "the same thing" then.
  • baker
    5.8k
    It's not black and white overall because I agree that AIs can be used positively, and they've been very helpful to me, especially in long philosophical back and forths that aid in clarifying certain ideas etc. That has made me more productiveBaden
    More productive?
    What gets to me is that consulting online sources like LLMs takes so much time. Who has the time and the will to study thousands of words spat out by a machine? I'd rather think things through myself, even if this means spending the same amount of time, or even more. It will be time well spent, it will feel like quality time, a mind well used.


By that criterion, even philosophically, I'm not banning LLMs insofar as it fits that goal. And really I don't see what you've said as a harmful use --

    i.e. checking your own arguments, etc.
    Moliere
    But this is what conversation is for. I think it's appealing to put oneself out there, understanding that one may have possible vulnerabilities, gaps, etc. That's when one can learn best.
  • Leontiskos
    5.2k
    On the other hand, on the specific area of plagiarism, I've seen at my own place of work what a "relaxed" attitude to the deployment of this technology in academia produces, and that is the destruction of standards.Baden

    Regarding plagiarism, I think it's worth trying to understand the most obvious ways in which the problem deviates from a problem of plagiarism. First, plagiarism is traditionally seen as an unjust transgression against the original author, who is not being justly recognized and compensated for their work. On that reading, an aversion to plagiarism is a concern for the rights of the LLM. Second, plagiarism is seen (by teachers) as hamstringing the student's potential, given that the student is not doing the work that they ought to be doing in order to become an excellent philosopher/writer/thinker. On that reading, an aversion to plagiarism is a concern for the philosophical development of TPF members.

    But I think the real things that you are concerned with are actually 1) the plight of the reader who does not understand that they are interacting with an LLM rather than a human; and 2) the unhealthy forum culture that widespread use of LLMs would create. Those concerns are not the primary things that "plagiarism" connotes. Sometimes I worry that by talking about plagiarism we are obscuring the real issues, though I realize that you may have simply given the plagiarism in your workplace as a parallel example.

    ---

When is the day when we find out that @Leontiskos with his respectable 5,000+ posts is actually a smart bot / AI? Now we can be confident you aren't, because two years ago AI wasn't so good...

Yes, the fear of thinking that you are engaged with real people interested in philosophy when, actually, you're only engaging with computers, and all your great ideas vanish into the dead emptiness of Turing machines just computing on and on. That would be a dismal future.
    ssu

    I agree, but my point is a bit different. Suppose all my posts are LLM-generated content, and this is undisclosed. This is against the forum rules as they currently stand. But now suppose that all my posts are LLM-generated content, and this is disclosed. Thus for every one of my LLM-generated posts, I enclose it in quote brackets and prepend the clause, "I agree with what the LLM says here:..." This is not against the forum rules as they are currently being interpreted. That seems odd to me, and it makes me think that the mere matter of disclosure doesn't get to the heart of the issue.
  • Janus
    17.6k
    A huge aspect of this is the nature of appeals to authority, and given that TPF has an anti-religious bent, many of the members have not thought very deeply on the nature of appeals to authority (despite the fact that they occur quite often when it comes to SEP, IEP, Wittgenstein, etc.).

    Whether or not the LLM is a legitimate authority and is trustworthy is at the root of many of these differences. It is the question of whether any given LLM-citation is an organic argument or an argument from authority, and also of whether the latter case is illegitimate.
    Leontiskos

Appeal to authority is fine when the context of discussion includes a specialized discipline. Philosophy is not (or in my view should not be) a specialized discipline. To make that clear, I don't mean that philosophers do not have knowledge about philosophy itself that the average person does not—it seems clear that they do. I'm referring specifically to discussions that would qualify as philosophical.

    You mention religion—I would not count it as a specialized discipline, in the sense of being an evolving body of knowledge and understanding like science, because although it is a space of ideas as philosophy is, in the case of religion the ideas take the form of dogma and are not to be questioned but are to be believed on the basis of authority.

    And likely written by Baden without AI, because backrground was misspelled.ssu

    And misspelled again!

No. I didn't. When has philosophy ever provided an answer to any of our questions? Philosophy piggy-backs on the discoveries of science. It is only when science and technology progress that philosophy progresses (with AI being an example of how it brought new life to discussions about mind and body).Harry Hindu

    So you think philosophy is always bad or poor, and therefore those words would be redundant? Philosophy is not entirely reliant on science, although I agree that a philosophy which does not take science into account would be poor or bad.
  • Joshs
    6.5k
    What was being argued was that the research required to put together an idea is tedious and outsourceable, and that what one should do is outsource that research, take the pre-made idea from the LLM-assistant, and "get on with the task of developing the idea to see if it works." Maybe try responding to that?Leontiskos

By ‘getting on with developing the pre-made idea’, do you mean simple intellectual theft? That would indeed be nasty, but I’m trying to make a distinction between stealing and the proper use of an a.i. To use a pre-made idea properly, whether it comes from an a.i. or a primary or secondary human source, is to read it with the aim of interpreting and modifying its sense in the direction of one’s own developing thesis, not blindly plugging the text into one’s work. When one submits a draft to an editor, this is precisely what one does with the ‘pre-made’ reviewers’ recommendations and critiques. Ideas can only be outsourced when one does not filter them critically through one’s own perspective.
  • Leontiskos
    5.2k
Appeal to authority is fine when the context of discussion includes a specialized discipline. Philosophy is not (or in my view should not be) a specialized discipline. To make that clear, I don't mean that philosophers do not have knowledge about philosophy itself that the average person does not—it seems clear that they do. I'm referring specifically to discussions that would qualify as philosophical.Janus

    Okay, that's a fair and thoughtful argument. :up:
    "There are no authoritative generalists," says Janus. Of course I think that first sentence should read "only when," no? You are presumably saying that appeal to authority is illegitimate wherever the context is not a specialized discipline?

    Your implicit argument here is that AI is not an authoritative generalist, and therefore should not be treated as one. I think that implicit argument is even more plausible than the more explicit argument you have given, but it is in no way uncontroversial. LLMs are coming to be seen not only as authoritative generalists, but as the authoritative generalist par excellence. I spoke to the issue a little bit in .

I suppose in a technical sense my position would be that there are authoritative generalists (e.g. a child's parents), that the output of an LLM contains inherent authority even at a general level*—at least in the hands of an intellectually virtuous thinker—and that, nevertheless, LLMs should not be appealed to as authorities in places like TPF. This has to do with the private/public distinction, which would need to be further developed.

    For example, one reason you would not accept an argument from the authority of the Catholic Catechism is because you do not take the Catholic Catechism to be authoritative. If I tried to offer you such an argument, I would be committing a fallacy whereby I offer you a conclusion that is based on a premise that is particular to me, and is not shared by you (i.e. a private premise rather than a publicly-shared premise).

I think the same thing happens with LLMs, and I think this is one reason (among others) why LLMs are generally inappropriate on a philosophy forum. If we are arguing, I would never accept your argument, "It is true because I say so." I think LLMs are basically argument slaves, and so an appeal-to-LLM argument is the same as, "It is true because my argument slave says so." Even someone who trusts ChatGPT will tend to distrust a philosophical opponent's appeal to ChatGPT, and this is by no means irrational. This is because "ChatGPT" is a fiction. It is not a single thing, and therefore an equivocation is occurring between the opponent's instance of ChatGPT and some sort of objective or public instance of ChatGPT. In order to be a shared authority (in which case the argument from LLM-authority would be valid), the philosopher and his opponent would need to interact with the exact same instance of ChatGPT, agreeing on training, prompting, follow-ups, etc., and the a priori condition is that both parties accept ChatGPT as an authority in the first place.

    I don't think that is a realistic possibility on an argumentative philosophy forum. Even if it were possible, arguments from authority are inherently less philosophical than standard arguments, and are therefore less appropriate on a philosophy forum than standard arguments. It would be a bit like two people working together to get a Magic 8-Ball or Ouija Board to give them secret knowledge. Even if the Magic 8-Ball or Ouija Board were 100% accurate, they would still not be doing philosophy. Arguments from authority have an inherently limited place in philosophy. Even someone like Aquinas calls them the weakest form of argument.


    * An important point that I leave to the side is the fact that an LLM is not an authority in the same way a human being is an authority, and this must be taken into account. We ought not treat the authority of the LLM the same way we treat the authority of a human, given their substantial differences. Part of this goes to the fact that an LLM is not rational, is not a whole, is not self-consciously offering knowledge, etc.
  • Leontiskos
    5.2k
    Arguments from authority have an inherently limited place in philosophy.

    ...

    * An important point that I leave to the side is the fact that an LLM is not an authority in the same way a human being is an authority
    Leontiskos

    I want to add that in philosophy appeals to authority require transparency. So if I appeal to Locke as an authority, a crucial part of the appeal is that Locke's reasoning and argumentation are available to my interlocutor (and this is why appealing to publicly available texts as sources is ideal).

    This is what can never happen with LLMs: "Locke says you are wrong, and Locke is reliable. Feel free to go grab his treatise and have a look."* This is because the LLM is an intermediary; it is itself a giant argument from authority. It is just drawing on various sources and presenting their fundamental data. That's why I've said that one should go to the LLM's sources, rather than appeal to the LLM itself as an authority. The LLM is not a transparent source which can be queried by one's interlocutor, especially insofar as it represents a temporal, conditioned instance of the underlying software. Put more starkly, one cannot question the authority of an LLM on some given topic with some given output, because the LLM doesn't really exist in that way. It is not a person that one can push back against, or disagree with. Those who think it is a person who can be disagreed with have become lost in the fictional language games.

    Indeed, this is why the person using the LLM ought to take the responsibility of the arguments upon themselves, for only in that case is there someone who can be disagreed with or agreed with. If, in the context of a philosophy forum, they merely say, "I believe it because the AI said so," then all public responsibility for the belief has been abdicated. It is only ratified in virtue of the person's private authority, and therefore has no place on a public philosophy forum.


    * To be clear, it can never happen because LLMs do not write treatises, and they are not persons with subsisting existence.
  • BC
    14.1k
    I am cautiously in favor of closing down AI operations for two reasons:

It's not just a crutch -- it's a motorized wheelchair. Orthopedists want injured patients to get up and walk ASAP, and the sooner they do so without crutches, the better. They certainly don't want modestly (or even moderately) injured patients to resort to wheelchairs, powered or not.

Asking AI for information is a far too easy solution. It pops back in a few seconds -- not with a list of links to look at, but with a complete answer in text and more. Seems convenient, but it rapidly undermines one's willingness to look for answers oneself -- and to use search engines to find sources.

We know that gadgets like smart phones and GPS navigation systems undermine one's memory of telephone numbers (and maybe names too), and people who constantly use GPS have more difficulty navigating with a map or from memory. The "reptile brain" is good at finding its way around, if it is exercised regularly.

    That's one line of reasoning against AI.

    The other line is this: We do not have a good record of foreseeing adverse consequences of actions a few miles ahead; we do not have a good record of controlling technology (it isn't that it acts on its own -- rather we elect to use it more and more).

    We are prone to build nuclear reactors without having a plan to safely store waste. We don't save ahead for the expensive decommissioning of old plants. We built far, far more atomic bombs than were necessary to "win" a nuclear exchange, and plutonium doesn't compost very well.

    The automobile is an outstanding example of technology driving us.

    We are smart enough to invent a real artificial intelligence (not quite there yet) but we are clearly not smart enough to protect ourselves from it.

    So, what happens here on TPF is a drop in a favorite bucket, but still a good example of what happens.
  • Leontiskos
    5.2k
    First thing is that I have been surprised at how reasonable an answer you get.apokrisis

    I agree, depending on the context. In more specialized areas they simply repeat the common misconceptions.

    So I am only pointing to how LLMs could improve standards if they became part of the intellectual jousting. A new habit with a low cost of effort. How folk would actually make use of this possibility is another matter.apokrisis

    Yeah, that's fair. It could improve standards in that way. At the same time, others have pointed out how it will also magnify blind spots and social fallacies. I would definitely be interested in a study looking at the characteristic reliabilities and unreliabilities of LLM technology, or more generally of the underlying methodological philosophy.

    Now given that you understand that LLMs use fake reasoning (which I will call "material reasoning"), you might respond by asking what happens if the material reasoning shows one to be wrong.Leontiskos

    I wasn’t actually thinking about LLMs being trusted to analyse arguments as they are clearly bad at that.apokrisis

    Me neither. I was assuming we agree that all LLM output is fake reasoning.

    Again my point is that LLMs could have advantages if used in good faith. And given think tanks and actual philosophy departments are going to have to figure out how to factor LLMs into their practice, it seems Luddite not to be doing the same on TPF, even if their bad faith use is almost to be expected.apokrisis

    When deciding whether to adopt some technology within some institution, I would want to look at the advantages and disadvantages of adopting that technology in relation to the nature of the institution. So while I agree that they could have advantages if used properly, I think more is needed to justify widespread adoption in a context such as TPF.

I would sort of prefer a philosophy forum where everyone is involved in a lot of falsehood but is nevertheless involved in genuine human reasoning, as opposed to a philosophy forum where there is a great deal of factual accuracy but there isn't much genuine human reasoning occurring. I think we would probably have to hash out our agreements or disagreements on the telos of the forum. I don't mind so much when a nutty poster writes an immaculately valid and rigorous argument from crackpot premises, because the thread is an open field for rational engagement. But if LLMs did not lead to the degradation of rational argument and to the outsourcing of thinking, then there would be no problem.
  • Banno
    28.9k
    Do we accept philosophical arguments because of their authority - literally, their authorship - or because of their content?

    Ought one reject an otherwise excellent OP because it is AI generated?

    Well, yes. Yet we should be clear as to why we take this stance.

    We hold the author to account for their post. We cannot do that with an AI. We interpret the post by giving consideration to the intentional context of the writer, to their needs and wants, to their background, aims and norms. We cannot attribute such things to an AI.

    This is not epistemic or ethical reasoning so much as aesthetic.
  • apokrisis
    7.7k
    I would sort of prefer a philosophy forum where everyone is involved in a lot of falsehood but is nevertheless involved in genuine human reasoning, as opposed to a philosophy forum where there is a great deal of factual accuracy but there isn't much genuine human reasoning occurring.Leontiskos

    Well now you are explaining the quirky appeal of TPF. And wanting to construct a preservation society around that.

    Which is fair enough. I agree that if you get enough of the highly constrained approach to speculation elsewhere, then it is fun to drop in on the bat-shit crazy stuff living alongside the po-faced academic stuff, all having to rub along and occasionally go up in flames.

    So if that is genuine human reasoning in the wild, that would be why TPF would have to be turned into @baden's game park. Save this little corner of unreason for posterity. Once the larger world has been blanded out by LLMs, folk can come visit and see how humans used to be. :grin:

    Certainly a valid argument in that.
  • apokrisis
    7.7k
    We hold the author to account for their post. ... This is not epistemic or ethical reasoning so much as aesthetic.Banno

    So the essence of TPF is that we have feelings about the authors of posts. And they must also respond with feeling. Sounds right. Now we are getting down to it. :up: