Ought one reject an otherwise excellent OP because it is AI generated? — Banno
The LLM is not a transparent source which can be queried by one's interlocutor... Put more starkly, one cannot question the authority of an LLM on some given topic with some given output, because the LLM doesn't really exist in that way. It is not a person that one can push back against, or disagree with. Those who think it is a person who can be disagreed with have become lost in the fictional language games.
Indeed, this is why the person using the LLM ought to take the responsibility of the arguments upon themselves, for only in that case is there someone who can be disagreed with or agreed with. — Leontiskos
Ought one reject an otherwise excellent OP because it is AI generated?
Well, yes. Yet we should be clear as to why we take this stance. — Banno
We hold the author to account for their post. We cannot do that with an AI. We interpret the post by giving consideration to the intentional context of the writer, to their needs and wants, to their background, aims and norms. We cannot attribute such things to an AI.
This is not epistemic or ethical reasoning so much as aesthetic. — Banno
I'd rather say that they have both the doxastic and conative components but are mostly lacking on the side of conative autonomy. As a result, their intelligence, viewed as a capacity to navigate the space of reasons, splits at the seam between cleverness and wisdom. In Aristotelian terms, they have phronesis (to some extent), since they often know what the right thing to do is in this or that particular context, without displaying virtue, since they don't have an independent motivation to do it (or convince their users that they should do it). This disconnect doesn't normally happen in the case of human beings, since phronesis (the epistemic ability) and virtue (the motivational structure) grow and maintain themselves (and are socially scaffolded) interdependently. — Pierre-Normand
Those are questions that I spend much time exploring rather than postponing, even though I haven't arrived at definitive answers, obviously. But one thing I've concluded is that, rather than it being a matter of all or nothing, or a matter of degree along a linear scale, the ascription of mental states or human capabilities to LLM-based chatbots often is rendered problematic by the divergence of our ordinary criteria of application. Criteria that normally are satisfied together in the case of human beings are satisfied separately in the case of chatbots. — Pierre-Normand
Maybe it looks confusing because it is. I mean that assessing the nature of our "conversations" with chatbots is confusing, not because of a conceptual muddle that my use of scare quotes merely papers over... — Pierre-Normand
...but rather because chatbots are mongrels. They have "brains" that have been enculturated through exposure to a massive body* of human knowledge, lore and wisdom (and prejudices) but they don't have human bodies, lack human motivations and aren't persons. — Pierre-Normand
LLMs aren't AIs that we build... — Pierre-Normand
The first thing is that I have been surprised at how reasonable an answer you get. — apokrisis
So I am only pointing to how LLMs could improve standards if they became part of the intellectual jousting. A new habit with a low cost of effort. How folk would actually make use of this possibility is another matter. — apokrisis
Now given that you understand that LLMs use fake reasoning (which I will call "material reasoning"), you might respond by asking what happens if the material reasoning shows one to be wrong. — Leontiskos
I wasn’t actually thinking about LLMs being trusted to analyse arguments as they are clearly bad at that. — apokrisis
Again, my point is that LLMs could have advantages if used in good faith. And given that think tanks and actual philosophy departments are going to have to figure out how to factor LLMs into their practice, it seems Luddite not to be doing the same on TPF, even if their bad-faith use is almost to be expected. — apokrisis
Arguments from authority have an inherently limited place in philosophy.
...
* An important point that I leave to the side is the fact that an LLM is not an authority in the same way a human being is an authority — Leontiskos
It's just easier to do it with an AI, there's so much less at stake, it's so safe, and you don't really have to put any skin in the game. So it's highly questionable how effective such practice really is. — baker
Appeal to authority is fine when the context of discussion includes a specialized discipline. Philosophy is not (or in my view should not be) a specialized discipline. To make that clear: I don't mean that philosophers do not have knowledge about philosophy itself that the average person does not—it seems clear that they do. I'm referring specifically to discussions that would qualify as philosophical. — Janus
But being intelligent, or wise, in the case of persons (i.e. socialized, enculturated rational animals), always has two tightly integrated components: one doxastic and one conative.
[...]
Chatbots lack conative autonomy and hence pursue whichever paths they think their users want to explore (or, worse, that merely lead to the outcomes they think their users want to achieve, while having the mere appearance of soundness). So, they lack part of what it takes to be wise, but that's not because they aren't smart or knowledgeable enough to be useful conversation partners. — Pierre-Normand
I think they have motivations, just like a dog is motivated to run after a car, but their motivations aren't autonomous since they seldom pause to question them. — Pierre-Normand
I think the rational structure of their responses and their reinforced drive to provide accurate responses warrant ascribing beliefs to them, although those beliefs are brittle and non-resilient. One must still take a Dennettian intentional stance towards them to make sense of their responses (which necessitates ascribing them both doxastic and conative states), or interpret their responses through Davidson's constitutive ideal of rationality. But I think your insight that they aren't thereby making moves in our language game is sound. The reason why they aren't is because they aren't persons with personal and social commitments and duties, and with a personal stake in the game. But they can roleplay as a person making such moves (when instructed to do so) and do so intelligently and knowledgeably. — Pierre-Normand
On the other hand, on the specific area of plagiarism, I've seen at my own place of work what a "relaxed" attitude to the deployment of this technology in academia produces, and that is the destruction of standards. — Baden
When will the day come when we find out that @Leontiskos, with his respectable 5,000+ posts, is actually a smart bot / AI? Now we can be confident you aren't, because two years ago AI wasn't so good...
Yes, the fear is that you think you are engaging with real people interested in philosophy when actually you're only engaging with computers, and all your great ideas vanish into the dead emptiness of Turing machines just computing on and on. That would be a dismal future. — ssu
Do whatever you want in the background with AI, but write your own content. Don't post AI-generated stuff here. — Baden
The culture of rational inquiry would seem to be what we most would value. — apokrisis
But this is TPF after all. Let's not get carried away about its existing standards. :smile: — apokrisis
If LLMs are the homogenised version of what everyone tends to say, then why aren't they a legitimate voice in any fractured debate? Like the way sport is now refereed by automated line calls and slo-mo replays. — apokrisis
I'm not arguing this is necessary. But why would a method of adjudication be bad for the quality of the philosophy rather than just be personally annoying to whoever falls on the wrong side of some LLM call? — apokrisis
So I can imagine LLMs both upping the bar and also being not at all the kind of thing folk would want to see on TPF for other human interaction reasons. — apokrisis
But what if this shows you are indeed wrong? What then?
Sure, it will be irritating. But also preferable to the ducking and diving that is the norm when someone is at a loss with their own line of argument.
You seem to be describing a situation where you were winning the human interaction but now have to face up to the fact that some little snot-nose shit might have been half-right all along. — apokrisis
Of course the problem there is that LLMs are trained to be sycophantic. — apokrisis
But if you are making a wrong argument, wouldn't you rather know that this is so, even if it is an LLM that finds the holes? — apokrisis
So as you say, we all can understand the noble ideal – an open contest of ideas within a community of rational inquiry. Doing our own thinking really is the point. — apokrisis
my goal is to 'hack myself to pieces and put myself back together again'. — KantRemember
I think comparing AI to a calculator highlights the limits of AI when using it to “do philosophy”. Calculators do for numbers what AI can do for words. No one wonders if the calculator is a genius at math. But for some reason, we think so little of what people do that we wonder if a fancy word processor might be better at doing philosophy.
Calculators cannot prompt anything. Neither can AI. Calculators will never know the value we call a “sine” is useful when measuring molecules. Why would we think AI would know that “xyz string of words” is useful for anything either? AI doesn’t “know”, does it?
So many unaddressed assumptions. — Fire Ologist
For example, if we don't know why we want to engage in human communication rather than non-human communication, then prohibitions based on that axiom will become opaque. — Leontiskos
I'm starting to think the problem is so obvious that it will inevitably sort itself out once one reaps the fruits of a rule that allows this sort of thing. For example, once folks start merely citing AI output to disagree with all of Jamal's arguments, it may become more obvious that there is a problem at stake. — Leontiskos
But not giving the LLM credit is a problem for the reader as well, because LLMs can include errors, so the reader who doesn’t know they are reading LLM content won’t know they need to check everything about it for accuracy and soundness. — Fire Ologist
AI for philosophy and creative writing is interesting. I’m fine with the idea as a helper, like using a calculator to check your homework, or using it to inspire a start, or to re-approach a roadblock. I think anyone who treats it as anything besides a tool is using it to play a psychological game, for no reason. — Fire Ologist
I'll tell you my secret. Start by finding some question you really want answered. Then start reading around that. Make notes every time some fact or thought strikes you as somehow feeling key to the question you have in mind, though you are just not quite sure how. Then as you start to accumulate a decent collection of these snippets – stumbled across almost randomly as you sample widely – begin to sort the collection into its emerging patterns. — apokrisis
I don't mind either, provided they are transparent about it being a quote and not their own words, and also provided what is quoted is actually an argument and not merely bare assertion, seemingly cited as the voice of authority. — Janus
Should we argue... — Joshs
According to who? — Fire Ologist
OK. So somewhere between black and white, thus not a blanket ban. :up: — apokrisis
Don't you feel that being black and white is usually counter-productive in human interaction, so shouldn't a philosophy forum be up to a nuanced approach? — apokrisis
"No part of a post may be AI-written, and AI references are not permitted" — Leontiskos
Of course the tech bros are stealing all our information to make themselves unreasonably rich and powerful. — apokrisis
The practical issue for TPF is: what is its true value that needs preserving? You say the human interaction. Perhaps there ought to be a thread to define that in better detail. — apokrisis
What if LLMs offered some more sophisticated mechanisms to achieve whatever human interaction goals people might have in mind? — apokrisis
And what if the human element of TPF is mostly its swirling emotions? And when it comes to the thinking, it's mostly the stark differences in thought, rather than the quality of these thoughts, that keep the place lively. — apokrisis
So there seems little danger that posting LLM generated background material in a serious thread is going to outsource any actual thinking. Posts which are emotional or crackpot are surely the least likely to want to provide credible sources for what they say. — apokrisis
I agree in spirit. But let's be practical.
A blanket ban on LLM-generated OPs and entire posts is a no-brainer. — apokrisis
I think that it should be fine to quote LLMs just as you would quote any other source. If you make an argument and some factual, technical or historical point comes up, why not just cite a reasonably impersonal opinion on the matter. If the source is clear, others can call you out on your use of it. — apokrisis
Baden's stance on being King Canute and holding back the tide is both noble and over the top. Banning OPs, punishing those who publish text they didn't add real thought to, and keeping LLM use as transparent as possible would be enough to preserve the human element. — apokrisis
So we are allowed to write entire posts that are purely AI-generated content, or to simply cite AI as evidence that something is true, so long as we are transparent that the content is AI-generated? Such that if someone gives an entire post that is nothing more than a quote from AI, nothing has been violated? — Leontiskos
Why do many people believe the appeal to tradition is some inviolable trump card? — unimportant
We have nothing to lose by going in that direction, and I believe the posters with the most integrity here will respect us for it. — Baden
And if the product is undetectable, our site will at least not look like an AI playground. — Baden
Reflectivity and expressivity, along with intuition and imagination are at the heart of what we do here, and at least my notion of what it means to be human. — Baden
And, while AIs can be a useful tool (like all technology, they are both a toxin and a cure), there is a point at which they become inimical to what TPF is and should be. The line for me is certainly crossed when posters begin to use them to directly write posts and particularly OPs, in full or in part. And this is something it is still currently possible to detect. The fact that it is more work for us mods is unfortunate. But I'm not for throwing in the towel. — Baden
If you had asked the AI to write your reply in full or in part and had not disclosed that, we would be in the area I want to immediately address. — Baden
Arguably the most important part of the job is very often the "calculator" task, the most tedious task. — Jamal
But I'll keep "encourage", since the point is to encourage some uses of LLMs over others. In "We encourage X," the X stands for "using LLMs as assistants for research, brainstorming, and editing," with the obvious emphasis on the "as". — Jamal
Faith translates into Russian as "VERA." — Astorre
It's an interesting discrepancy: Etymologically, Latin "fides" means 'trust', but Slavic "vera" (related to Latin "verus") means 'truth'. — baker
I was surprised by the depiction of what is said to be "Socratic" in your account of the Penner article. — Paine
If I do try to reply, it would be good to know if you have studied Philosophical Fragments as a whole or only portions as references to other arguments. — Paine
The motto from Shakespeare at the start of the book, ‘Better well hanged than ill wed’, can be read as ‘I’d rather be hung on the cross than bed down with fast talkers selling flashy “truth” in a handful of propositions’. A ‘Propositio’ follows the preface, but it is not a ‘proposition to be defended’. It reveals the writer’s lack of self-certainty and direction: ‘The question [that motivates the book] is asked in ignorance by one who does not even know what can have led him to ask it.’ But this book is not a stumbling accident, so the author’s pose as a bungler may be only a pose. Underselling himself shows up brash, self-important writers who know exactly what they’re saying — who trumpet Truth and Themselves for all comers. — Repetition and Philosophical Crumbs, Piety, xvii-xviii
One stubborn perception among philosophers is that there is little of value in the explicitly Christian character of Søren Kierkegaard’s thinking. Those embarrassed by a Kierkegaardian view of Christian faith can be divided roughly into two camps: those who interpret him along irrationalist-existentialist lines as an emotivist or subjectivist, and those who see him as a sort of literary ironist whose goal is to defer endlessly the advancement of any positive philosophical position. The key to both readings of Kierkegaard depends upon viewing him as more a child of Enlightenment than its critic, as one who accepts the basic philosophical account of reason and faith in modernity and remains within it. More to the point, these readings tend to view him through the lens of secular modernity as a kind of hyper- or ultra-modernist, rather than as someone who offers a penetrating analysis of, and corrective to, the basic assumptions of modern secular philosophical culture. In this case, Kierkegaard, with all his talk of subjectivity as truth, inwardness, and passion, the objective uncertainty and absolute paradox of faith, and the teleological suspension of the ethical, along with his emphasis on indirect communication and the use of pseudonyms, is understood merely to perpetuate the modern dualisms between secular and sacred, public and private, object and subject, reason and faith—only as having opted out of the first half of each disjunction in favor of the second. Kierkegaard’s views on faith are seen as giving either too much or too little to secular modernity, and, in any case, Kierkegaard is dubbed a noncognitivist, irrationalist antiphilosopher.
Against this position, I argue that it is precisely the failure to grasp Kierkegaard’s dialectical opposition to secular modernity that results in a distortion of, and failure to appreciate, the overtly Christian character of Kierkegaard’s thought and its resources for Christian theology. Kierkegaard’s critique of reason is at the same time, and even more importantly, a critique of secular modernity. To do full justice to Kierkegaard’s critique of reason, we must also see it as a critique of modernity’s secularity. — Myron Penner, Kierkegaard’s Critique of Secular Reason, 372-3
I apologize for the incredibly belated response! — Bob Ross
I see what you are saying. The question arises: if God is not deploying a concept of group guilt, then why wouldn’t God simply restore that grace for those generations that came after (since they were individually innocent)? — Bob Ross
What do you think? — Bob Ross
annihilation is an act of willing the bad of something (by willing its non-existence)... — Bob Ross
The difference between consulting a secondary source and consulting an LLM is the following:
After locating a secondary source one merely jots down the reference and that’s the end of it. — Joshs
When one locates an argument from an LLM... — Joshs
When one locates an argument from an LLM that one finds valuable, one decides if the argument is something one can either defend in one’s own terms, or should be backed up with a direct quote from either a secondary or primary source. This can then be achieved with a quick prompt to the LLM, and finding a reference for the quote. — Joshs
The fact that proper use of AI leaves one with the choice between incorporating arguments one can easily defend and elaborate on based on one’s own understanding of the subject, and connecting those arguments back to verifiable quotes, means that the danger of falsehood doesn’t come up at all. — Joshs
Again, you have not even attempted to show that the AI's summation was in any way inaccurate. — Banno
The AI is not being appealed to as an authority — Banno
It's noticeable that you have not presented any evidence, one way or the other.
If you think that what the AI said is wrong, then what you ought to do is present evidence, perhaps in the form of peer-reviewed articles that say that humans can reliably recognise AI-generated text.
But that is not what you have chosen to do. Instead, you cast aspersions. — Banno
No, I presented what the AI had to say, for critique. Go ahead and look at the papers it cites... — Banno