In my view, information is everywhere you care to look — Harry Hindu
Obviously, it's not "the same thing" then. AI can do the same thing ... when prompted — Harry Hindu
More productive? It's not black and white overall, because I agree that AIs can be used positively, and they've been very helpful to me, especially in long philosophical back-and-forths that aid in clarifying certain ideas etc. That has made me more productive — Baden
But this is what conversation is for. I think it's appealing to put oneself out there, understanding that one may have possible vulnerabilities, gaps, etc. That's when one can learn best. By that criterion, even philosophically, I'm not banning LLMs insofar as it fits that goal. And really I don't see what you've said as a harmful use --
i.e. checking your own arguments, etc. — Moliere
On the other hand, on the specific area of plagiarism, I've seen at my own place of work what a "relaxed" attitude to the deployment of this technology in academia produces, and that is the destruction of standards. — Baden
When is the day when we find out that @Leontiskos, with his respectable 5,000+ posts, is actually a smart bot / AI? Now we can be confident you aren't, because two years ago AI wasn't so good...
Yes, the fear of thinking that you are engaged with real people interested in philosophy, but actually, you're only engaging with computers and all your great ideas vanish into the dead emptiness of Turing machines just computing on and on. That would be a dismal future. — ssu
A huge aspect of this is the nature of appeals to authority, and given that TPF has an anti-religious bent, many of the members have not thought very deeply on the nature of appeals to authority (despite the fact that they occur quite often when it comes to SEP, IEP, Wittgenstein, etc.).
Whether or not the LLM is a legitimate authority and is trustworthy is at the root of many of these differences. It is the question of whether any given LLM-citation is an organic argument or an argument from authority, and also of whether the latter case is illegitimate. — Leontiskos
And likely written by Baden without AI, because "backrground" was misspelled. — ssu
No. I didn't. When has philosophy ever provided an answer to any of our questions? Philosophy piggy-backs on the discoveries of science. It is only when science and technology progress that philosophy progresses (with AI being an example of how it brought new life to discussions about mind and body). — Harry Hindu
What was being argued was that the research required to put together an idea is tedious and outsourceable, and that what one should do is outsource that research, take the pre-made idea from the LLM-assistant, and "get on with the task of developing the idea to see if it works." Maybe try responding to that? — Leontiskos
Appeal to authority is fine when the context of discussion includes a specialized discipline. Philosophy is not (or in my view should not be) a specialized discipline. To make that clear, I don't mean that philosophers do not have knowledge about philosophy itself that the average person does not—it seems clear that they do. I'm referring specifically to discussions that would qualify as philosophical. — Janus
Arguments from authority have an inherently limited place in philosophy.
...
* An important point that I leave to the side is the fact that an LLM is not an authority in the same way a human being is an authority — Leontiskos
First thing is that I have been surprised at how reasonable an answer you get. — apokrisis
So I am only pointing to how LLMs could improve standards if they became part of the intellectual jousting. A new habit with a low cost of effort. How folk would actually make use of this possibility is another matter. — apokrisis
Now given that you understand that LLMs use fake reasoning (which I will call "material reasoning"), you might respond by asking what happens if the material reasoning shows one to be wrong. — Leontiskos
I wasn’t actually thinking about LLMs being trusted to analyse arguments as they are clearly bad at that. — apokrisis
Again my point is that LLMs could have advantages if used in good faith. And given that think tanks and actual philosophy departments are going to have to figure out how to factor LLMs into their practice, it seems Luddite not to be doing the same on TPF, even if their bad-faith use is almost to be expected. — apokrisis
I would sort of prefer a philosophy forum where everyone is involved in a lot of falsehood but is nevertheless involved in genuine human reasoning, as opposed to a philosophy forum where there is a great deal of factual accuracy but there isn't much genuine human reasoning occurring. — Leontiskos
Ought one reject an otherwise excellent OP because it is AI generated?
Well, yes. Yet we should be clear as to why we take this stance. — Banno
We hold the author to account for their post. We cannot do that with an AI. We interpret the post by giving consideration to the intentional context of the writer, to their needs and wants, to their background, aims and norms. We cannot attribute such things to an AI.
This is not epistemic or ethical reasoning so much as aesthetic. — Banno
The LLM is not a transparent source which can be queried by one's interlocutor... Put more starkly, one cannot question the authority of an LLM on some given topic with some given output, because the LLM doesn't really exist in that way. It is not a person that one can push back against, or disagree with. Those who think it is a person who can be disagreed with have become lost in the fictional language games.
Indeed, this is why the person using the LLM ought to take the responsibility of the arguments upon themselves, for only in that case is there someone who can be disagreed with or agreed with. — Leontiskos
The reason TPF is not a place where you argue with LLMs is because there are no places where you argue with LLMs. When someone gets in an argument with an LLM they have become caught up in a fictional reality. What is occurring is not an actual argument. — Leontiskos
But your deepest arguments are the ones you are willing to have against yourself. — apokrisis
Anyone serious about intellectual inquiry is going to be making use of LLMs to deepen their own conversation with themselves. — apokrisis
It would seem to me that this is still a time for experimenting rather than trying to ring fence the site. — apokrisis
If all of your posts are LLM-generated, what's the point? I agree, but my point is a bit different. Suppose all my posts are LLM-generated content, and this is undisclosed. This is against the forum rules as they currently stand. But now suppose that all my posts are LLM-generated content, and this is disclosed. Thus for every one of my LLM-generated posts, I enclose it in quote brackets and prepend the clause, "I agree with what the LLM says here:..." This is not against the forum rules as they are currently being interpreted. That seems odd to me, and it makes me think that the mere matter of disclosure doesn't get to the heart of the issue. — Leontiskos
You are the one setting the LLM into motion for your own purposes. — Leontiskos
People who are interacting with LLMs know that they are not interacting with a person, and as a result they go to an internet forum and say, "Hey, my LLM just said this! Isn't this interesting? What do you guys think?," followed by a giant wall of AI-generated text. — Leontiskos
While I don't necessarily agree, I don't think there is much danger in making mistakes with the rules. — Leontiskos
How would you regulate LLM use on a forum such as this? — Leontiskos
How does TPF respond to this new technology of LLM thought assistance and recursive inquiry? Does it aim to get sillier or smarter? More a social club/long running soap opera or more of an open university for all comers? — apokrisis
Well, yeah. The problem isn't AI. It is using AI, or any source, as your only source. Asking AI for information is a far too easy solution. It pops back in a few seconds -- not with a list of links to look at, but a complete answer in text and more. Seems convenient, but it rapidly undermines one's willingness to look for answers oneself -- and to use search engines to find sources. — BC
Most of us are not aware of other members' backgrounds, aims and norms. These things are irrelevant to the discussion. The focus on the source rather than the content is a known logical fallacy - the genetic fallacy. We hold the author to account for their post. We cannot do that with an AI. We interpret the post by giving consideration to the intentional context of the writer, to their needs and wants, to their background, aims and norms. We cannot attribute such things to an AI. — Banno
"Bad" and "poor" were your words, not mine. All I am saying is that any progress in philosophy is dependent upon progress in science and technology. The last sentence sounds like we agree except for your injection of "bad" and "poor" into it.So you think philosophy is always bad or poor, and therefore those words would be redundant? Philosophy is not entirely reliant on science, although I agree that a philosophy which does not take science into account would be poor or bad. — Janus