In my view, information is everywhere you care to look — Harry Hindu
Obviously, it's not "the same thing" then. AI can do the same thing ... when prompted — Harry Hindu
More productive? It's not black and white overall because I agree that AIs can be used positively, and they've been very helpful to me, especially in long philosophical back-and-forths that aid in clarifying certain ideas, etc. That has made me more productive — Baden
But this is what conversation is for. I think it's appealing to put oneself out there, understanding that one may have vulnerabilities, gaps, etc. That's when one can learn best. By that criterion, even philosophically, I'm not banning LLMs insofar as it fits that goal. And really I don't see what you've said as a harmful use --
i.e. checking your own arguments, etc. — Moliere
On the other hand, in the specific area of plagiarism, I've seen at my own place of work what a "relaxed" attitude to the deployment of this technology in academia produces, and that is the destruction of standards. — Baden
When will the day come when we find out that @Leontiskos, with his respectable 5,000+ posts, is actually a smart bot / AI? Now we can be confident you aren't, because two years ago AI wasn't so good...
Yes, the fear of thinking that you are engaged with real people interested in philosophy, but actually, you're only engaging with computers and all your great ideas vanish into the dead emptiness of Turing machines just computing on and on. That would be a dismal future. — ssu
A huge aspect of this is the nature of appeals to authority, and given that TPF has an anti-religious bent, many of the members have not thought very deeply on the nature of appeals to authority (despite the fact that they occur quite often when it comes to SEP, IEP, Wittgenstein, etc.).
Whether or not the LLM is a legitimate authority and is trustworthy is at the root of many of these differences. It is the question of whether any given LLM-citation is an organic argument or an argument from authority, and also of whether the latter case is illegitimate. — Leontiskos
And likely written by Baden without AI, because "backrground" was misspelled. — ssu
No. I didn't. When has philosophy ever provided an answer to any of our questions? Philosophy piggy-backs on the discoveries of science. It is only when science and technology progress that philosophy progresses (with AI being an example of how it brought new life to discussions about mind and body). — Harry Hindu
What was being argued was that the research required to put together an idea is tedious and outsourceable, and that what one should do is outsource that research, take the pre-made idea from the LLM-assistant, and "get on with the task of developing the idea to see if it works." Maybe try responding to that? — Leontiskos
Appeal to authority is fine when the context of discussion includes a specialized discipline. Philosophy is not (or in my view should not be) a specialized discipline. To make that clear, I don't mean that philosophers do not have knowledge about philosophy itself that the average person does not—it seems clear that they do. I'm referring specifically to discussions that would qualify as philosophical. — Janus
Arguments from authority have an inherently limited place in philosophy.
...
* An important point that I leave to the side is the fact that an LLM is not an authority in the same way a human being is an authority — Leontiskos
The first thing is that I have been surprised at how reasonable an answer you get. — apokrisis
So I am only pointing to how LLMs could improve standards if they became part of the intellectual jousting. A new habit with a low cost of effort. How folk would actually make use of this possibility is another matter. — apokrisis
Now given that you understand that LLMs use fake reasoning (which I will call "material reasoning"), you might respond by asking what happens if the material reasoning shows one to be wrong. — Leontiskos
I wasn’t actually thinking about LLMs being trusted to analyse arguments, as they are clearly bad at that. — apokrisis
Again, my point is that LLMs could have advantages if used in good faith. And given that think tanks and actual philosophy departments are going to have to figure out how to factor LLMs into their practice, it seems Luddite not to be doing the same on TPF, even if their bad-faith use is almost to be expected. — apokrisis
I would sort of prefer a philosophy forum where everyone is involved in a lot of falsehood but is nevertheless involved in genuine human reasoning, as opposed to a philosophy forum where there is a great deal of factual accuracy but there isn't much genuine human reasoning occurring. — Leontiskos
We hold the author to account for their post. ... This is not epistemic or ethical reasoning so much as aesthetic. — Banno