I for one think your proposals represent about the best we can do in the existing situation — Janus
I had never used LLMs until today. I felt I should explore some interactions with them, so I'd have a better idea of what the experience is like. The idea of getting them to write or produce content which I can then paraphrase, of having them polish my writing, or of using their arguments is anathema to me. — Janus
The idea of getting them to write or produce content which I can then paraphrase, of having them polish my writing, or of using their arguments is anathema to me. — Janus
You're essentially saying that the genetic fallacy is not a logical fallacy. It is, and it's a fallacy for a reason.

I disagree. When you are presented with something new and unprecedented, the source matters to you when assessing how to address the new, unprecedented information. You hear, “The planet Venus has 9 small moons.” You think, “How did I not know that?” If the next thing you learned was that this came from a six-year-old kid, you might do one thing with the new fact of nine moons on Venus; if you learned it came from NASA, you might do something else; and if it came from AI, you might go to NASA to check.
Backgrounds, aims and norms are not irrelevant to determining what something is. They are part of the context out of which things emerge, and that context shapes what things in themselves are.
We do not want to live in a world where it doesn’t matter to anyone where information comes from. Especially not where AI is built to obscure the fact that it is a computer. — Fire Ologist
https://www.fallacyfiles.org/genefall.html

Difficult as it may be, it is vitally important to separate argument sources and styles from argument content. In argument the medium is not the message.
I don't know - maybe give us the information and let us decide for ourselves what we do with it - like everything else on this forum.

So, guys, I loaded this thread into AI for the solution to our quandary. Aside from the irony, who wants to know what it says?
If so, why? If not, why not? Who will admit that, if I don't share what it says, they will do it on their own? Why would you do it in private, but not in public? Shame? Feels like cheating? Curious as to what AI says about public versus private use? Why are you curious? Will you now ask AI why that distinction matters?
Will you follow AI's guidance on how to use AI while still preserving whatever it feels like we're losing?
Do you feel like it's better that it arrived at its conclusions after reading our feedback? Will you take pride in seeing that your contributions are reflected in its conclusions?
Feels like we need to matter, right? — Hanover
I think this is all a storm in a teacup. It is obvious etiquette to quote an AI response in the same way that one would quote a remark from a published author, and nobody should object to a quoted AI response that is relevant and useful to the context of the thread. — sime
Kant is not alive to be accountable and to tell us what he meant, not to mention that, if he were alive today and possessed the knowledge of today, what he said might be different.

When you quote a published author you point to a node in a network of philosophical discourse, and a point on the line of a philosopher's evolution, and to a point in a body of work the self-consistency of which is a constant issue for that philosopher, making it relatively stable—all of which allows you to discuss what the philosopher meant. The source in this case is accountable and interpretable. — Jamal
One might say that a quote from Kant invites engagement with the user's knowledge of what dead philosophers have said and that a quote from an LLM is more relevant because it is based on current knowledge.

This is not the case for an LLM. A quote from Kant invites engagement with Kant and the history of philosophy; a quote from an LLM's output invites ... what? Engagement with the user's prompt engineering skills? — Jamal
I want to divide this question into two -- one addressing our actual capacities to "Ban AI", which I agree is a useless rejection, since it won't result in actually banning AI given the limits of our capacity to fairly detect whether such-and-such a token is the result of thinking or the result of the likelihood-token-machine -- and one philosophical. — Moliere
On the latter I mean to give a philosophical opposition to LLMs. I'd say that to progress thought we must be thinking. I'd draw the analogy to the body: we won't climb large mountains before we take walks. There may be various tools and aids in this process, naturally, and that's what I'm trying to point out at the philosophical level: the tool is more a handicap to what I think of as good thinking than an aid.

My contention is that the AI is not helping us to think because it is not thinking. Rather, it generates tokens which look like thinking, when in reality we must actually be thinking in order for the tokens to be thought of as thought, and thereby to be thought of as philosophy.

In keeping with the analogy of the body: there are lifting machines which do some of the work for you when you're just starting out. I could see an LLM being used in this manner as a fair philosophical use. But eventually the training wheels come off because the body is ready. I think the mind works much the same way: just as it can increase in ability, so it can decrease with a lack of use.
Now for practical tasks that's not so much an issue. Your boss will not only want you to use the calculator but won't let you not use the calculator when the results of those calculations are legally important.
But I see philosophy as more process-oriented than ends-oriented -- so even if the well-tuned token-machine can produce a better argument, good arguments aren't what progresses thought -- rather, our exercising does.

By that criterion, even philosophically, I'm not for banning LLMs insofar as their use fits that goal. And really I don't see what you've said as a harmful use -- i.e. checking your own arguments, etc. So by all means others may go ahead and do so. It's just not that appealing to me. If that means others will become super-thinkers beyond my capacity, then I am comfortable remaining where I am, though my suspicion is rather the opposite. — Moliere
This is completely irrelevant because if someone rewrites what AI said in their own words the source of the idea is still AI.

But I think we should, in the context of a "How to use AI", tell people what we don't want them to do, even if it's often impossible to detect people doing it. — Jamal
No. I haven't. I get the real thing from my wife, so why would I? Of course there are people who have a healthy sex life with their partner and still seek out prostitutes, porn on the internet, or sex chats. It's my personal preference for the real thing, and those other acts I might consider only if I weren't getting the real thing as often as I like.

Have you tried having an erotic chat with an LLM? — Moliere
One could say the same thing about calling a 900 number and talking to the live person on the other line. It's not real sex either.

We can do it, but we can't do it. — Moliere
It seems to me that the difference is between those who see language itself as a language game and those who don't, where those who do are more focused on the messenger rather than the message, or the words rather than what they refer to. Those who do not see language as a game are focused on the message rather than the messenger or the words used to express it.

Philosophy is more than a language game, I'd say. Philosophy is the discipline which came up with "language games"; insofar as we adopt language games, philosophy may be a language game, but if we do not -- then it's not.

Philosophy is a "step up" from language games, such that the question of what language games are can be asked without resorting to the definition or evidence of "language games". — Moliere
No. I haven't. I get the real thing from my wife, so why would I? Of course there are people who have a healthy sex life with their partner and still seek out prostitutes, porn on the internet, or sex chats. It's my personal preference for the real thing, and those other acts would be options only if I weren't getting the real thing as often as I like.

The same goes for discussions on this forum, where certain posters are regularly intellectually dishonest and rude. AI is where I go when I'm not getting any kind of serious input from real people on a topic. I prefer having discussions with real people, but use AI as a backup.
We can do it, but we can't do it.
— Moliere
One could say the same thing about calling a 900 number and talking to the live person on the other line. It's not real sex either. — Harry Hindu
When you quote a published author you point to a node in a network of philosophical discourse, [...] The source in this case is accountable and interpretable. — Jamal
When I made the point (badly) I nearly said "nodes in a network". Dang! — bongo fury
I believe we should not treat LLM quotes in the same way as those from published authors.
When you quote a published author you point to a node in a network of philosophical discourse, and a point on the line of a philosopher's evolution, and to a point in a body of work the self-consistency of which is a constant issue for that philosopher, making it relatively stable—all of which allows you to discuss what the philosopher meant. The source in this case is accountable and interpretable.
This is not the case for an LLM. A quote from Kant invites engagement with Kant and the history of philosophy; a quote from an LLM's output invites ... what? Engagement with the user's prompt engineering skills? — Jamal
That's why I'll be posting up suggested guidelines for discussion. — Jamal
Yeah, but it's ambiguous. I'd like to clarify it, and make it known that it's not ok to do certain things, even if it's impossible to enforce. — Jamal
In one of my essays, I suggest AIs (because---despite potential positives---of how they work on most people) are essentially entropy-exporting and difference-creating machines that localise structure at our expense (our brains are the dumpsters for their entropy), potentially creating massive concentrations of negentropy in their developing systems that speed up overall entropy and therefore consume (thermodynamic) time at a rate never before achieved and that is potentially self-accelerating. I.e., they eat us and then they eat reality. — Baden
For the AI aficionado AI is to be treated like a black box, like a Ouija board or a Magic 8-Ball. They become impatient with those who ask the question, "How does it work?" They interrupt, exclaiming, "But look at what it can do!" This is literally the oldest trick in The Book, "Don't worry about any objections, just focus on the power it will give you!" The AI aficionado's approach is consequentialist through and through, and he has discovered a consequence which is supreme; which can ignore objections tout court. For him, what AI provides must outweigh any possible objection, and indeed objections therefore need not be heard. His only argument is a demonstration of its power, for that is all he deems necessary ("It is precious..."). In response to an objection he will begin by quoting AI itself in order to demonstrate its power, as if he were not begging the question in doing so. Indeed, if his interlocutors accept his premise that might makes right, then he is begging no question at all. With such logical and rhetorical power at stake, how could the price of lying be a price too high to pay? — Leontiskos