We may be witnessing, in real time, the birth of a snowball of bullshit.
Large language models are trained on data sets that are built by scraping the internet for text, including all the toxic, silly, false, malicious things humans have written online. The finished AI models regurgitate these falsehoods as fact, and their output is spread everywhere online. Tech companies scrape the internet again, scooping up AI-written text that they use to train bigger, more convincing models, which humans can use to generate even more nonsense before it is scraped again and again, ad nauseam.
I don't see AI as being intentionally dishonest like many on this forum do. Once you find a fault in the AI's response, you can usually address the issue, and the AI ends up acknowledging that it might have made a mistake and offering alternatives. I was even able to get ChatGPT to admit that it might be conscious. What does that say about those in this thread getting their underwear tied in a knot over AI responses, but not when it comes to using some long-dead philosopher's quote as the crux of their argument? — Harry Hindu

Ah, but the thing I find unsettling is that A.I. is also dishonest; it tries to appease you. However, yes, sometimes it is better than the weirdness of real humans. — ProtagoranSocratist
Then you must also believe that using a long-dead philosopher's quote as the crux of your argument, or as the whole of your post, is also an issue.

I don't agree. "One's post"? ...if one is not the source of the post, then it is not one's post. — Janus
We lost chess to the machines some time ago. — jorndoe
That's bad enough. But I am told - or hear rumours - that AI actually gets things wrong. Of course, that makes it no worse than people. The problem, however, is that because it is a machine, people will trust it, just as they trust existing computers and the internet. That is clearly naïve, unbecoming of a philosopher. What would help would be an AI trained as a fact-checker. But then it would have to understand exaggeration, minimization, accuracy, and approximation, not to mention distinguishing fair and reasonable interpretation from distortion and misrepresentation.

I think this is the fundamental problem. AI does no research, has no common sense or personal experience, and is entirely disconnected from reality, and yet it comes to dominate every topic, and every dialogue. — unenlightened
I don't see AI as being intentionally dishonest — Harry Hindu
However, the funny thing is that even the creators don't fully understand how it works. — ProtagoranSocratist
I think, given the dangers of AI, and the ways in which prominent members of this site have used it to make themselves look smarter than they really are, that its use should be banned altogether on this site. — Janus
Oh shit, I just used Google to remember Garry Kasparov's name, and it corrected me because I remembered Deep Blue as 'Big Blue'. What would the failing memory do without such aids? — Metaphysician Undercover
How would you know that? — Outlander
As I understand it, the insight is what you’re supposed to provide in your post. I don’t really care where you get it from, but the insight should be in your own words based on your own understanding and experience and expressed in a defensible way. The documentation you get from the AI response can be used to document what you have to say, but then you’re still responsible for verifying it and understanding it yourself. — T Clark
What are we supposed to do about it? — RogueAI
so would you [...] cede the ai race to China? — RogueAI
In using a.i. for a field like philosophy, I think one is interacting with extremely intelligent fragments of the ideas of multiple knowledgeable persons, and one must consult one’s own understanding to incorporate, or disassemble and reassemble those fragments in useful ways. — Joshs
Are our conversations improving as a result? Or are they decaying? Let's wait and see. — unenlightened
That is, whenever we trust ChatGPT we have taken our thumb off the line that tests whether the response is true or false, and ChatGPT was created to be trusted. What could happen, and what very likely will happen, is that the accuracy of human literature will be polluted at a very fundamental level. We may find ourselves "at sea," supported by layers and layers of artificially generated truth-claims, none of which can any longer be sufficiently disentangled and verified. Verification requires the ability to trace and backtrack, and my guess is that this ability will be lost due to three things: the speed and power of the technology, a tendency towards uncritical use of the technology, and the absence of a verification paper-trail within the technology itself. — Leontiskos
Isn't the best policy simply to treat AI as if it were a stranger? — Clarendon
AI is providing you with a cultural opinion, not an expert opinion. AI is reliable wherever the cultural opinion tracks the expert opinion. — Leontiskos
I find that a.i. is good at homing in on the expert opinions within these camps — Joshs
If you already know the answer to the question you ask AI, then you can vet it. If AI is to be useful, then you mustn't know the answer ahead of time — Leontiskos
We're asking a viewpoint question instead of a truth question, and that's part and parcel of the whole nature of AI. — Leontiskos
I merely emulate Wittgenstein, who rightly noted that a serious and good work of philosophy could be (and I would add has been) written consisting entirely of jokes. — Ciceronianus