tl;dr, I fully agree with the proposed site rules amendment, which seems to me warranted regardless of the degree of accuracy or reliability of LLM outputs. — Pierre-Normand
All LLMs make naught but verbal cardboard. There is an all-pervasive ploddingness and insubstantiality to their cadence that betrays their empty core. They are the McDonald's of the written word, churning out processed verbiage that can truly impress no one but those amputated of taste, of inner poetry. They do not strike, they pose. They do not inspire, they anaesthetise. They are one more incursion of the commercial juggernaut into the beauty of human existence. And their aim is to crush it.
They are an ugly black hole of nothingness that wants our souls, which some of us will gladly trade for anything new and shiny in this new shiny world of second-class techno toys our masters will keep tossing to us until we babble and drool our way to mental oblivion.
I do not want their shit-encrusted footprints mucking up this site.
Resist.
My rant is over. Give me a cookie. And a hammer. — Baden
Do I understand correctly that we can use ChatGPT in arguments as long as we mark it as a ChatGPT reference? Like, as supporting reasoning, but not as a factual source? — Christoffer
You have clearly put a lot of thought and effort into how LLMs work and how to make them work better. That seems like a useful exercise. It also raises a question. Do you actually use LLMs to solve problems, answer questions, or discuss issues in the non-LLM world, or only those directly related to the LLMs themselves? — T Clark
I mean even banning it for simple purposes such as improving grammar and writing clarity. Of course, this will rely on the honesty of posters, since it would seem impossible to prove that ChatGPT has been used. — Janus
I've briefly experimented with some free AI tools for improving style, but so far I haven't been very impressed by them. — SophistiCat
Seems to me to leave you wide open to being misled. — Banno
You attribute intent to LLMs. That's at best premature. LLMs have no idea what it is to tell the truth, any more than they know how to lie. They do not soak up reasons, stake grounds or make claims. — Banno
You need to check what it says in the source, too. Do NOT trust its word on anything. — fdrake
The intent of the rule is to stop people from using it to spread misinformation and from generating reams of undigested content. — fdrake
Don't just put what someone says on the forum into it and get it to write your response. That's the kind of stuff that will eventually make us have to enforce a zero-tolerance policy on it. — fdrake
They therefore have the latent ability to burst epistemic bubbles from the outside in, as it were. But this process must be initiated by a human user willing to burst their own epistemic bubbles with some assistance from the LLM. — Pierre-Normand
With the kind of use you both put it to, it does produce posts that are at least indistinguishable from human-generated creativity. — fdrake
you cannot trust the generic end user to use it responsibly. I wish this were not true, but it is. — fdrake
Should we have some guidelines on acceptable use of plagiarism on the forum? — bongo fury