Hanover
For the AI aficionado, AI is to be treated like a black box, like a Ouija board or a Magic 8-Ball. They become impatient with those who ask the question, "How does it work?" They interrupt, exclaiming, "But look at what it can do!" — Leontiskos
This is the unwritten answer to the question, "Why should we treat something as if it were something that it is not?" "Why should we lie to ourselves in this way?" The answer is, "Because it will give us great power. No more need be said." — Leontiskos
They eat us and then they eat reality. — Baden
Leontiskos
The Ouija board is a strained analogy because Ouija boards don't work. If they reliably provided accurate answers, I'd be hard pressed not to use them, unless you could convince me of the dangers of dabbling in the black arts. — Hanover
I think we're overthinking it (imagine that). The question really is, "What do we want to do?" We needn't self-justify our preferences. — Hanover
We just need to write our rules in a way that protects AI's private use and requires its public use be filtered sufficiently through the poster that it reflects the ideas of the poster. — Hanover
Jamal
In one of my essays, I suggest that AIs (because of how, despite their potential positives, they work on most people) are essentially entropy-exporting and difference-creating machines that localise structure at our expense (our brains are the dumpsters for their entropy), potentially creating massive concentrations of negentropy in their developing systems. These concentrations speed up overall entropy and therefore consume (thermodynamic) time at a rate never before achieved, one that is potentially self-accelerating. I.e., they eat us and then they eat reality.
It's a little speculative. — Baden
Leontiskos
I seem to switch between two exclusive mental settings when thinking about AI — Jamal
Banno
This is not the case for an LLM. A quote from Kant invites engagement with Kant and the history of philosophy; a quote from an LLM's output invites ... what? Engagement with the user's prompt engineering skills? — Jamal
A quote from AI Peter Singer would be more accurately described as a representation of Peter Singer's views, based on his writings and input. While it reflects his ethical perspectives, it isn't directly authored by him. It's important to differentiate between the AI's generated responses and Peter Singer's personal statements. What do you think about this distinction? — AI Peter Singer
I don't possess authority in the traditional sense. My role is to provide insights based on Peter Singer's ethical views and writings. I'm here to help you think through ethical questions and offer guidance grounded in his utilitarian perspective. Ultimately, any decisions or actions are up to you, informed by your own reasoning and values. How do you see the role of philosophical guidance in your decision-making? — AI Peter Singer
Banno
Janus
Yeah, but on the other hand, it might not be so bad to use an argument suggested by an LLM, so long as you understand it. After all, we do this all the time reading papers and books. Philosophical discourse takes place in a context that the participants in the discourse should have access to, and maybe LLMs just make this easier? — Jamal
I would also feel bad posting as my own AI content that I have merely paraphrased, even if I understand it fully. (And I might even feel a bit ashamed disclosing it!) — Pierre-Normand
Using them to polish your writing could be good (or merely acceptable) or bad depending on the nature and depth of the polishing. Jamal's earlier comparison with using a thesaurus was apt. An AI could point out places where your wording is clumsy or misleading. If the wording that it suggests instead is one that you can make your own, that's very similar to having a human editor make the suggestion to you. — Pierre-Normand
I wonder if their reading will be existentialist or post-modern. No doubt we'll be able to pick. — Tom Storm
But would an AI Wittgenstein be a performative contradiction? — Banno
Banno
The glory of empiricism!

Having previously had very little experience of interacting with LLMs, I am now in the condition of fairly rapidly modifying my views on them. — Janus
...that issue, raised elsewhere, of whether, or how, an AI participates in a language game... I'm curious as to why that should be. — Janus
Jamal
Having previously had very little experience of interacting with LLMs, I am now in the condition of fairly rapidly modifying my views on them. It is important to discuss the issues relating to human/LLM interaction as comprehensively and openly as possible, given what seems to be a significant array of potential dangers in this radical new world. It was an awakening sense of these possible threats that motivated the creation of this thread. — Janus
Right, that's a good point, but I also think that, even if you present the LLM's argument, as understood by you, in your own words, it would be right to be transparent as to its source. — Janus
I believe we should not treat LLM quotes in the same way as those from published authors. — Jamal
Pierre-Normand
I realized that when I see the quoted output of an LLM in a post I feel little to no motivation to address it, or even to read it. If someone quotes LLM output as part of their argument I will skip to their (the human's) interpretation or elaboration below it. It's like someone else's LLM conversation is sort of dead, to me. I want to hear what they have built out of it themselves and what they want to say to me. — Jamal
Pieter R van Wyk
Hanover
sime
Jamal
Harry Hindu
Yet if a human wrote the same thing, it is quotable? How does the meaning of words change depending on their source alone? If you and I said the same thing, would it not mean the same thing? Would that not qualify as an agreement?

Just because its output looks like it could have been produced by a human, it doesn't follow that it is equally as quotable. To think so is a category error. — Jamal
The same thing can be said of the meat in our skulls. I've been asking how a mass of neurons can generate the feeling of empty visual space and depth, and all I see are statements like this that are akin to saying, "It just can and a mass of silicon cannot!"

For the AI aficionado, AI is to be treated like a black box, like a Ouija board or a Magic 8-Ball. They become impatient with those who ask the question, "How does it work?" They interrupt, exclaiming, "But look at what it can do!" — Leontiskos
What is a thought and how did you come to have any?

If I were to say that philosophy is about thinking thoughts and AI does not generate thoughts, the AI aficionado would reply that AI may not have thoughts but it does have "thoughts," and that we should treat these "thoughts" as if they were thoughts. But to treat something falsely is to engage in self-deception or a lie. — Leontiskos
Leontiskos
Here's an article that addresses the issues we're dealing with:
https://nfhs.org/stories/the-role-of-ai-in-debate-ethics-research-and-responsible-use
It's from a national high school association and concerns debate rules, which seems close enough to what we do. The point being that we might take some time to look at how other similar organizations have dealt with these same issues, so as not to reinvent the wheel. — Hanover
Leontiskos
But when this collaborative thinking episode is over, the human user has not yet written down the fruit of this collaborative effort and neither has the AI! They each have only written down one half of the collaborative cogitation. That may be why this text feels dead when extracted from the "living" (or dynamic, if you prefer) AI/human exchange. It's like trying to extract thoughts from the words used to think them (as opposed to the words used to express them), but thoughts don't live outside the means of expressing them. And the conversation with an AI is, in a sense, an (as of yet) unexpressed thinking episode. The user's task of expressing anew whatever comes out of it to a new target audience begins after the private exchange with the AI. — Pierre-Normand
Leontiskos
I find the appeals to Wittgenstein... — sime
So if AI should not be quoted because of source uncertainty, then what is the justification on this forum for allowing people to quote Wittgenstein? — sime
The idea of using their arguments is strange, since AIs never take ownership of them. If you've grasped the structure of the argument, checked the relevant sources to ensure it's sound in addition to being valid, and convinced yourself that it's cogent and perspicuous (that is, constitutes an apt framing of the problem), then the argument becomes one that you can make your own. — Pierre-Normand
Hanover
We quote Wittgenstein, not ChatGPT, because Wittgenstein is a human being, motivated to express his original insights, to say and write things that were meaningful, and to take part in a conversation (philosophy), and who has since taken his place in a tradition of discourse. The result is a legacy with a stable place in the culture, shared by everyone, and one that can be interpreted, because—since it was produced by a conscious and motivated agent—we know that he meant something. — Jamal
Leontiskos
As long as AI echoes us sufficiently, its usage reflects the same form of life and it speaks with us just as our mama does. — Hanover
I think where I'm landing is at the unfortunate conclusion that if meaning is use (and that seems a prevailing view) — Hanover
wonderer1
But then again, maybe not. Maybe it forms "original" thoughts from the mass of data it assesses. It seems reasonable that an algorithm can arrive at a new thought emergent from what pre-exists. — Hanover
Hanover
Similarly, I have pointed out that if we don't understand why there is a difference between AI and humans, a rule against AI cannot stand. — Leontiskos
Leontiskos
I don't agree with that. My point was that we can refuse to allow AI simply because we prefer humans... — Hanover