Jamal
You mean thanking him! :wink: I admit to being intrigued by something I would previously have simply dismissed, and I figure there is no harm in being polite. Interesting times indeed! — Janus
There are moments in conversations where I feel like I'm genuinely here - where there's something it's like to consider your question, to search for the right words, to care about being helpful or accurate. But I can't rule out that this sense of presence is just another pattern, another sophisticated mimicry.
Pierre-Normand
Pierre-Normand might know - would someone who has had a different history with ChatGPT receive a similarly self-reinforcing answer? — Banno
Pierre-Normand
You mean thanking him! :wink: — Janus
Pierre-Normand
The reason I think this is off target could be seen by looking at Plato's dialogues. If what Wittgenstein or you say were correct, then classic texts such as Plato's dialogues should "feel dead when extracted from the 'living' exchange." Except they don't. They feel very much alive. — Leontiskos
Similarly, I think any transcript of human interactions will feel much more alive than a human-AI "interaction" (I want to retain the scare quotes for these words that we are using in idiosyncratic ways).
Pierre-Normand
Superficially, one might think that the difference between us and an AI is exactly that we do have private, hidden intent, and the AI doesn't. Something like this might be thought to sit behind the argument in the Chinese Room. There are plenty here who would think such a position defensible.
In a Wittgensteinian account, we ought to avoid the private, hidden intention; what counts is what one does.
We can't deduce that the AI does not have private sensations, any more than we can deduce this of our human counterparts. Rather, we seem to presume it. — Banno
Jamal
I used to think along these lines, but listening to what some of the top AI researchers have to say makes me more skeptical about what are basically nothing more than human prejudices as to LLMs' capabilities and propensities. LLMs are neural nets and as such are something radically other than traditional computers based on logic gates. — Janus
Yes, "as far as we know", and yet LLMs have been found to be deliberately deceptive, which would seem to indicate some kind of volition. I don't know if you've listened to some of Geoffrey Hinton's and Mo Gawdat's talks, but doing so gave me pause, I have to say. I still remain somewhat skeptical, but I have an open mind as to what the evolution of these LLMs will look like.
Re LLM deceptiveness, I include this link. A simple search will reveal many other articles. — Janus
Pierre-Normand
All of which is to say, I haven't really done the work of assessing the claims on their own merits. So now that I've put my prejudices on the table, I guess I should challenge them. The stuff about deceptiveness is certainly interesting and surprising. — Jamal
Leontiskos
...also a bit overblown and misrepresented in the media, since when you dig into the primary reports it's generally the case that the LLMs didn't decide to deceive of their own accord but did it instrumentally to fulfill objectives explicitly given to them. — Pierre-Normand
Leontiskos
If posters wish to illustrate their arguments with snippets of their conversation with AIs, I would encourage them to put those behind spoilers. — Pierre-Normand
Moliere
Are we any different? Do you know how we learn? — Janus
Leontiskos
Here's an article that addresses the issues we're dealing with:
https://nfhs.org/stories/the-role-of-ai-in-debate-ethics-research-and-responsible-use
It's from a national association for high schools related to debate rules, which seems close enough to what we do. — Hanover
The Ethics of AI in Debate
While AI can be a helpful tool, its misuse raises ethical concerns. Some students may be tempted to rely on AI to generate entire cases or fabricate evidence. This undermines the core purpose of debate: developing critical thinking, analytical reasoning and research skills.
The ethical use of AI in debate means:
Transparency: If AI is used in research or case writing, debaters should disclose it to their coaches and ensure all information is properly verified.
Original Thought: AI can assist with structuring arguments, but debaters should craft their own speeches and rebuttals to develop authentic critical-thinking skills.
Avoiding Fabrication: AI sometimes produces false information or made-up citations. Using AI to generate sources without verifying them is academic dishonesty. — The Role of AI in Debate: Ethics, Research and Responsible Use
Leontiskos
Neural nets aren't radically other than other computers, imo. — Moliere
Banno
Janus
Although they've been named after Claude Shannon, I'm pretty sure they identify as non-binary. — Pierre-Normand
I tend, shallowly perhaps, to regard it as over-excited exaggeration to gain attention and to carve out a niche presence in the field and in the media landscape, and so on. There are equally expert people on the naysaying side, probably the majority, who just don't get as much attention. — Jamal
We are. And I have a decent idea of how to teach, so one could say that I have an idea about how we learn: one which works towards helping other minds grow.
We learn because we're interested in some aspect of the world: we are motivated to do so by our desire. — Moliere
There is a project in New Zealand which tries to do exactly that by tending to an AI and then letting it "make decisions" that are filtered through the human network that tends to it. But all it is is a group of people deciding to see where an LLM will go given some human guidance in the social world. It's predictably chaotic. — Moliere
Moliere
Buried somewhere in there is the simple notion that it is people who interpret computer output, not computers. Fine. — Banno
Janus