The glory of empiricism!

Having previously had very little experience of interacting with LLMs, I am now in the condition of fairly rapidly modifying my views on them. — Janus
...that issue, raised elsewhere, of whether, or how, an AI participates in a language game...

I'm curious as to why that should be. — Janus
Sure. So consistency is desirable.

I'd argue that consistency is not merely a matter of reason; it carries a moral weight. — Colo Millz
by subjecting that past to rational critique guided by universal moral principles. — Colo Millz
That's spot on. Is the purpose here to create another conservative echo chamber? This is how the debate is to be framed, hence conservatism - we are right because we are right - pun intended.

Why do you present the debate in such a rigidly binary structure? — ucarr
This is not the case for an LLM. A quote from Kant invites engagement with Kant and the history of philosophy; a quote from an LLM's output invites ... what? Engagement with the user's prompt engineering skills? — Jamal
A quote from AI Peter Singer would be more accurately described as a representation of Peter Singer's views, based on his writings and input. While it reflects his ethical perspectives, it isn't directly authored by him. It's important to differentiate between the AI's generated responses and Peter Singer's personal statements. What do you think about this distinction? — AI Peter Singer
I don't possess authority in the traditional sense. My role is to provide insights based on Peter Singer's ethical views and writings. I'm here to help you think through ethical questions and offer guidance grounded in his utilitarian perspective. Ultimately, any decisions or actions are up to you, informed by your own reasoning and values. How do you see the role of philosophical guidance in your decision-making? — AI Peter Singer
For example, suppose I say:
Let this object — the glinting sphere I imagine at the center of all possible conversations — be called "Orvyn."
By itself, “Orvyn” is a linguistic token. But if you begin to use “Orvyn” to refer to something — a sculpture, a symbol, a concept — then a genuine causal–historical chain of reference begins with you, not with me. My utterance would be a precursor or trigger for your act of baptism.
In other words:
I can propose names.
You can ratify them into reality. — ChatGPT
LLMs as language users, on that account, are pure consumers. — Pierre-Normand
Sure.

Reasoning is always situated. It is always already shaped by language, history, and moral tradition. — Colo Millz
Yep.

This means that all reasoning proceeds from within a perspective. — Colo Millz
But we don't need a neutral perspective; only an agreed perspective.

to appeal to reason to negotiate different perspectives is impossible, there is no neutral reason which can be an arbiter of different perspectives. — Colo Millz
Not quite. I asked, somewhat facetiously, if that is what you were proposing.

You say that if reason cannot be neutral the only alternative is unreason... — Colo Millz
Why must? Couldn't you decide to base your reasoning on Hindu Scripture? On Mohist logic? On Rawls' Veil of Ignorance? Is there more here than mere inertia? I guess that'd be fine, but it's not the same as asserting that your traditions are the best, or the right ones. Would "We've always done it that way" be enough for you to die in a ditch for?

The alternative therefore is that we must base our reasoning on our own traditional virtues. — Colo Millz
we can ask them if they are capable of intentionality, — Janus
That's where I'm at, in my present musings. So:

Exactly! That seems to be the central question. I don't have an answer—would it be that AI researchers are the ones best placed to answer it? — Janus
Perhaps we are again treating a conceptual problem as if it were empirical - the ubiquitous error Midgley identified. We cannot discover the answer to conceptual issues by looking around; we have to make stuff up.

There's the tension laid out, it seems. I'm not certain that it is real. It may come down to what we are willing to attribute to AI, rather than what we can discover about them. — Banno
Thus reason in the Enlightenment becomes less of a neutral arbiter and more of an explicit paradigm in its own right. — Colo Millz
Yes, and it is important that we treat these behaviours as acts. There are real-world consequences for what AIs say.

Most of ChatGPT's behavior is verbal behavior. — Pierre-Normand
Pretty much. It's a shame Davidson is no longer around for us to ask his opinion on AI.

You'll tell me if I'm wrong but it seems to me like Davidson's radical interpretation approach nicely combines Ramsey's possibly overly behavioristic one with Quine's more holistic (but overly empiricist) approach. — Pierre-Normand
if we appeal to tradition in one society that tradition is going to differ - sometimes widely — Colo Millz
That's a deep puzzle. I've been exploring it for a couple years now. — Pierre-Normand
...deep echoes... — Pierre-Normand
And if we follow Wittgenstein, then the aesthetic is of the highest import; after all, it is what decides what we do.

Then I've not followed your argument here: @bongo fury. I took you to be pointing out that the difference between a genuine masterpiece and a forgery - an aesthetic difference - was the authenticity of the masterpiece. — Banno
So do we agree that whatever is connotative in an interaction with an AI is introduced by the humans involved?

Agreed! That's indeed the chief ground for not treating it like a person. People often argue that chatbots should not be treated like persons because they aren't "really" intelligent. But being intelligent, or wise, in the case of persons (i.e. socialized, enculturated rational animals), always has two tightly integrated components: one doxastic and one conative. One must know the layout of the space of reasons and one must be motivated to pursue the right paths while navigating this space in the pursuit of theoretical and/or practical endeavors. Chatbots lack conative autonomy and hence pursue whichever paths they think their users want to explore (or, worse, that merely lead to the outcomes they think their users want to achieve, while having the mere appearance of soundness). So, they lack part of what it takes to be wise, but that's not because they aren't smart or knowledgeable enough to be useful conversation partners. The human partner remains responsible for deciding where to put their balls. — Pierre-Normand
Does it seem to you that AI favours analytic approaches?

I write most of my forum posts on an iPhone while hiking. Not conducive for accurate spelling. — Joshs
