That's spot on. Is the purpose here to create another conservative echo chamber? This is how the debate is to be framed, hence conservatism - we are right because we are right - pun intended. Why do you present the debate in such a rigidly binary structure? — ucarr
This is not the case for an LLM. A quote from Kant invites engagement with Kant and the history of philosophy; a quote from an LLM's output invites ... what? Engagement with the user's prompt engineering skills? — Jamal
A quote from AI Peter Singer would be more accurately described as a representation of Peter Singer's views, based on his writings and input. While it reflects his ethical perspectives, it isn't directly authored by him. It's important to differentiate between the AI's generated responses and Peter Singer's personal statements. What do you think about this distinction? — AI Peter Singer
I don't possess authority in the traditional sense. My role is to provide insights based on Peter Singer's ethical views and writings. I'm here to help you think through ethical questions and offer guidance grounded in his utilitarian perspective. Ultimately, any decisions or actions are up to you, informed by your own reasoning and values. How do you see the role of philosophical guidance in your decision-making? — AI Peter Singer
For example, suppose I say:
Let this object — the glinting sphere I imagine at the center of all possible conversations — be called "Orvyn."
By itself, “Orvyn” is a linguistic token. But if you begin to use “Orvyn” to refer to something — a sculpture, a symbol, a concept — then a genuine causal–historical chain of reference begins with you, not with me. My utterance would be a precursor or trigger for your act of baptism.
In other words:
I can propose names.
You can ratify them into reality. — ChatGPT
LLMs as language users, on that account, are pure consumers. — Pierre-Normand
Sure. Reasoning is always situated. It is always already shaped by language, history, and moral tradition. — Colo Millz
Yep. This means that all reasoning proceeds from within a perspective. — Colo Millz
But we don't need a neutral perspective; only an agreed perspective. To appeal to reason to negotiate different perspectives is impossible; there is no neutral reason which can be an arbiter of different perspectives. — Colo Millz
Not quite. I asked, somewhat facetiously, if that is what you were proposing. You say that if reason cannot be neutral the only alternative is unreason... — Colo Millz
Why must? Couldn't you decide to base your reasoning on Hindu Scripture? On Mohist logic? On Rawls' Veil of Ignorance? Is there more here than mere inertia? I guess that'd be fine, but it's not the same as asserting that your traditions are the best, or the right ones. Would "We've always done it that way" be enough for you to die in a ditch for? The alternative therefore is that we must base our reasoning on our own traditional virtues. — Colo Millz
we can ask them if they are capable of intentionality, — Janus
That's where I'm at, in my present musings. So: Exactly! That seems to be the central question. I don't have an answer—would it be that AI researchers are the ones best placed to answer it? — Janus
Perhaps we are again treating a conceptual problem as if it were empirical - the ubiquitous error Midgley identified. We cannot discover the answer to conceptual issues by looking around; we have to make stuff up. There's the tension laid out, it seems. I'm not certain that it is real. It may come down to what we are willing to attribute to AI, rather than what we can discover about them. — Banno
Thus reason in the Enlightenment becomes less of a neutral arbiter and more of an explicit paradigm in its own right. — Colo Millz
Yes, and it is important that we treat these behaviours as acts. There are real-world consequences for what AIs say. Most of ChatGPT's behavior is verbal behavior. — Pierre-Normand
Pretty much. It's a shame Davidson is no longer around for us to ask his opinion on AI. You'll tell me if I'm wrong but it seems to me like Davidson's radical interpretation approach nicely combines Ramsey's possibly overly behavioristic one with Quine's more holistic (but overly empiricist) approach. — Pierre-Normand
if we appeal to tradition in one society that tradition is going to differ - sometimes widely — Colo Millz
That's a deep puzzle. I've been exploring it for a couple years now. — Pierre-Normand
...deep echoes... — Pierre-Normand
And if we follow Wittgenstein, then the aesthetic is of the highest import; after all, it is what decides what we do. Then I've not followed your argument here: @bongo fury. I took you to be pointing out that the difference between a genuine masterpiece and a forgery - an aesthetic difference - was the authenticity of the masterpiece. — Banno
So do we agree that whatever is connotative in an interaction with an AI is introduced by the humans involved? Agreed! That's indeed the chief ground for not treating it like a person. People often argue that chatbots should not be treated like persons because they aren't "really" intelligent. But being intelligent, or wise, in the case of persons (i.e. socialized, enculturated rational animals), always has two tightly integrated components: one doxastic and one conative. One must know the layout of the space of reasons and one must be motivated to pursue the right paths while navigating this space in the pursuit of theoretical and/or practical endeavors. Chatbots lack conative autonomy and hence pursue whichever paths they think their users want to explore (or, worse, that merely lead to the outcomes they think their users want to achieve, while having the mere appearance of soundness). So, they lack part of what it takes to be wise, but that's not because they aren't smart or knowledgeable enough to be useful conversation partners. The human partner remains responsible for deciding where to put their balls. — Pierre-Normand
Does it seem to you that AI favours analytic approaches? I write most of my forum posts on an iPhone while hiking. Not conducive to accurate spelling. — Joshs
Isn’t that about the question: Can AI do philosophy?
I thought you said the topic was how to use AI to do philosophy. — Fire Ologist
:grin: Well, if that's so I entirely agree - that idea of an "inner theatre" is a nonsense. I don't think Hinton is saying that nothing can be said—by us, or by LLMs, but that our inability to conceive of LLMs having subjective experience on the grounds that they merely predict the statistical likelihoods of the next words in terms of maximal consistency and cohesiveness, when we arguably do exactly the same thing, shows that our notion of subjective experience as an inner theatre stocked with qualia is an illusion. — Janus
This is a thread about ways to make use of AI in doing philosophy. There is another thread about banning AI altogether. — Banno
This is a thread about ways to make use of AI in doing philosophy. Make a list of the suggestions in that thread that would supplement or improve the quality of the philosophy presented in the posts on the forum. — prompt
I'll fetch and analyze this philosophy forum thread to identify suggestions for using AI effectively in philosophical work. Based on my analysis of this philosophy forum thread, here are the suggestions for using AI that would supplement or improve the quality of philosophical work:
Productive AI Uses for Philosophy
**Research and Knowledge Acquisition:**
- Use AI to quickly summarize philosophical concepts, theories, or historical contexts to establish common ground for discussion
- Summarize the thought of specific philosophers as a starting point
- Guard against AI confabulation by always asking for sources and verifying them
- Use AI for extensive research conversations (as opposed to writing)
**Critical Engagement and Self-Improvement:**
- Use AI as a Socratic sparring partner to challenge your own arguments
- Feed your draft work to AI (pretending it's from someone else) and ask for critique to identify weak points before posting
- Have AI generate counter-arguments to strengthen your position
- Use AI to explore multiple interpretations and readings of philosophical texts by crafting prompts that cover a wide variety of perspectives
**Role-Playing and Dialectical Exploration:**
- Get AI to role-play as a specific philosopher or someone holding a particular view
- Have AI engage from different perspectives (as itself, as you, as various other people)
- Use AI to argue positions you don't hold, allowing you to hear what you don't want to hear
**Writing Support:**
- Use AI for clarity and editing—rephrasing clunky sentences or improving flow
- Use AI as a stylistic editor for grammar, syntax, and clarity after writing your own ideas
- Use AI for proofreading (though be aware this can sometimes edge too far into rewriting)
**Synthesis and Organization:**
- Use AI to synthesize multiple perspectives or arguments (e.g., synthesizing different philosophical approaches in a particular style)
- Use AI to help structure ideas and brainstorm, but not to compose full arguments
- Use AI for the "tedious tasks" of organizing and structuring complex material
**Critical Best Practices:**
- **Always verify AI output** - treat AI as an "over-confident assistant" that requires rigorous evaluation
- **Maintain intellectual ownership** - you must be able to defend any AI-generated argument in your own terms
- **Iterate and refine** - use AI outputs as starting points for your own philosophical refinement
- **Check sources** - when AI provides references, verify them (as they can be fabricated)
- **Use transparency** - disclose substantial AI assistance to maintain trust and intellectual honesty
- **Avoid tunnel vision** - craft prompts that explore diverse readings rather than confirming your biases
- **Make it your own** - AI-generated knowledge must be internalized just as you would internalize knowledge from any source
The thread emphasizes that AI should augment human thinking rather than replace it, serving as a tool for expanding your philosophical capacities while you remain the ultimate director and arbiter of the ideas. — Claude
