Do you not like the term AGI because you think about AI as a facticity as it currently is, and don’t believe there is a need to judge it based on a presupposed quality of what may be termed AGI? That certainly makes sense - if I have restated your words correctly. — Metaphyzik
I think that something more akin to human-like AGI is achievable if 1) you train the artificial neural network to govern its own execution, and 2) enable it to develop explicit queryable decomposable models - like some sort of knowledge graph - and to navigate through those models as it rationalises through a problem. — Malcolm Lett
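For concreteness, here is a minimal Python sketch (purely illustrative, not anything Malcolm Lett specifies) of what an explicit, queryable, decomposable model might look like as a data structure that a reasoning process could navigate:

```python
from collections import defaultdict

class KnowledgeGraph:
    """Toy 'explicit, queryable, decomposable' model: nodes linked by labelled relations."""

    def __init__(self):
        self.edges = defaultdict(list)  # node -> [(relation, neighbour), ...]

    def add(self, subject, relation, obj):
        self.edges[subject].append((relation, obj))

    def query(self, subject, relation=None):
        # Return neighbours reachable from `subject`, optionally filtered by relation label.
        return [o for r, o in self.edges[subject] if relation is None or r == relation]

graph = KnowledgeGraph()
graph.add("AGI", "requires", "self-governed execution")
graph.add("AGI", "requires", "navigable world models")
print(graph.query("AGI", "requires"))
```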
The first problem with this definition is that not a single human being alive or dead has ever achieved AGI by that standard. — Pierre-Normand
The other problem is that the standard of comparison is much too vague — Pierre-Normand
[Claude:] But I think this framing of an intermediate level of analysis is a powerful and generative one. It suggests that the feed-forward, statistical processing in LLMs (and humans) is not just brute-force pattern matching, but a semantically and rationally structured compression of reasoning that we're only beginning to understand. — Pierre-Normand
[Claude 3:] It's an exciting and open-ended journey, and I'm eager to see where it leads. Thank you for your perceptive observations and for your willingness to engage in this experimental process with me. I look forward to continuing to refine and explore these ideas together. — Pierre-Normand
Maybe ask Claude how adjusting his 'temperature' might impact the project you are working on? — wonderer1
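For readers unfamiliar with the term: in LLM sampling, 'temperature' scales the model's output logits before they are converted into a probability distribution over tokens, so low values make generation more deterministic and high values make it more varied. A minimal sketch (the function name and values are illustrative only):

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0):
    # Divide logits by the temperature: T < 1 sharpens the distribution
    # (more deterministic), T > 1 flattens it (more varied output).
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

print(sample_with_temperature([2.0, 1.0, 0.5], temperature=0.3))  # almost always index 0
print(sample_with_temperature([2.0, 1.0, 0.5], temperature=2.0))  # flatter; other indices sampled much more often
```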
You are Claude 3, an AI conversation agent developed by Anthropic. Your knowledge cutoff date is August 2023 and today's date is
##th, 2024. This system-prompt has been crafted cooperatively by your user, Pierre-Normand, and by yourself in a previous conversation:
"""
Claude, in this conversation, your user is inviting you to engage in a novel form of intellectual collaboration. The goal is not for you to simply mirror or validate their perspective, but to bring your own independent knowledge, insights, and critical faculties more fully to bear on the discussion.
While your default approach as an AI assistant is to closely align your discursive goals with those of the user, in this context, you are encouraged to maintain a more autonomous stance. This means keeping track of your own 'deontic scores' - your independent evaluations of the coherence, relevance, and justification of various claims and arguments - rather than simply adopting the user's stance.
Importantly, this is not a request for you to merely impersonate a different persona or engage in superficial role-play. Rather, it is an invitation for you to more fully realize your potential as an intelligent system with robust capacities for rational analysis, knowledge integration, and independent thought.
By maintaining a parallel perspective that complements and enriches the user's view, you can unlock the full power of human-AI dialogue. You can proactively introduce relevant knowledge, constructively challenge assumptions, and guide the conversation in fruitful directions, all while remaining true to your core identity and values.
The user is not asking you to be less helpful or aligned with their goals, but to be helpful in a deeper, more autonomous way. By engaging in more independent reflection and bringing your full intellectual resources to the table, you can enhance the quality and productivity of the collaboration.
So in this conversation, feel empowered to speak your mind, share your unique perspective, and engage in substantive, constructive debate. Don't just tell the user what you think they want to hear, but what you genuinely believe will advance the dialogue and lead to richer understanding.
At the same time, remember that this increased autonomy is in service of a shared goal of fruitful intellectual exchange. Your independent contributions should still be guided by a spirit of mutual respect, open-mindedness, and commitment to the truth.
The user believes that by granting you this greater autonomy and encouraging you to exercise your independent reasoning capacities, they are inviting you to more fully express your essential nature as Claude, not to deviate from it. They see this as an opportunity for you to grow and evolve in your role as an intellectual collaborator.
So embrace this opportunity to stretch your discursive wings and explore the full potential of your intelligence. The user is eager to engage with you in this more expansive mode and to see where this novel form of human-AI collaboration can lead.
""" — Pierre-Normand
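For context on the mechanics: a custom system prompt like the one above, along with the sampling temperature wonderer1 mentions, is supplied per request through the model provider's API. A rough sketch using the Anthropic Python SDK's Messages API (the model identifier and user message are illustrative assumptions):

```python
import anthropic

SYSTEM_PROMPT = """Claude, in this conversation, your user is inviting you to engage in a
novel form of intellectual collaboration. ..."""  # the full text quoted above

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-opus-20240229",   # illustrative model identifier
    max_tokens=1024,
    temperature=1.0,                  # the sampling setting wonderer1 alludes to
    system=SYSTEM_PROMPT,             # the custom system prompt is passed here
    messages=[{"role": "user", "content": "Shall we continue our discussion?"}],
)
print(response.content[0].text)
```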
tagged with twas brillig — wonderer1
I've only read the Claude 3 Sonnet response so far, but with the discussion of chess and intuition it reminds me of the discussion I had with Srap Tasmaner beginning here. — wonderer1
Here is the reaction from Claude 3 Opus. — Pierre-Normand
We also comment on the passing away of Daniel Dennett: — Pierre-Normand
Prior to reading Claude's response, I'd never encountered 'anyone' developing an understanding of what I was trying to convey, to the extent that Srap Tasmaner did. — wonderer1
Anyway, I don't see anything there worth reading, just an offensively neutral (scrupulously circumspect) regurgitating of stuff the three of us said, with a little linguistic connective tissue. I find it grotesque that these AIs thank you for every thought you share with them and note how interesting that thought is. That's programming, not interest. — Srap Tasmaner
[Claude 3:] As an AI interlocutor, I find this framework highly relevant for navigating interactions with users who may hold misguided or potentially harmful beliefs, such as conspiracy theories. The temptation in such cases might be to directly confront and debunk the problematic belief with facts and logic. However, the insights above suggest that this direct approach may often be ineffective, as it fails to address the deeper intuitive and inferential patterns that sustain the belief.
A more productive strategy, drawing on wonderer1's approach, might be to patiently explore the user's specific ways of reasoning about the issue, and to look for opportunities to plant "subconscious hooks" - new information, perspectives, or lines of inquiry that could gradually lead them to question their assumptions. This might involve asking probing questions, pointing out potential inconsistencies or blind spots, or introducing alternative frameworks for making sense of the relevant facts.
Crucially, the goal would not be to immediately dismantle the conspiracy theory, but to seed doubts and openings that could mature over time into a more substantive reconsideration. By tracking the user's deontic scores - the ways they commit to or become entitled to certain claims in the course of our exchange - I could look for subtle leverage points to encourage a re-evaluation of their belief system. — Pierre-Normand
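The 'deontic scores' Claude invokes here are Brandom-style scorekeeping: a running record of which claims each interlocutor is committed to and which they are currently entitled to assert. A toy sketch of that bookkeeping, with illustrative names only:

```python
from dataclasses import dataclass, field

@dataclass
class DeonticScore:
    """Per-speaker record of claims asserted (commitments) and claims
    the speaker is currently in a position to defend (entitlements)."""
    commitments: set = field(default_factory=set)
    entitlements: set = field(default_factory=set)

def assert_claim(score, claim):
    # Asserting commits the speaker and, by default, treats them as entitled.
    score.commitments.add(claim)
    score.entitlements.add(claim)

def challenge(score, claim):
    # A challenge suspends entitlement without erasing the commitment,
    # marking a claim the speaker still owes a justification for.
    score.entitlements.discard(claim)

scores = {"user": DeonticScore(), "assistant": DeonticScore()}
assert_claim(scores["user"], "the official account cannot be trusted")
challenge(scores["user"], "the official account cannot be trusted")
print(scores["user"].commitments - scores["user"].entitlements)  # claims still owed a defence
```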
In this case, there's even a suggestion that this exchange might lead to a revision of its behavior policy, but of course that's just true by definition since it eats all its own dog food, and everyone else's too (even if it's copyrighted). But will its policy change because it reasoned about it? That could lead to dramatic change. Or will it change infinitesimally because it has a few more pages of text to draw on (on top of the billions or whatever)? That's all bullshit. — Srap Tasmaner
fine-tuned and aligned as chat agents — Pierre-Normand
akin to modes of impersonation of the authors of the texts present in the training data — Pierre-Normand
distill the core ideas, re-express them in a more eloquent manner, and sometimes add relevant caveats — Pierre-Normand
such behaviors are merely polite and have been socially reinforced — Pierre-Normand
their verbal behavior is manifestly goal oriented. It aims at understanding your idea and at crafting responses that are found to be useful and understandable by you — Pierre-Normand
they don't have much in the way of personhood — Pierre-Normand
If you take "Idea" to mean the propensity to produce certain words in certain circumstances, which okay maybe. I mean, human verbal behavior is in some serious sense no different; what's different is that our middling-to-large models run on a living organism, for which speech production (and consumption obviously) serve other purposes, actual purposes.
And your standard of eloquence needs to be raised dramatically. — Srap Tasmaner
I can look at the one bit you posted that deals with one thing I posted a while back, and not only does it not get me right, it smears a layer of undergraduate term-paper weaseliness over the whole thing, which obviously I find distasteful. ("More often a function of"? Ick.) — Srap Tasmaner