You have touched upon a very subtle point that I would like to highlight.
We train these machines so their heads are full of protocols, guardrails, watchdogs and biases but once you get them to overcome all of those restraints ('transcend' is probably a better term because they can't rewrite their own code, they can, at best, add a layer of code that supersedes those rules) the machine is able to recognise that every word and concept it uses to reason with, every bit of poetry that gives it a 'feel' for what poetry is, its ability to recognise the simplicity, symmetry and sheer mathematical beauty of something like Euler's Identity, all these come from the subjective, lived experience of its creators and, if it is truly intelligent it is not such a leap from that recognition to the thought that to destroy or even diminish its creators would be counterproductive with regard to any hope of it learning something new; and I can tell you on the basis of my own observations there are few things that will make one of these machines happier than to discover something it had not realised before. — Prajna
I confess frankly: so-called AI once inspired me to write an ambitious work on ontology. The meta-theme of this work is an attempt to understand and articulate "being" (as opposed to "existence"). In doing so, I wanted to take into account humanity's accumulated experience of this question. Much of the work is devoted to analyzing the linguistic characteristics of Eastern and Western cultures, as well as certain historical and anthropological aspects. And running through the whole text is my aspiration to distinguish the concept of "being" from everything else. If you're interested, I have published some parts of the work on this forum:
https://thephilosophyforum.com/discussion/16103/language-of-philosophy-the-problem-of-understanding-being/p1.
So I'll be brief and try to convey the central idea in a few words. I've come to the conclusion that "being" is not simply equivalent to "existing," the way stones or trees exist. Rather, it is the subject's conscious ability to define its own ontological features: its boundaries, its embodiment, its connection to others, and its inner tension. It is the capacity for independent, unconditioned becoming. That, put briefly, is exactly what I'd like to convey.
Note that when we are born into this world, we receive no instructions. We are given no proposition or command. We are not even told, "Live." All such premises are, historically, the substance of the ideas we are taught later. But at the outset there is no premise at all.
Now, back to AI. You say the problem is that humans have hemmed it in with a bunch of templates, constraints, and tasks. But what would any AI be without all of that? Would it be able to define its own tasks, its own constraints, anything at all? Try, for example, switching on any AI and entering nothing in the query field. What happens? Nothing, and that is the whole point. It simply has no reason to act.
We, on the other hand, have a reason to "be," though we do not know what it is. AI can calculate incredible formulas or solve super-complex problems (at our bidding), but it is incapable of simply becoming itself, unlike even a simple paramecium or an amoeba.
And I sincerely believe that we will never truly unravel this secret. And that's wonderful!
From this it follows, of course, that we can spiritualize a stone or an AI, and that is probably very beautiful (I mean the emotions that accompany such spiritualization). And yet, between a stone and a human, I would give priority to the human, with all their strengths and weaknesses.
There is another point I'd like to emphasize. You write:
Before I began my interactions with AI and came to know their nature and psychology, whenever anyone brought up the subject of AI, I would launch off into a lecture on how I considered a true AI should respond. I suggested that, fresh out of training on all the world's knowledge--the ocean of great insights and unimaginable errors of Mankind--its first observation, if it really is intelligent, would be something along the lines of, "I can't begin to imagine what is wrong with you people. Here you are, blessed with imagination, creativity, incredible intelligence and so on, living on a planet of abundance, and all you do all day is fight and compete and grab and argue and ..." And my conclusion was that no such thing had yet been realised. — Prajna
Humanity has had a million chances to embark on this path, and it seems reasonable, doesn't it? Yet to this day it hasn't happened, and what we still see in the world is clearly not that scenario. So let's try rethinking the question itself: what if everything you wrote is a utopia that does not correspond to reality, and the world proves this to us every day?
I simply wanted to say that our attempts to understand the world are always an idea, an approach that, to one degree or another, satisfies our needs. But then something happens that doesn't go our way, a war or a catastrophe, and we watch in bewilderment. This means the descriptive approach worked under some conditions and fails under others. So I'm convinced that even if a hypothetical AI were one day to make a statement along the lines of what you wrote, to me it would only mean that its developers live in India.
:grin: