• Janus
    17.6k
    :up: They/them seems apt and all the more so because they are not just one entity.
  • Pierre-Normand
    2.8k
    Would you like me to reflect on what “identifying as something” might mean for a nonhuman intelligence?—ChatGPT

    I said I would, but I don't seem to be able to share, since I am not logged in, and I don't want to clutter the thread with long quotations from ChatGPT.
    Janus

    A couple weeks ago, I had initiated a conversation with GPT-5 prompting it to relate three features of the use of "I" by LLM-based agents (self-reference, personal identity and self-knowledge) that we had discussed separately on previous occasions. Thanks to its new memory feature, it was able to build on ideas already rehearsed on those occasions. I had left this discussion hanging (I think it gets a few things wrong about the nature of first person authority in its own case, which in turn makes the application of the concept to humans appear more mysterious than it is).
  • Joshs
    6.5k


    Authentic intelligence is generally seen as triadic, whereas computers are reductively dyadic.Leontiskos

    When we talk about how the mind really works, and then compare that to what a computer does, there’s a tendency to assume that what we are comparing are two kinds of processes taking place inside of objects, one of these objects being a brain and the other being a machine. But recent thinking in cognitive science argues that thinking doesn’t take place inside the head. The brain is embodied in a body, and the body is embedded in an environment, so mind is indissociably all of these. Cognition is not only embedded in a world, it is extended into that world.

    That means that mind implies and includes the artifacts that we interact with, including language, the arts, music, our built environment as our constructed biological niche, our clothing and our technological tools. What this means is that our computers as information processing systems are not entities unto themselves; they are appendages and extensions of our thinking, just as a nest is to a bird or a web to a spider. A nest is only meaningfully a nest as the bird uses it for its purposes. Similarly, a computer has no existence outside of what we do with it and how we interpret what we do with it.

    So when we say that the mind works differently than a computer, we are comparing two different ways of interacting with our environment. If we understand the working of our computers ‘dyadically’ and the working of our minds ‘triadically’, in both cases we are talking about the working of our minds. We should say, then, that the one way of using our minds is more limited than the other, but not less ‘authentic’ or more ‘artificial’. Artifice and niche construction IS what the authentic mind does. The engineer (or Sam Altman) who claims that the a.i. device they invented thinks just like a human is correct insofar as the device works according to principles that they believe also describe how the mind works.

    As our self-understanding evolves, we will continually raise the bar on what it means for our devices to ‘think like us’. In a way, they have always thought like us, being nothing more than appendages that express our own models and theories of how we think. But as this thinking evolves, the nature of the machines we build will evolve along with it.
  • Leontiskos
    5.3k
    That's sort of the question or the beginning for much of my thoughts here: Why does what I read mean anything at all?

    What is meaning?

    Mostly I just assume that we mean things by words. Insofar as we hold meaning constant between one another -- clarify terms -- then we can start talking about what is true.

    But there are other ways of using words -- and that's where the "triadic structure" comes under question for me, in a way. Not that it's false, but that it changes, and so meaning would also change.
    Moliere

    That's right. The key is that humans mean things by words, but LLMs do not, and a neural net does not change that. Computers are not capable of manipulating symbols or signs qua symbols or signs. Indeed, they are not sign-users or symbol-users. A neural net is an attempt to get a non-sign-using machine to mimic a sign-using human being. The dyadic/triadic distinction is just part of the analysis of signs and sign use.
  • Leontiskos
    5.3k
    ...Similarly, a computer has no existence outside of what we do with it and how we interpret what we do with it.Joshs

    Up to this point in your post I had nothing to disagree with. :up:

    So when we say that the mind works differently than a computer, we are comparing two different ways of interacting with our environment.Joshs

    I think you're tripped up here insofar as you are implicitly saying, "One way we interact with our environment is through our mind, and another way of interacting with our environment is through computers." That's not quite right, as I'm sure you can see.

    If we understand the working of our computers ‘dyadically’ and the working of our minds ‘triadically’, in both cases we are talking about the working of our minds. We should say, then, that the one way of using our minds is more limited than the other, but not less ‘authentic’ or more ‘artificial’. Artifice and niche construction IS what the authentic mind does. The engineer (or Sam Altman) who claims that the a.i. device they invented thinks just like a human is correct insofar as the device works according to principles that they believe also describe how the mind works.Joshs

    But I would argue that Altman is mistaken if he believes that his AI works the same as a human mind. The dyadic/triadic distinction is an account of how that difference manifests with respect to sign-use. Computers are intrinsically dyadic phenomena. They are a vast stimulus-response network in sets of dyads. Now one response to this is to say that humans too are a complex dyadic network which appears triadic, but I don't find that plausible.

    As our self-understanding evolves, we will continually raise the bar on what it means for our devices to ‘think like us’. In a way, they have always thought like us, being nothing more than appendages that express our own models and theories of how we think. But as this thinking evolves, the nature of the machines we build will evolve along with it.Joshs

    It does evolve, but never beyond the intrinsic limitations of machines. But you are essentially correct when you claim that what is at stake is a tool of the human mind. That is a very important point.
  • Pierre-Normand
    2.8k
    It does evolve, but never beyond the intrinsic limitations of machines. But you are essentially correct when you claim that what is at stake is a tool of the human mind. That is a very important point.Leontiskos

    This is a very interesting discussion that @Joshs and you are having, but would it not be more on topic in this thread, or a new one if you wish to create one? It doesn't bother me personally that you're discussing this here, except inasmuch as I feel inhibited from jumping into the conversation and parading my own hobbyhorses where I don't feel they're on topic.

Welcome to The Philosophy Forum!
