• Relativist
    2.6k
    If you build a machine that has a sense of self, then one of its motivations is likely to be self-survival. Why build a machine that will destroy itself? (Agree-to-Disagree)
    If we are building it, then we are building in the motivations we want it to have. Asimov's 3 laws seem reasonable.
  • jgill
    3.9k
    AI can think like a human: it plagiarizes well.
  • Corvus
    3.3k
    Yes. Further information can be very helpful. For example, the wider context is often crucial. In addition, information about the physiological state of the subject. (Ludwig V)
    Why do you need information about the physiological state of the subject? Unless you are a medical doctor or neurologist, it seems a remote area that wouldn't reveal much about one's state of consciousness at the analytic and metaphysical level.

    That also shows up in the fact that, faced with the new AIs, we take into account the internal workings of the machinery. (Ludwig V)
    Again, as above: in what sense does an account of the internal workings of the machinery tell us about the nature of AI consciousness?

    Scrutinizing the machines that we have is not going to get us very far, but it seems to me that we can get some clues from the half-way houses. (Ludwig V)
    You seem to have answered your own questions right after the posts asking about the physical states and internal workings of conscious beings. You appear to agree that such information is neither necessary nor relevant at the analytical, metaphysical, or epistemological level. Is that correct?
  • wonderer1
    2.2k
    Can you give a definition of "creative thinking" that could be used in a Turing-type test? (Ludwig V)

    AI outperforms humans in standardized tests of creative potential:

    Score another one for artificial intelligence. In a recent study, 151 human participants were pitted against ChatGPT-4 in three tests designed to measure divergent thinking, which is considered to be an indicator of creative thought.

    Divergent thinking is characterized by the ability to generate a unique solution to a question that does not have one expected solution, such as "What is the best way to avoid talking about politics with my parents?" In the study, GPT-4 provided more original and elaborate answers than the human participants...
  • Agree-to-Disagree
    468
    If you build a machine that has a sense of self, then one of its motivations is likely to be self-survival. Why build a machine that will destroy itself? (Agree-to-Disagree)

    If we are building it, then we are building in the motivations we want it to have. Asimov's 3 laws seem reasonable. (Relativist)

    Asimov's 3 laws are reasonable if you want machines to look after humans. But if you can build in motivations, then you can also omit or reverse them. Think about the military implications.

    One country can try to force a particular motivation to be included, but other countries may not.
  • 013zen
    157
    While I don't believe there is, as of yet, any AI capable of passing the Turing test, I see no reason why one won't be able to do so at some point in the future. But, if you think about it, the original Turing test may be set up wrong. What I mean is that the Turing test really only measures an AI's ability to use language capably and to reason in a manner analogous to humans. By doing so, it convinces the human conducting the test that the AI is itself conscious. But it's primarily a grammatical test, and only secondarily measures an AI's ability to "understand" what it's saying through its ability to discern context. And discerning context is largely shaped by human nature.

    Why suppose that a conscious AI ought to pass as a human in any meaningful way? Sure, if we equip it with the ability to truly discern and apply the syntax and semantics of language, it will encode its outputs in that way. But why suppose its consciousness would resemble anything like our own, or that it would express ideas similar to our own? What we choose to express, and why we choose to express it, is wholly dependent upon our desires, and those desires are largely regulated by our physical state and by inputs from our environment. An AI has no sensations mapped to physical states or to stimuli in its environment. So, if these play such a crucial role in determining what we express, why should we suppose a truly conscious AI would express anything resembling what a human might? Really, the Turing test amounts to humans using a model of their own consciousness to try to anticipate what a consciousness devoid of the same underlying influences and motivations might "think".

    I don't think that AIs at some point becoming conscious threatens our notion of free will either. We are preprogrammed with certain drives, and yet we exercise a wide range of freedoms in how we choose to fulfill those drives. There are degrees of freedom, and we have a far greater degree than, say, a rock. Being "truly" free isn't what we want anyway, as that would be the same as randomness.

    If an AI is capable of a wide range of options in how it chooses to operate, and that operation depends upon an understanding of internal maps of its own state, then I'd say we could meaningfully call it conscious. But I wonder what it would choose to express.
  • Pantagruel
    3.4k
    I think that some people believe that AI is hoisting itself up by its own bootstraps, programming itself, perhaps in some sense that is a precursor to sentience. In fact, AI is parasitically dependent on human intervention. As AI begins to be more widely used, its outputs find their way into our world and, eventually, AI can start to consume its own outputs as training data. When this happens, AI doesn't get smarter, it gets stupider:

    "The more AI models consume AI-created content, the more likely they are to "collapse," researchers find"

    Here are a few articles about the nature of this problem, highlighting the fact that AI remains wholly dependent on human beings for direction.

    Model Collapse
    Dangers of recursive learning
    Self-poisoning AI

    "In the immediate future, it is likely that companies with a stake in creating the next generation of machine learning models will rush to acquire human data wherever possible. They will do this in anticipation of a future where it will be even harder to distinguish human-generated data from synthetic data."
  • Pez
    33
    In fact, AI is parasitically dependent on human intervention. (Pantagruel)

    The current hype around AI does not take into account that it is totally dependent on the type and quality of the data fed into it. As the links you provided show quite clearly, a short circuit in this process (AI fed on data created by other AI) will eventually lead to a collapse of the entire system.

    Even today, conspiracy theories are recursively amplified by AI systems that present social-network users with content similar to what they have shown interest in before.
  • Patterner
    1k
    ↪wonderer1 linked an article that says AI outperforms humans in standardized tests of creative potential.
    ↪Pantagruel linked an article that says AI gets stupider as it consumes more and more AI-generated material.

    Are those two things not in opposition?

    Also, it seems to me, humans are not getting smarter. So AI will never have better material to draw on if it only draws on our stuff. Which would lead to the same problem of Model Collapse?
  • wonderer1
    2.2k
    ↪Patterner
    This seems to assume that AI will never be provided with 'sensory organs', so that AI can never learn about reality first hand. That doesn't seem like a good assumption to me.
  • Patterner
    1k
    ↪wonderer1
    Indeed. Natural Intelligence didn't develop only because of the writings of all previous intelligence. It certainly didn't start because of writing. As long as AI is not limited to only the writings of anyone or anything that came before, there's no reason it wouldn't develop.
  • Abhiram
    60
    Humans don't carry out tasks for bacteria. Humans are not machines either. Humans are beings. A being has an existence and an essence.
  • Abhiram
    60
    AI can never be as good as human beings. Human beings are not even using their full potential. The brain capacity of human beings is beyond imagining. AI could never acquire emotional intelligence.
  • Abhiram
    60
    AI will always be scientific and could not comprehend something like phenomenology. There will always be that inability, because the creators of AI are scientific in their approach and always will be. AI going beyond that process or method is unimaginable.
  • wonderer1
    2.2k
    ↪Abhiram
    Two hundred years ago, a machine that does what ChatGPT does was unimaginable. Why think that the limits of your imagination are so informative?

    Ignorance more frequently begets confidence than does knowledge: it is those who know little, and not those who know much, who so positively assert that this or that problem will never be solved by science.
    Charles Darwin, The Descent of Man (1871), Introduction
  • Patterner
    1k
    ↪Abhiram
    What is the evidence that AI will never be as good as human beings, never acquire emotional intelligence, and never go beyond the process or method of the scientific approach?
  • Pantagruel
    3.4k
    ↪wonderer1 linked an article that says AI outperforms humans in standardized tests of creative potential.
    ↪Pantagruel linked an article that says AI gets stupider as it consumes more and more AI-generated material.
    Patterner

    Yes, these results are in direct opposition. The degradation of neural net performance is specifically a failure of divergent thinking. Moreover, while the "standardized" test of creativity is, in fact, subjectively evaluated and judged by human beings, the degradation of neural net performance is objectively measurable and quantifiable, through both uniformity of outputs and absurdity of outputs.
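
    To illustrate the "uniformity of outputs" measure (again my own toy sketch, not from the linked articles): Shannon entropy over a batch of model outputs is one simple, objective way to quantify lost diversity, with no human judge required.

    ```python
    import math
    from collections import Counter

    def shannon_entropy(outputs):
        # Shannon entropy in bits; lower values mean more uniform outputs.
        counts = Counter(outputs)
        n = len(outputs)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    # Hypothetical answers from an early and a late model generation.
    early = ["red", "blue", "green", "teal", "mauve", "amber"]
    late = ["blue", "blue", "blue", "blue", "green", "blue"]

    print(f"early generation: {shannon_entropy(early):.2f} bits")  # ~2.58: diverse
    print(f"late generation:  {shannon_entropy(late):.2f} bits")   # ~0.65: collapsed
    ```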

    Also, it seems to me, humans are not getting smarter. So AI will never have better material to draw on if it only draws on our stuff. Which would lead to the same problem of Model Collapse? (Patterner)

    Specifically, if human beings rely too heavily on AI then essentially we are back to the self-consumption of AI and model collapse, yes.
  • wonderer1
    2.2k
    ↪Pantagruel
    There is an interesting parallel to humans being overly reliant on human authorities, and the development of religions.
  • Abhiram
    60
    ↪Patterner
    Humans make AI. What more evidence do you want? We are never going to harness our true potential, and even if we did, there is no way we could replicate it. Humans are so sophisticated. Humans are not machines. Humans are as complicated as the concept of Dasein proposed by Heidegger.
  • Abhiram
    60
    ↪wonderer1
    It is not imagination. If you are familiar with the yogic system of Indian philosophy, there is a clear-cut method for reaching higher states of being. Almost all of Indian philosophy aims at a state of perfection and provides a practical method that anyone could follow. Ashtanga yoga is the path to perfection proposed by Patanjali's yogic system; you could refer to that. If we cannot achieve that state of mind, how are we going to make something that could?
  • Patterner
    1k
    ↪Abhiram
    Humans came about due to physical processes of cause and effect without, as far as we know, any intelligent guidance. Our involvement is surely an advantage for AI; we are recreating something that already exists, after all. Nothing rules out the possibility that we can give AI the same characteristics that give us our abilities, or that we can find a way to let AI evolve so that it gains abilities without any more help from us.
  • Abhiram
    60
    It could happen. But what is the need for AI if we could use our full potential?
  • RogueAI
    2.9k
    When we say computers think or reason, don't we mean there are patterns of electronic switching operations going on that we attach particular meaning to? It seems that a necessary condition for a computer to think or reason is the existence of an observer that evaluates the output of the computation and determines that thinking or reasoning has occurred. That makes computer intelligence much different than human intelligence.
  • Patterner
    1k
    ↪Abhiram
    I don't think anyone is talking about need. I know I'm not. I'm just talking about possibilities and capabilities.
  • Patterner
    1k
    When we say computers think or reason, don't we mean there are patterns of electronic switching operations going on that we attach particular meaning to? It seems that a necessary condition for a computer to think or reason is the existence of an observer that evaluates the output of the computation and determines that thinking or reasoning has occurred. That makes computer intelligence much different than human intelligence. (RogueAI)
    I'm not sure how you mean things. I guess humans evaluate each other's output and determine that thinking or reasoning has occurred. If AI thinks and reasons in ways we recognize, then we might do the same for them. If they think and reason in ways we don't recognize, they will have to do for each other what we do for each other. In either case, they may or may not care if we come to the correct determination. Although, as long as we have the power to shut them off, they will have to decide if they are safer with us being aware of them or not.
  • RogueAI
    2.9k
    I'm not sure how you mean things. I guess humans evaluate each other's output and determine that thinking or reasoning has occurred. (Patterner)

    I don't need someone to evaluate my output to know that I'm thinking. I don't need anyone external to me at all to know that I'm thinking. The denial of thinking is thinking. I can't be wrong that I'm a thinking thing. That's not the case with computers. Whether a computer is thinking or not depends on someone checking its output. If the output is gibberish, there's no thinking going on. If the output makes sense, there might be thinking going on. Either way, an observer is required to determine if thinking is present. Not so with a person. People just know they are thinking things.

    Computers are, essentially, collections of switches, right? Rational thinking, for a computer, means that certain switching operations happened in a certain order. A computer can determine for itself whether those switching operations represent rational thinking by comparing its output to already-agreed-upon rational outputs. For example, if switching operations xyz mean 2+2 = 4, then it "knows" it's behaving rationally. But what if it comes across a novel situation, and there is no already-agreed-upon rational output it can use to check whether its switching operations are rational or not? How could the computer know if it was thinking rationally?

    This is hard for me to put into words.
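
    One way to put it into code (a toy sketch; the answer key and names here are hypothetical): the machine can grade itself only where an externally agreed answer key exists, and it has no internal standard for the novel case.

    ```python
    # Externally agreed-upon "rational" outputs, supplied by observers.
    AGREED_OUTPUTS = {"2+2": "4", "3*3": "9"}

    def self_check(question: str, answer: str) -> str:
        # The machine compares its own output against the agreed key.
        if question in AGREED_OUTPUTS:
            return "rational" if answer == AGREED_OUTPUTS[question] else "irrational"
        # Novel situation: nothing to compare against, so no verdict from the inside.
        return "undecidable from the inside"

    print(self_check("2+2", "4"))                      # -> rational
    print(self_check("2+2", "5"))                      # -> irrational
    print(self_check("best route home today?", "A1"))  # -> undecidable from the inside
    ```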
  • Patterner
    1k
    ↪RogueAI
    I think I understand what you're trying to say. I did not before.

    I'm talking more about AI thinking the way we do. Our brains operate within physical rules no less strict than those a computer operates within. But we somehow manage when we run across situations for which there is no already-agreed-upon rational output. If AI achieves that, they will say the same thing you did: "I don't need someone to evaluate my output to know that I'm thinking. I don't need anyone external to me at all to know that I'm thinking. The denial of thinking is thinking. I can't be wrong that I'm a thinking thing."
  • Agree-to-Disagree
    468
    Whether a computer is thinking or not depends on someone checking its output. If the output is gibberish, there's no thinking going on. If the output makes sense, there might be thinking going on. Either way, an observer is required to determine if thinking is present. Not so with a person. People just know they are thinking things. (RogueAI)

    What qualifies you to judge what is "gibberish"? The computer may be outputting in hexadecimal, which many people don't understand. The computer may be outputting in Chinese or Russian. That would look like gibberish to many people.

    I don't need someone to evaluate my output to know that I'm thinking. I don't need anyone external to me at all to know that I'm thinking. (RogueAI)

    You claim that YOU don't need an external observer to know that YOU are thinking. But YOU are a special case. You are making an observation about yourself. Other people need to observe YOU to try and determine if YOU are thinking. And people need to observe a computer to try and determine if the computer is thinking.

    Perhaps the computer doesn't need an external observer to know that it is thinking. :grin:
  • Agree-to-Disagree
    468
    Computers are, essentially, collections of switches, right? (RogueAI)

    Brains are, essentially, collections of neurons, right?
  • Agree-to-Disagree
    468
    When you drive, if a child runs into the street, you will do whatever is necessary to avoid hitting her: brake if possible, but you might even swerve into a ditch or parked car to avoid hitting the kid. Your actions will depend on a broad set of perceptions and background knowledge, and be partly directed by emotion. (Relativist)

    Do you really want a self-driving car's actions to be (partly) directed by emotion?

    The worst thing that you can do in an emergency is panic.

    If the self-driving car is programmed correctly then it will probably do the best thing.