• Daemon
    591
    That raises the important question of what understanding is and, more importantly, whether it is something beyond the abilities of a computer AI. Speaking from my own experience, understanding/comprehension seems to start off at the very basic level of matching linguistic symbols (words, spoken or written) to their respective referents, e.g. "water" is matched to "cool flowing substance that animals and plants need", etc. This is clearly something a computer can do, right? After such a basic cognitive vocabulary is built, what happens next is simply the recognition of similarities and differences, and so, continuing with my example, a crowd of fans moving in the streets will evoke, by its behavior, the word and thus the concept "flowing", or a fire will evoke the word/concept "not cold", and so on. In other words, there doesn't seem to be anything special about understanding in the sense that it involves something more than symbol manipulation and the ability to discern like/unlike things.TheMadFool

    I have been a professional translator for 20 years. My job is all about understanding. I use a Computer Assisted Translation or CAT tool.

    The CAT tool suggests translations based on what I have already translated. Each time I pair a word or phrase with its translation, I put that into the "translation memory". The CAT tool sometimes surprises me with its translations; it can feel quite spooky, as if the computer understands. But it doesn't, and it can't.
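
    Roughly, the lookup can be sketched like this - a toy illustration in Python, assuming nothing about how any real CAT tool is actually implemented (the TranslationMemory class and its threshold are made up for the example):

        # Toy translation-memory lookup: confirmed source -> target pairs are
        # stored, and new segments are matched by surface string similarity alone.
        from difflib import SequenceMatcher

        class TranslationMemory:
            def __init__(self):
                self.pairs = []  # (source segment, target segment)

            def add(self, source, target):
                self.pairs.append((source, target))

            def suggest(self, source, threshold=0.75):
                """Return the stored translation whose source segment best
                matches the new segment, if the match is close enough."""
                if not self.pairs:
                    return None
                best = max(self.pairs,
                           key=lambda p: SequenceMatcher(None, source, p[0]).ratio())
                if SequenceMatcher(None, source, best[0]).ratio() >= threshold:
                    return best[1]
                return None

        tm = TranslationMemory()
        tm.add("Turn the machine off before cleaning.",
               "Schalten Sie die Maschine vor der Reinigung aus.")
        print(tm.suggest("Turn the machine off before servicing."))

    The match is pure string comparison; nothing in it knows what a machine or cleaning is.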

    I do a wide range of translation work. I do technical translations - operating and maintenance instructions for machines, for example. To understand a text like that, you need to have had experience of work like that. Experience is the crucial element the computer lacks. Experience of all facets of our world. For example, to understand fundamental concepts like "up" and "down", "heavy" and "light", you need to have experienced gravity.

    I translate marketing texts. Very often my clients want me to make their products sound good, and they want their own customers to feel good about their products and their company. To understand "good" you need to have experienced feelings like pleasure and pain, sadness and joy, frustration and satisfaction.

    I translate legal texts, contracts, court documents.

    A. The councillors refused to allow the protestors to demonstrate, because they advocated violence.

    B. The councillors refused to allow the protestors to demonstrate, because they feared violence.

    A computer can't understand that "they" refers to the protestors in A but to the councillors in B, because it's not immersed in our complex world of experience.
  • Tom Storm
    9.2k
    Was there a question? Language is never the thing it describes, so it is not surprising that language can be mastered and words mustered that never make contact with experience. It strikes me too that people are often not much better than computers and often master languages - management speak, or whatever - and have no idea what they are saying.
  • frank
    16k
    It strikes me too that people are often not much better than computers and often master languages - management speak, or whatever - and have no idea what they are saying.Tom Storm

    Nobody masters a language with no idea what they are saying. I don't know why you would suggest such a thing.
  • Daemon
    591
    It was more of an answer than a question.

    Yes, human translators sometimes don't understand what they are translating. Everyone has been baffled by poorly translated product instructions, I guess. And sometimes this is because the human translator does not have experience assembling or using the product, or products like it.
  • Tom Storm
    9.2k
    Nobody masters a language with no idea what they are saying. I don't know why you would suggest such a thing.frank

    Because it is true? But maybe you don't get my meaning and are making it too concrete. I've met dozens of folk who work in government and management who can talk for an hour using ready-to-assemble phrases and current buzzwords without saying anything and - more importantly - without knowing what they are saying.
  • Tom Storm
    9.2k
    Yes, human translators sometimes don't understand what they are translating. Everyone has been baffled by poorly translated product instructions, I guess. And sometimes this is because the human translator does not have experience assembling or using the product, or products like it.Daemon

    Yes, as long as you obey certain rules, understanding the meaning can be secondary.
  • Daemon
    591
    Luckily though, there is still work for translators who do understand what they are talking about.
  • Daemon
    591
    Because it is true? But maybe you don't get my meaning and are making it too concrete. I've met dozens of folk who work in government and management who can talk for an hour using ready-to-assemble phrases and current buzzwords without saying anything and - more importantly - without knowing what they are saying.Tom Storm

    You are speaking rather loosely here. Exaggerating.
  • frank
    16k
    I've met dozens of folk who work in government and management who can talk for an hour using ready-to-assemble phrases and current buzzwords without saying anything and - more importantly - without knowing what they are saying.Tom Storm

    One assumes when they're finished bullshitting you they go back to speaking a language they do understand, so you haven't addressed the OP, have you?
  • Tom Storm
    9.2k
    It was an aside, Frank - the idea being that it is not only computers that can assemble syntax without connecting to the content. You don't have to agree.
  • Tom Storm
    9.2k
    You are speaking rather loosely here. Exaggerating.Daemon

    Not by much.
  • frank
    16k
    It was an aside, Frank - the idea being that it is not only computers that can assemble syntax without connecting to the content. You don't have to agree.Tom Storm

    I think the OP's point was that context is important for translation. You seemed to be arguing to the contrary. Now I don't know what your point was.
  • Tom Storm
    9.2k
    Apologies if I got it wrong. I thought I was agreeing and extending the point.
  • TheMadFool
    13.8k
    I have been a professional translator for 20 years. My job is all about understanding. I use a Computer Assisted Translation or CAT tool.

    The CAT tool suggests translations based on what I have already translated. Each time I pair a word or phrase with its translation, I put that into the "translation memory". The CAT tool sometimes surprises me with its translations; it can feel quite spooky, as if the computer understands. But it doesn't, and it can't.

    I do a wide range of translation work. I do technical translations - operating and maintenance instructions for machines, for example. To understand a text like that, you need to have had experience of work like that. Experience is the crucial element the computer lacks. Experience of all facets of our world. For example, to understand fundamental concepts like "up" and "down", "heavy" and "light", you need to have experienced gravity.

    I translate marketing texts. Very often my clients want me to make their products sound good, and they want their own customers to feel good about their products and their company. To understand "good" you need to have experienced feelings like pleasure and pain, sadness and joy, frustration and satisfaction.

    I translate legal texts, contracts, court documents.

    A. The councillors refused to allow the protestors to demonstrate, because they advocated violence.

    B. The councillors refused to allow the protestors to demonstrate, because they feared violence.

    A computer can't understand that "they" refers to the protestors in A but to the councillors in B, because it's not immersed in our complex world of experience.
    Daemon

    Thanks for your comment. I just watched a video on minds & machines - The Turing Test & Searle's Chinese Room Argument. As per the video, a computer that passes the Turing Test does so solely based on the syntactic properties, and not the semantic properties, of symbols. So, at best, an AI that has passed the Turing Test is simply a clever simulation of a human being, no more, no less.

    My own view on the matter is that semantics is simply what's called mapping - a word X is matched to a corresponding referent, say, Jesus. I'm sure this is possible with computers. It's how children learn to speak, using ostensive definitions. We could start small and build up from there, as it were.
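
    Just to make the proposal concrete, here is a toy sketch (everything in it is made up for illustration): words map to bundles of features, and new experiences evoke words through shared features.

        # Toy "semantics as mapping": symbols paired with referents (feature
        # bundles); a new observation evokes whichever words share features.
        referents = {
            "water": {"flowing", "cool", "wet"},
            "fire": {"hot", "bright", "not cold"},
        }

        def evokes(observed):
            """Return the words whose referents share a feature with the observation."""
            return [word for word, feats in referents.items() if feats & observed]

        # A crowd of fans moving through the streets shares "flowing" with water.
        print(evokes({"flowing", "noisy"}))  # -> ['water']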
  • Daemon
    591
    My own view on the matter is that semantics is simply what's called mapping - a word X is matched to a corresponding referent, say, Jesus. I'm sure this is possible with computers. It's how children learn to speak, using ostensive definitions. We could start small and build up from there, as it were.TheMadFool

    The examples I gave were intended to illustrate that semantics isn't simply mapping!

    Mapping is possible with computers; that's how my CAT tool works. But mapping isn't enough: it doesn't provide understanding. My examples were intended to illustrate what understanding is.

    Children learn some things by ostensive definition, but that isn't enough to allow understanding. I have a two-year-old here. We've just asked him "do you want to play with your cars, or do some gluing?"

    He can't understand what it is to "want" something through ostensive definition. He understands that through experiencing wanting, desire.
  • Hermeticus
    181
    Each time I pair a word or phrase with its translation, I put that into the "translation memory". The CAT tool sometimes surprises me with its translations; it can feel quite spooky, as if the computer understands.Daemon

    It's not unlike what we humans do. Our "translation memory" simply is the mental act of association. I take input/sensation A (I see, feel or taste an apple) and link it to input/sensation B (the word apple in auditory or visual form).

    Experience is the crucial element the computer lacks.Daemon

    For example, to understand fundamental concepts like "up" and "down", "heavy" and "light", you need to have experienced gravity.Daemon

    He can't understand what it is to "want" something through ostensive definition. He understands that through experiencing wanting, desire.Daemon

    Machines can experience physical phenomena that reflect our perception - from cameras to thermometers, pressure and gyro sensors - there is no sense of ours that can't be reproduced digitally. This means that fundamental concepts like "up" and "down", "heavy" and "light" can indeed be experienced by computers.

    Your last example though is a whole different phenomenon, and this is where it gets interesting. Qualia - emotional sensations like happiness, sadness, desire and the like - cannot be found and measured in the physical realm. I have a hard time imagining that AI will ever get a good grip on these concepts.

    I think it's a question of how mechanized someone perceives our human organism to be. After all, the statement that there are no physical phenomena corresponding to emotions is false. Strictly speaking, it's all chemistry - qualitatively and quantitatively measurable. If and how this could possibly translate to machines perceiving emotion is beyond me. All I know is that it does raise one of the most interesting philosophical questions that I have seen in sci-fi: if a robot seemingly acts and feels like a human, how are we to know whether it is merely acting or whether it actually engages with sensation and stimulation in the same way we do?
  • Daemon
    591
    Machines can experience physical phenomena that reflect our perception - from cameras to thermometers, pressure and gyro sensors - there is no sense of ours that can't be reproduced digitally. This means that fundamental concepts like "up" and "down", "heavy" and "light" can indeed be experienced by computers.Hermeticus

    A camera does not see. A thermometer does not feel heat and cold. A pressure sensor does not feel pressure.
  • Wayfarer
    22.8k
    Very true. Computers don’t experience anything, any more than does an abacus. A computer is a vast array of switches, although they’re so effective that they emulate some aspects of experience.

    I worked at an AI company for a few months, some time back. The biggest problem they had was in imparting 'context' to their agent. She (she was given a female persona) always seemed to lack a sense of background to queries.

    One of my early experiences is illustrative. I had a set of test data to play with, from supermarket sales. I noticed you could request data for single shoppers or families with children. I asked 'Shirley' (not her name, but that's a trade secret) if she had information on bachelors. After a moment, she asked 'Bachelor: is that a kind of commodity: olive?' So she was trying to guess if the word 'bachelor' was a kind of olive. I was super-impressed that she tried to guess that. But then the guess was also kind of clueless. This kind of issue used to come up a lot. Like, I notice with Siri that there are certain contextual things she will never get. (I also have Alexa, but all she does is play the radio for me.)


    After all, the statement that there are no physical phenomena corresponding to emotions is false. Strictly speaking, it's all chemistry - qualitatively and quantitatively measurable.Hermeticus

    That is lumpen materialism. There is a reason why all living beings, even very simple ones, cannot be described in terms of chemistry alone. It's that they also encode memory, which is transmitted in the form of DNA. Ernst Mayr, a leading theoretical biologist, said 'The discovery of the genetic code was a breakthrough of the first order. It showed why organisms are fundamentally different from any kind of nonliving material. There is nothing in the inanimate world that has a genetic program which stores information with a history of three thousand million years'. Furthermore, all beings, even simple ones, are also subjects of experience, not simply objects of analysis, which introduces a degree of unpredictability that no simple objective model can hope to capture.
  • Hermeticus
    181
    A camera does not see. A thermometer does not feel heat and cold. A pressure sensor does not feel pressure.Daemon

    The physical principles behind these sensors and the senses of our body are literally the same. The difference is in the signal that is sent thereafter (and even then, both signals are electric) and how the signal is processed.

    It goes way further than that, though. The field of bionic prosthetics has already managed to send all the right signals to the brain. There are robotic arms that allow the user to feel touch. Researchers are working on artificial eyes hooked up to the optic nerve - and while they're not quite finished yet, the technology has already been proven to work.

    That is lumpen materialism. There is a reason why all living beings, even very simple ones, cannot be described in terms of chemistry alone.Wayfarer

    When we talk about what is, it's easiest to speak in terms of materialism. If two processes are comparable, one biological, one mechanical, why shouldn't I be able to compare them? As I said:
    I think it's a question of how mechanized someone perceives our human organism to be.Hermeticus

    I was going to agree with "Living beings cannot be described in terms of chemistry alone", but the more I think about it, I'm not so sure. Your example doesn't make any sense to me either way. What do you think deoxyribonucleic acid is, if not chemistry?
  • Daemon
    591
    There is a reason why all living beings, even very simple ones, cannot be described in terms of chemistry alone. It's that they also encode memory, which is transmitted in the form of DNA.Wayfarer

    Bacteria can swim up or down what is called a chemical gradient. They will swim towards a source of nutrition, and away from a noxious substance. In order to do this, they need to have a form of "memory" which allows them to "know" whether the concentration of a chemical is stronger or weaker than it was a moment ago.

    https://www.cell.com/current-biology/comments/S0960-9822(02)01424-0

    Here's a brief extract from that article:

    Increased concentrations of attractants act via their MCP receptors to cause an immediate inhibition of CheA kinase activity. The same changes in MCP conformation that inhibit CheA lead to relatively slow increases in MCP methylation by CheR, so that despite the continued presence of attractant, CheA activity is eventually restored to the same value it had in the absence of attractant. Conversely, CheB acts to demethylate the MCPs under conditions that cause elevated CheA activity. Methylation and demethylation occur much more slowly than phosphorylation of CheA and CheY. The methylation state of the MCPs can thereby provide a memory mechanism that allows a cell to compare its present situation to its recent past.

    The bacterium does not experience the chemical concentration.

    The "memory" encoded by DNA can also be described entirely in terms of chemistry. So I think Mayr got this wrong.
  • TheMadFool
    13.8k
    The examples I gave were intended to illustrate that semantics isn't simply mapping!

    Mapping is possible with computers; that's how my CAT tool works. But mapping isn't enough: it doesn't provide understanding. My examples were intended to illustrate what understanding is.

    Children learn some things by ostensive definition, but that isn't enough to allow understanding. I have a two-year-old here. We've just asked him "do you want to play with your cars, or do some gluing?"

    He can't understand what it is to "want" something through ostensive definition. He understands that through experiencing wanting, desire.
    Daemon

    Thanks for trying to clarify the issue for me. Much obliged. Please tell me,

    1. What understanding is, if not mapping?

    2. Whatever thinking is, it seems to be some kind of pattern-recognition process. That looks codable, doesn't it? Semantics is patterns, e.g. dogs = domesticated (pattern) wolves (pattern).

    In short, semantics seems to be within the reach of computers provided pattern recognition can be coded.

    What say you?
  • Daemon
    591
    The field of bionic prosthetics has already managed to send all the right signals to the brain. There are robotic arms that allow the user to feel touch. Researchers are working on artificial eyes hooked up to the optic nerve - and while they're not quite finished yet, the technology has already been proven to work.Hermeticus

    But this can't be done without using the brain!
  • Hermeticus
    181
    But this can't be done without using the brain!Daemon


    The difference is in the signal that is sent thereafter and how the signal is processed.Hermeticus
    I have a hard time imagining that AI will ever get a good grip on these concepts.Hermeticus
    If and how this could possibly translate to machines perceiving emotion is beyond me.Hermeticus

    It's hard to picture an artificial brain because we don't even fully understand how our brains work. It's a matter of complexity. Our understanding of it is getting better and better though. On what basis can we say that an artificial brain wouldn't be possible in the future?
  • Daemon
    591
    On what basis can we say that an artificial brain wouldn't be possible in the future?Hermeticus

    We can't, but this is science fiction, not philosophy. I love science fiction, but that's not what I want to talk about here.
  • Daemon
    591
    Please tell me,

    1. What understanding is, if not mapping?
    TheMadFool

    My examples were intended to illustrate what understanding is.
  • Wayfarer
    22.8k
    Bacteria can swim up or down what is called a chemical gradient. They will swim towards a source of nutrition, and away from a noxious substance.Daemon

    Something which no inorganic substance will do. Nobody would deny that such behaviours involve chemistry, but they're not reducible to chemistry.

    The bacterium does not experience the chemical concentration.Daemon

    Perhaps behaviour is what experience looks like from the outside.
  • Daemon
    591
    But if you read that article, you can see that bacterial chemotaxis is entirely reducible to chemistry!
  • TheMadFool
    13.8k
    My examples were intended to illustrate what understanding is.Daemon

    Automated Theorem Proving

    Automated theorem proving (also known as ATP or automated deduction) is a subfield of automated reasoning and mathematical logic dealing with proving mathematical theorems by computer programs. — Wikipedia

    I'm no mathematician, but if math proofs are anything like proofs in philosophy, semantics is a cornerstone. One has to understand the meaning of words and sentences/statements.

    In automated theorem proving, computers have been used to prove math theorems, but we know that computers are semantics-blind and can only manage syntax. Yet they're doing things (proving theorems) which, in our case, require understanding. My question is: can semantics be reduced to syntax?
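
    To make "syntax only" vivid, here is a toy prover - nothing like a real ATP system, just brute force over truth assignments, with every name invented for the example - that certifies a tautology by blindly following fixed rules:

        # Toy "theorem prover": checks a propositional formula under every truth
        # assignment. Purely mechanical symbol-shuffling; no grasp of meaning.
        from itertools import product

        def implies(a, b):
            return (not a) or b

        def is_theorem(formula, variables):
            return all(formula(dict(zip(variables, values)))
                       for values in product([False, True], repeat=len(variables)))

        # Peirce's law: ((p -> q) -> p) -> p
        peirce = lambda v: implies(implies(implies(v["p"], v["q"]), v["p"]), v["p"])
        print(is_theorem(peirce, ["p", "q"]))  # -> True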
  • Daemon
    591
    Yet they're doing things (proving theorems) which, in our case, require understanding.TheMadFool

    They aren't doing things; we are using them to do things.

    It's the same with an abacus. You can push two beads to one end of the wire, but the abacus isn't then proving that 1 + 1 = 2.
  • Hermeticus
    181
    We can't, but this is science fiction, not philosophy. I love science fiction, but that's not what I want to talk about here.Daemon

    Well, we're talking about understanding, and you made experience your premise. I've argued that it's absolutely possible for an AI to have the same experiences we have with our senses, and that it's merely a question of how the content of these experiences is processed. If we're not talking about hypotheticals, then the answer is obviously no: AI cannot understand like humans do.

    If you just want to talk about what understanding in general is, I'm totally with @TheMadFool here. Understanding is mapping: complex chains of association between sensations and representations.
  • TheMadFool
    13.8k
    They aren't doing things; we are using them to do things.

    It's the same with an abacus. You can push two beads to one end of the wire, but the abacus isn't then proving that 1 + 1 = 2.
    Daemon

    So, if we ask a group of grade 1 students to carry out the math operation 1 + 1, they aren't doing things; we are using them to do things.