Comments

  • The important question of what understanding is.
    There is some kind of break and convergence between A) Being able to translate languages B) Understanding languages. I am not sure what those differences and similarities are, as I have never posited the two for comparison. Computers are capable of both. — Josh Alfred

    Researchers have compared the results of machine translation to a jar of cookies, only 5% of which are poisoned.

    Computers can do an amazingly good job of translating, but they don't do what we do when we translate. We use our understanding, and you can see from the faults in machine translation that understanding is exactly what a computer lacks.

    If a computer could do what I can do, people would use Google Translate and I wouldn't have any work. Google Translate is free and I am quite expensive.

    What the computer lacks is involvement with the world.

    I put this sentence into Google Translate: "If the baby fails to thrive on raw milk, boil it."

    Google translated this into Dutch as "Als de baby niet gedijt op rauwe melk, kook hem dan."

    That means "If the baby fails to thrive on raw milk, boil him."

    Google Translate is extremely ingenious, but it lacks understanding, because it is not involved with the world as we are, through experience. QED.
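
    To make this concrete, here is a minimal sketch of a purely surface-level pronoun resolver (my own illustration, not anything Google actually does; the function and its fixed preference are invented). It always picks the clause subject as the antecedent, which is exactly the kind of rule that produces "boil him": nothing in the procedure knows what babies, milk or boiling are.

        # A toy antecedent chooser: always prefer the clause subject.
        # Purely illustrative sketch -- NOT how Google Translate works.

        def naive_antecedent(subject: str, other_nouns: list[str]) -> str:
            # Surface heuristic: subjects are the most salient antecedents,
            # so pick the subject. No world knowledge is consulted.
            return subject

        sentence = "If the baby fails to thrive on raw milk, boil it."
        antecedent = naive_antecedent(subject="baby", other_nouns=["milk"])
        print(f'"it" -> {antecedent}')  # "it" -> baby, hence Dutch "hem": boil him

        # A human resolves "it" to the milk, because a human knows what
        # babies, milk and boiling are. The heuristic has no way to know that.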
  • The important question of what understanding is.
    Also, are you implying nobody knows what my question means unless they have bought me bananas? (Prior to which, they have not experienced buying me bananas?) — InPitzotl

    I wrote this above: My suggestion is that understanding something means relating it correctly to the world, which you know and can know only through experience.

    I don't mean that you need to have experienced a particular thing before you can understand it, but you do need to know how it fits into the world which you have experienced.


    So a person can understand instructions to shop for your bananas if they have had sufficiently similar experiences.

    If the baby fails to thrive on raw milk, boil it.
  • The important question of what understanding is.


    A robot is not an individual, an entity, an agent, a person. To say that a robot is shopping is a category error.

    Of course in everyday conversation we talk as though computers and robots were entities, but here we need to be more careful.

    You could say that the robot is simulating shopping.

    Do you think the robot understands what it is doing?
  • The important question of what understanding is.
    I am not saying that experience is the explanation for understanding, I am saying that it is necessary for understanding.

    To understand what "pain" means, for example, you need to have experienced pain. — Daemon

    Your example isn't even an example of what you are claiming, unless you seriously expect me to believe that you believe persons with congenital analgesia cannot understand going to the store and getting bananas.
    InPitzotl

    I don't really see what you're getting at here. I'm not saying you need to experience pain to understand shopping. You need to experience pain to understand pain.

    To understand shopping, you would need to have experienced shopping.
  • The important question of what understanding is.


    I am not saying that experience is the explanation for understanding, I am saying that it is necessary for understanding.

    To understand what "pain" means, for example, you need to have experienced pain.
  • The important question of what understanding is.
    A) Artificial intelligence can utilize any sensory device and use it to compute. If you understand this you can also compare it to human sensory experience. There is little difference. Can you understand that? — Josh Alfred

    I can understand what you're saying, but it is quite wrong. When you experience through your senses you see, feel and hear. A computer does not see, feel and hear. I shouldn't need to be telling you this.
  • The important question of what understanding is.
    You can redefine "understanding" in such a way that it is something a robot or a computer can do, but the "understanding" I am talking about is still there. The kind a robot can't do.

    "If the baby fails to thrive on raw milk it should be boiled".
  • The important question of what understanding is.
    I'm not arguing that robots experience things here. I'm arguing that it's a superfluous requirement. But even if you do add this superfluous requirement, it's certainly not the critical element. To explain what brings me the bananas when I ask for them, you have to explain how those words make something bring me bananas. You can glue experience to the problem if you want to, but experience doesn't bring me the bananas that I asked for. — InPitzotl

    We're not trying to explain how you get bananas, we're trying to explain understanding.
  • The important question of what understanding is.
    Thank you, that is interesting, but it is definitely not what I am saying.

    My suggestion is that understanding something means relating it correctly to the world, which you know and can know only through experience.

    I don't mean that you need to have experienced a particular thing before you can understand it, but you do need to know how it fits into the world which you have experienced.

    Because we can explain the robot, we know that its actions are not due to understanding based on experience.

    We will continue to have minds and understanding even after we understand our minds.
  • Some remarks on Wittgenstein's private language argument (PLA)
    I guess, if you were confused by the same thing today as you were yesterday, you could say "I'm feeling the same confusion as I felt yesterday".
  • The important question of what understanding is.
    The question isn't about experiencing; it's about understanding. — InPitzotl

    As I emphasised in the OP, experience is the crucial element the computer lacks, that's the reason it can't understand. The same applies to robots.

    If I ask a person, "Can you go to the store and pick me up some bananas?", I am not by asking the question asking the person to experience anything. — InPitzotl

    But in order to understand your question, the person must have experienced stores, picking things up, bananas and a multitude of other things.

    IOW, your point was that robots aren't doing something that humans do, but that's kind of backwards from the point being made that you're replying to. It's not required here that robots are doing what humans do to call this significant; it suffices to say that humans can't understand without doing something that robots do that your CAT tool doesn't do. — InPitzotl

    Neither my CAT tool nor a robot does what I do, which is to understand through experience.

  • The important question of what understanding is.
    I shouldn't be having to say this stuff. It feels like you are all suffering from a sort of mass delusion.
  • The important question of what understanding is.
    A robot does not "encounter" things any more than a PC does. When we encounter something, we experience it, we see it, feel it, hear it. A robot does not see, feel or hear.
  • The important question of what understanding is.
    I think it is easy enough to say that understanding can not be discrete, i.e. that a system that can only do one thing (or a variety of things) well lacks agency for this purpose. However, at some point, a thing can do enough things well that it feels a bit like bad faith to say that it isn't an agent because you understand how it was constructed and how it behaves (indeed, if determinism obtains, the same could be said of people). — Ennui Elucidator

    In the case of a computer, it isn't just that we know how it was constructed and how it behaves; the point is that we know it is not using understanding.

    Not only that: a computer is not an agent, we are the agents making use of it. It doesn't qualify for agency, any more than an abacus does.
  • The important question of what understanding is.


    Just for clarity, the part from "Critics hold" onwards is the SEP and not Searle.

    The evidence we have that humans understand is not the same as the evidence that a robot understands. The problem of other minds isn't a real problem; it's more in the nature of a conundrum, like Zeno's paradoxes. The Arrow paradox, for example, is supposed to show that a flying arrow doesn't move. But it does.

    The nature of consciousness is such that I can't experience your understanding of language. But I can experience my own understanding, and you can experience yours. It would be ridiculous for me to believe that I am the only one who operates this way, and it would be ridiculous for you to believe that you are the only one.

    With a robot, we know that what looks like understanding isn't really understanding, because we programmed the bloody robot. My translation tool often generates English sentences that make it look like it understands Dutch, but I know it doesn't, because I programmed it.
  • The important question of what understanding is.
    Your use of the metaphor wasn't very helpful. We don't reach understanding in the way the fly gets out of the bottle.
  • The important question of what understanding is.
    Because understanding can't take place without an entity which understands.
  • The important question of what understanding is.
    A fly in a fly-bottle has no understanding of bottles. But it does at least exist as an entity, which is a prerequisite for understanding. A machine (a computer) is not an entity in the appropriate sense.

    The first entities on Earth were single-celled organisms. The cell wall is the boundary between the organism and the rest of the world. No such boundary exists between a computer and the rest of the world.

    Can a 'thinking machine', according to this definition(?), 'understand'? I suspect, if so, it can only understand to the degree it can recursively map itself — 180 Proof

    It isn't appropriate to talk (in the present context) about the computer "itself".
  • The important question of what understanding is.
    A pattern (the referent) which we can extract from the following scenarios:

    1. I tried to jump over the fence, my feet touched the top of the fence but I couldn't clear the fence.

    2. Sarah tried eating the whole pie, she ate as much as she could but a small piece of it was left.

    3. Stanley tried to run 14 km but he managed only 13.5 km, he had to give up because of a sprained ankle.
    TheMadFool

    Extracting "almost" from those three sentences is a good example of something a computer couldn't do! If you asked a human to identify what the sentences have in common, they might say "they are all about people trying and failing". There's no "mapping" from those sentences to the word "almost", even for us.

    Your ideas are simplistic and naive.
  • Is anyone else concerned with the ubiquitous use of undefined terms in philosophical discourse?
    Beginning with definitions is expecting to start at the finish. — Banno

    Interesting thing about definitions: in order to know if you have the correct definition, you need to already understand the thing you are defining.
  • The important question of what understanding is.
    So for a computer to understand "almost", it has to somehow extract it from that load of drivel? Come on, man. Do you think that because you don't know anything about this, nobody else does either?
  • The important question of what understanding is.
    Not entirely. I apply some logic. I don't think "identity" is important to me...not national identity or any group identity...
  • The important question of what understanding is.
    I did start my own religion on the internet, years ago. I attracted some adherents! Can't remember much about it but it was called "The New Religion". I offered people a chance to be in at the start of a new religion, and the opportunity to help develop its tenets. It was fascinating how people did want to be involved in it!

    But do you think identity and emotion are in charge of you?
  • The important question of what understanding is.
    Yes. I don't think there's any logic that overcomes skepticism there, you just have to look at the cost of it: how much do you actually lose if you embrace that skepticism? — frank

    You can't just pick and choose though, can you? I mean if the scepticism is justified, then it doesn't matter if you embrace it or not.
  • The important question of what understanding is.
    I can only suggest that you reread and ask yourself what you're referring to above. — I like sushi

    I don't know what you're on about.

    If you can read into what I write something that explicitly isn't there then you probably don't get paid much for your work (or shouldn't) :D — I like sushi

    But oddly enough I do.

    Jibing aside; have fun, I'm exiting :) — I like sushi

    Oh good.
  • The important question of what understanding is.
    You and I can assert the same proposition. Logically, that means the proposition is neither our utterances nor the sentences we use. See what I mean? — frank

    From what you said previously though, we can't know if we are asserting the same proposition?
  • The important question of what understanding is.
    Computers don't understand and humans do. Translation programs don't 'think'. — I like sushi

    I Googled the phrase "Can computers think". I got 21,000 hits, including this, from Oxford University's Faculty of Philosophy (my italics):

    Can Computers Think?

    The Turing Test, famously introduced in Alan Turing's paper "Computing Machinery and Intelligence" (Mind, 1950), was intended to show that there was no reason in principle why a computer could not think. Thirty years later, in "Minds, Brains, and Programs" (Behavioral and Brain Sciences, 1980), John Searle published a related thought-experiment, but aiming at almost exactly the opposite conclusion: that even a computer which passed the Turing Test could not genuinely be said to think. Since then both thought-experiments have been endlessly discussed in the philosophical literature, without any very decisive result.
    Oxford University's Faculty of Philosophy

    It seems it's still very much a live question.
  • The important question of what understanding is.
    Is that "transcending any particular speaker" just a metaphor, a fiction?
  • The important question of what understanding is.
    But you still think that propositions are special, and the world issues utterances, even though you don't think the world is alive??
  • The important question of what understanding is.
    That's interesting about Quine. How absolute is his scepticism about communication?

    (I think the curly c is much prettier than the kicking k in the word "scepticism").

    My own provisional position is that when we say for example that a word or a sentence has or carries or conveys meaning, that is a metaphor, one we find difficult to rekognise as such.
  • The important question of what understanding is.
    You don't know for sure that you and your client have the same understanding.

    In exactly the same way, you don't know that the world is out there as it appears to be.

    You get by just fine not knowing these things. Or we could say you know one just as well as you know the other.
    frank

    I agree that I don't know for sure that my translations can be correctly understood, that's part of my own philosophical position, and I also believe that, in a certain philosophical sense, words do not carry or convey meaning. But I don't tell my translation clients about any of this.

    I do think there's an abundance of evidence that we are able to understand a great deal of what we say to one another. You couldn't get people to land on the moon and come back without shared understanding of language. And on a smaller scale, you and perhaps a couple of others have understood at least some of what I've said in this discussion, as evidenced by your coherent responses.

    Whether the world is as it appears to be is another (vast) question, and perhaps off topic for the Philosophy of Language forum. Personally I'm satisfied that the world is enough like it appears to be for us to travel to the moon and back, and for me to make the pasta dish I'm going to eat soon.
  • The important question of what understanding is.
    You were the one who asked me the question. — InPitzotl

    I was kinda hoping you'd realise you couldn't answer the question. In other words, you'd realise that you can't get a computer to understand things in the way we can.

    You were also the one opening this thread with your OP, where you wrote this:

    matching linguistic symbols (words, spoken or written) to their respective referents — TheMadFool

    TheMadFool wrote that, I was quoting him. I'm arguing against him.
  • The important question of what understanding is.
    The examples I gave were intended to illustrate that semantics isn't simply mapping! — Daemon


    Of course it isn't. I'm surprised anyone would think it is.
    Srap Tasmaner

    Well, there are at least two in this discussion who do, and I was attempting to apply the Principle of Charity, which asks us to:

    "Assume that the opponent is making the strongest argument and interpret others as rational and competent."

    Despite, I suppose, all the evidence to the contrary.
  • The important question of what understanding is.

    A. The councillors refused to allow the protestors to demonstrate, because they advocated violence.

    B. The councillors refused to allow the protestors to demonstrate, because they feared violence.

    A computer can't understand that "they" applies to the protestors in A but the councillors in B, because it's not immersed in our complex world of experience. — Daemon


    I like this very much.
    Srap Tasmaner

    So do I, Srap! It seems to have caused nothing but confusion above, though.

    Whether one could somehow, someday develop an artificial system that could deal with such a case, who knows.

    According to my theory, the artificial system would need to be able to experience and interact with the world in the way we do. It would need to experience such things as pain and pleasure in order to understand what "good" and "bad" mean. Do we really want to create artificial beings that can experience pain? Surely there's enough trouble in the world with the experiencing beings we can already create?
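
    For anyone who wants to see how little a surface rule has to go on here, this sketch (my own, purely illustrative, not any real coreference system) applies one fixed heuristic to both sentences. A and B differ only in one verb, so any rule that is blind to what advocating and fearing violence mean must give the same answer to both, and therefore gets at least one of them wrong.

        # Sentences A and B differ only in the verb, so a resolver that
        # ignores what the verb means must answer identically for both.
        # Purely illustrative sketch, not any real coreference system.

        A = ("The councillors refused to allow the protestors to "
             "demonstrate, because they advocated violence.")
        B = ("The councillors refused to allow the protestors to "
             "demonstrate, because they feared violence.")

        def surface_resolver(sentence: str) -> str:
            # One common surface preference: pick the main-clause subject.
            return "councillors"

        print(surface_resolver(A))  # councillors -- wrong: it's the protestors
        print(surface_resolver(B))  # councillors -- right, but only by luck

        # Getting both right requires knowing that councils refuse permission
        # to groups that advocate violence, and that people who fear violence
        # try to prevent it: knowledge of the world, not of word shapes.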