There is some kind of break and convergence between A) Being able to translate languages B) Understanding languages. I am not sure what those differences and similarities are, as I have never posited the two for comparison. Computers are capable of both. — Josh Alfred
Also, are you implying nobody knows what my question means unless they have bought me bananas? (Prior to which, they have not experienced buying me bananas?) — InPitzotl
I am not saying that experience is the explanation for understanding; I am saying that it is necessary for understanding.
To understand what "pain" means, for example, you need to have experienced pain. — Daemon
Your example isn't even an example of what you are claiming, unless you seriously expect me to believe that you believe persons with congenital analgesia cannot understand going to the store and getting bananas. — InPitzotl
A) Artificial intelligence can utilize any sensory device and use it to compute. If you understand this you can also compare it to human sensory experience. There is little difference. Can you understand that? — Josh Alfred
I'm not arguing that robots experience things here. I'm arguing that it's a superfluous requirement. But even if you do add this superfluous requirement, it's certainly not the critical element. To explain what brings me the bananas when I ask for them, you have to explain how those words make something bring me bananas. You can glue experience to the problem if you want to, but experience doesn't bring me the bananas that I asked for. — InPitzotl
The question isn't about experiencing; it's about understanding. — InPitzotl
If I ask a person, "Can you go to the store and pick me up some bananas?", I am not by asking the question asking the person to experience anything. — InPitzotl
IOW, your point was that robots aren't doing something that humans do, but that's kind of backwards from the point being made that you're replying to. It's not required here that robots are doing what humans do to call this significant; it suffices to say that humans can't understand without doing something that robots do that your CAT tool doesn't do. — InPitzotl
I think it is easy enough to say that understanding cannot be discrete, i.e. that a system that can only do one thing (or a variety of things) well lacks agency for this purpose. However, at some point, a thing can do enough things well that it feels a bit like bad faith to say that it isn't an agent because you understand how it was constructed and how it behaves (indeed, if determinism obtains, the same could be said of people). — Ennui Elucidator
Can a 'thinking machine', according to this definition(?), 'understand'? I suspect, if so, it can only understand to the degree it can recursively map itself — 180 Proof
A pattern (the referent) which we can extract from the following scenarios:
1. I tried to jump over the fence, my feet touched the top of the fence but I couldn't clear the fence.
2. Sarah tried eating the whole pie, she ate as much as she could but a small piece of it was left.
3. Stanley tried to run 14 km but he managed only 13.5 km, he had to give up because of a sprained ankle. — TheMadFool
Beginning with definitions is expecting to start at the finish. — Banno
Yes. I don't think there's any logic that overcomes skepticism there, you just have to look at the cost of it: how much do you actually lose if you embrace that skepticism? — frank
I can only suggest that you reread and ask yourself what you're referring to above^^ — I like sushi
If you can read into what I write something that explicitly isn't there then you probably don't get paid much for your work (or shouldn't) :D
Gibing aside; have fun, I'm exiting :)
You and I can assert the same proposition. Logically, that means the proposition is neither our utterances nor the sentences we use. See what I mean? — frank
Computers don't understand and humans do. Translation programs don't 'think'. — I like sushi
Can Computers Think?
The Turing Test, famously introduced in Alan Turing's paper "Computing Machinery and Intelligence" (Mind, 1950), was intended to show that there was no reason in principle why a computer could not think. Thirty years later, in "Minds, Brains, and Programs" (Behavioral and Brain Sciences, 1980), John Searle published a related thought-experiment, but aiming at almost exactly the opposite conclusion: that even a computer which passed the Turing Test could not genuinely be said to think. Since then both thought-experiments have been endlessly discussed in the philosophical literature, without any very decisive result. — Oxford University's Faculty of Philosophy
You don't know for sure that you and your client have the same understanding.
In exactly the same way, you don't know that the world is out there as it appears to be.
You get by just fine not knowing these things. Or we could say you know one just as well as you know the other. — frank
You were the one who asked me the question. — InPitzotl
You were also the one opening this thread with your OP, where you wrote this:
matching linguistic symbols (words, spoken or written) to their respective referents — TheMadFool
The examples I gave were intended to illustrate that semantics isn't simply mapping! — Daemon
Of course it isn't. I'm surprised anyone would think it is. — Srap Tasmaner
A. The councillors refused to allow the protestors to demonstrate, because they advocated violence.
B. The councillors refused to allow the protestors to demonstrate, because they feared violence.
A computer can't understand that "they" refers to the protestors in A but to the councillors in B, because it's not immersed in our complex world of experience. — Daemon
I like this very much. — Srap Tasmaner
Whether one could somehow, someday develop an artificial system that could deal with such a case, who knows.
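Daemon's pair of sentences above is an instance of a Winograd schema: A and B are syntactically identical, so a resolver that looks only at sentence structure must give the same antecedent for "they" in both. A minimal sketch illustrates this (the function name and the nearest-plural-noun heuristic are my own illustration, not anything proposed in the thread):

```python
# Hypothetical sketch: a purely structural pronoun resolver that picks the
# nearest candidate noun preceding "they". Because sentences A and B share
# the same structure, any such symbol-only rule returns the same answer for
# both, even though a human reads "protestors" in A and "councillors" in B.

CANDIDATES = ("councillors", "protestors")

def resolve_they(sentence: str) -> str:
    """Return the candidate noun closest before 'they' in the sentence."""
    words = [w.strip(".,").lower() for w in sentence.split()]
    pronoun_at = words.index("they")
    nearest = ""
    for w in words[:pronoun_at]:
        if w in CANDIDATES:
            nearest = w  # keep overwriting: ends up as the closest one
    return nearest

a = ("The councillors refused to allow the protestors to demonstrate, "
     "because they advocated violence.")
b = ("The councillors refused to allow the protestors to demonstrate, "
     "because they feared violence.")

# The heuristic gives 'protestors' for both A and B.
print(resolve_they(a), resolve_they(b))
```

The sketch does not show that no computer can resolve the pronoun; it only makes precise the narrower claim that the disambiguation cannot come from the symbols and their arrangement alone, which is the point at issue between Daemon and TheMadFool's "matching symbols to referents" picture.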