I have a lot of objections to the linked essay, but I assume it's given here more for illustration than as a focal point for us to critique specifically. — Efram
We can't say with such certainty whether a computer would be capable of experiencing qualia, for example. — Efram
This was one of the more popular threads on the old forum, so I'm remaking it.
Here is Searle's original essay:
http://users.ecs.soton.ac.uk/harnad/Papers/Py104/searle.comp.html
One objection asks what defines computation: if everything (the entire universe) is a computation, then the statement "the mind is a computation" is only trivially true (a toy sketch of this worry appears below).
Another is that a computer could in theory be made out of anything, so long as the system represents binary symbols (so minds could exist in anything). There is a longer Lanier essay on that problem: http://www.jaronlanier.com/zombie.html
Searle also notes the homunculus problem, i.e., that someone has to be around to interpret and operate the system. — JupiterJess
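To make the first two objections concrete, here is a minimal Python sketch (my own illustration, not anything from Searle or Lanier): if "implementing a computation" only requires some mapping from physical states to computational states, then almost any process with distinguishable states can be read as running a program after the fact.

```python
# A toy illustration of the triviality worry: if implementing a
# computation only requires a mapping from physical states to
# computational states, almost anything qualifies.

# Step 1: an abstract computation, a one-bit toggle run for six steps.
def toggle(state: int) -> int:
    """The computation itself: flip a bit."""
    return 1 - state

trajectory = [0]
for _ in range(5):
    trajectory.append(toggle(trajectory[-1]))
# trajectory is now [0, 1, 0, 1, 0, 1]

# Step 2: an arbitrary "physical" process. Clock ticks stand in here
# for a rock warming in the sun, molecules in a wall, or anything else
# with distinguishable successive states.
physical_states = [f"tick_{t}" for t in range(6)]

# Step 3: a post-hoc interpretation mapping physical states onto the
# computational trajectory. Nothing about the ticks constrains this
# mapping; we simply read the answer we want off the physical sequence.
interpretation = dict(zip(physical_states, trajectory))

for state in physical_states:
    print(f"{state} -> bit {interpretation[state]}")
```

If implementation is this cheap, "the mind is a computation" says nothing distinctive about minds; the real work is done by whoever supplies the interpretation, which is also where the homunculus worry bites.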
“It is much easier,” observed AI pioneer Terry Winograd, “to write a program to carry out abstruse formal operations than to capture the common sense of a dog.”
A dog knows, through its own sort of common sense, that it cannot leap over a house in order to reach its master. It presumably knows this as the directly given meaning of houses and leaps — a meaning it experiences all the way down into its muscles and bones. As for you and me, we know, perhaps without ever having thought about it, that a person cannot be in two places at once. We know (to extract a few examples from the literature of cognitive science) that there is no football stadium on the train to Seattle, that giraffes do not wear hats and underwear, and that a book can aid us in propping up a slide projector, but a sirloin steak probably isn't appropriate.
We could, of course, record any of these facts in a computer. The impossibility arises when we consider how to record and make accessible the entire, unsurveyable, and ill-defined body of common sense. We know all these things, not because our “random access memory” contains separate, atomic propositions bearing witness to every commonsensical fact (their number would be infinite), and not because we have ever stopped to deduce the truth from a few more general propositions (an adequate collection of such propositions isn’t possible even in principle). Our knowledge does not present itself in discrete, logically well-behaved chunks, nor is it contained within a neat deductive system.
It is no surprise, then, that the contextual coherence of things — how things hold together in fluid, immediately accessible, interpenetrating patterns of significance rather than in precisely framed logical relationships — remains to this day the defining problem for AI. It is the problem of meaning.
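To see what the passage above is ruling out, here is a toy Python sketch (my own, with made-up names and relations, loosely in the spirit of knowledge bases like Cyc) of the "atomic propositions" approach: each commonsense fact stored as a separate, discrete entry.

```python
# A toy sketch of the strategy the passage argues against: storing
# common sense as separate, atomic propositions.

# Each "fact" is a (subject, relation, object) triple with a truth value.
knowledge_base = {
    ("giraffe", "wears", "hat"): False,
    ("football_stadium", "is_on", "train_to_seattle"): False,
    ("book", "can_prop_up", "slide_projector"): True,
    ("sirloin_steak", "can_prop_up", "slide_projector"): False,
}

def query(subject: str, relation: str, obj: str):
    """Look up a single commonsense fact, if we happened to record it."""
    return knowledge_base.get((subject, relation, obj), "unknown")

# Any individual fact is trivial to record and retrieve:
print(query("giraffe", "wears", "hat"))        # False
# But the system is silent on everything left unenumerated:
print(query("giraffe", "wears", "underwear"))  # unknown
print(query("dog", "can_leap_over", "house"))  # unknown
```

The sketch stores four facts easily enough; the passage's point is that the space of such facts is open-ended, and that our knowledge does not decompose into logically well-behaved entries like these in the first place.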