Computers are symbol manipulators, and that means whatever can be symbolized is within the reach of a computer. — TheMadFool
If you believe there's more to understanding than symbol manipulation... — TheMadFool
Understanding is simply a match-the-following exercise, something a computer can easily accomplish. — TheMadFool
How do you think machine translations work? — TheMadFool
Ask yourself the question: what exactly does understanding semantics mean if a machine, allegedly incapable of semantics, can do as good a job as a translator of languages? — TheMadFool
I've heard that even the very best experts can and do make mistakes. — TheMadFool
Kindly furnish the definitions of "conceivable" and "possible". I'd like to see how they differ, if you don't mind. — TheMadFool
In everyday usage, the words might sometimes be considered synonymous, but in the contexts of metaphysics and modal logic, which are the contexts in which the p-zombie argument is made, 'possible' has a specific and distinct meaning. As the example shows, there's an aspect of 'so far as we know' to conceptions, which is supposed to be resolved in moving to metaphysical possibility - when we say, in the context of modal logic, that there is a possible world in which X is true, we are not just saying that we suppose there might be such a possible world. We should either assert it as an axiom or deduce it from our axioms, and if Chalmers had done the former, physicalists would simply have said that the burden was on him to justify that belief (it gets a bit more complicated when we make a counterfactual premise in order to prove by contradiction, but that is not an issue here.)
If the two words were synonymous, Chalmers would not have spent any time in making the distinction and in attempting to get from the former to the latter, and his opponents would not be challenging that step. — A Raybould
What's the problem with referents? — TheMadFool
The clear liquid that flows in rivers and the oceans, that at times becomes solid and cold, and at other times is invisible vapor, is the referent of the word "water". — TheMadFool
Yep. So what is it that a computer so easily (according to you) links to the word "water"? The referent you just described, or merely the description? — bongo fury
"Within the reach" avoids precision where precision is needed. What do you mean, here? — A Raybould
What's at issue here is my statement that "[Turing-like] tests could be further improved by focusing on what a machine understands, rather than what it knows" and your reply that you don't see anything special in understanding. Being symbol manipulation does not automatically make it simple, and your explanations, which invoke symbol manipulation without showing what sort of manipulation, are just part of the reason for thinking that it is not. — A Raybould
You are barking up the wrong tree here, precisely because translation does not modify the semantics of its input. — A Raybould
Yep. So what is it that a computer so easily (according to you) links to the word "water"? The referent you just described, or merely the description? — bongo fury
The description consists of referents. — TheMadFool
Ok, well to see "why people make such a big deal of understanding" you need to see that they are interested in how we link the word "water" to the water itself, and not merely to more words for water.
"Referent" usually refers to the designated object itself, not to other words, semantically related or not. — bongo fury
How do we do it, link the word "water" to the water itself, in your opinion? — TheMadFool
By learning to agree (or disagree) with other people that particular tokens of the word are pointed at particular instances of the object. — bongo fury
That's to say there is no meaning except in the sense of a consensus. — TheMadFool
What makes you think computers can't do that? — TheMadFool
Given these different conceptions, it is not surprising that you might think it is an easy problem, while I see significant difficulties — A Raybould
For me, it is partly an ability to find the significant, implicit connections between the things you know, and there is also a counterfactual aspect to it: seeing the consequences if things were different, and seeing what needs to change in order to get a desired result. — A Raybould
Conceivable: Anything that can be stated as a proposition is conceivable.
Possible: In the context of modal logic, which is the context of Chalmers' argument, something is possible if and only if it can be stated as a proposition that is true in some possible world. — A Raybould
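The distinction can also be put in standard Kripke-semantics notation (a textbook formulation offered for reference, not Chalmers' own wording):

```latex
% Possibility: \Diamond\varphi holds iff \varphi is true at some world w
% accessible (via relation R) from the actual world w_0.
\Diamond\varphi \iff \exists w \in W \,(w_0 R w \wedge w \Vdash \varphi)
```

Conceivability, by contrast, is an epistemic notion: we can entertain a proposition without knowing whether any such world exists.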
What, agree and disagree about where each other's words have 'landed', out in the world? If by computers you mean some future AI, then sure. This would no doubt be a few steps more advanced than, say, being able to predict where each other's ball has (actually) landed. Which I assume is challenging enough for current robots. — bongo fury
Do you mean that human understanding is reducible to computer logic — TheMadFool
but that we haven't the technology to make it work? If yes then that means you agree with me in principle that human understanding isn't something special, something that can't be handled by logic gates inside computers. — TheMadFool
Searle's argument doesn't stand up to careful scrutiny for the simple reason that semantics are simply acts of linking words to their referents. Just consider the sentence, "dogs eat meat". The semantic part of this sentence consists of matching the words "dog" with a particular animal, "eat" with an act, and "meat" with flesh, i.e. to their referents and that's it, nothing more, nothing less. Understanding is simply a match-the-following exercise, something a computer can easily accomplish. — TheMadFool
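TheMadFool's "match-the-following" picture of semantics can be sketched as a lookup table. The table below and its referent labels are illustrative assumptions, not anything from the thread; note that the right-hand sides are themselves just more strings.

```python
# A toy "match-the-following" model of semantics: each word is mapped to a
# stored label standing in for its referent. The labels are illustrative.
referents = {
    "dogs": "a particular kind of animal",
    "eat": "the act of consuming food",
    "meat": "animal flesh",
}

def match_the_following(sentence):
    """Map each word of the sentence to its stored 'referent' label."""
    return {word: referents.get(word, "<no referent stored>")
            for word in sentence.lower().split()}

print(match_the_following("Dogs eat meat"))
```

This is exactly where bongo fury's objection bites: what the program links to "meat" (or "water") is a description, i.e. more words, not the referent itself.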
Searle says that syntax can not give rise to semantics, and claims this to be the lesson of his "Chinese Room" paper. I don't agree, but I don't see the relationship as simple, either. — A Raybould
Only in the almost trivial sense that neurons are quite evidently some kind of switch or trigger. — bongo fury
But do you at last see the trouble here? — bongo fury
If you want to read Chalmers' own words, he has written a book and a series of papers on the issue. As you did not bother to read my original link, I will not take the time to look up these references; you can find them yourself easily enough if you want to (and they may well be found in that linked article). I will warn you that you will find the papers easier to follow if you start by first reading the reference I gave you. — A Raybould
That is a different question than the one you asked, and I replied to, earlier. The answer to this one is that a TM is always distinguishable from a human, because neither a human, nor just its brain, nor any other part of it, is a TM. A human mind can implement a TM, to a degree, by simulation (thinking through the steps and remembering the state), but this is beside the point here. — A Raybould
If you had actually intended to ask "...indistinguishable from a human when interrogated over a teletype" (or by texting), that would be missing the point that p-zombies are supposed to be physically indistinguishable from humans (see the first paragraph in their Wikipedia entry), even when examined in the most thorough and intrusive way possible. This is a key element in Chalmers' argument against metaphysical physicalism. — A Raybould
As a p-zombie is physically identical to a human (or a human brain, if we agree that no other organ is relevant), then it is made of cells that work in a very non-Turing, non-digital way. — A Raybould
Chalmers believes he can show that there is a possible world identical to ours other than it being inhabited by p-zombies rather than humans, and therefore that the metaphysical doctrine of physicalism - that everything must necessarily be a manifestation of something physical - is false. — A Raybould
Notice that there is no mention of AI or Turing machines here. — A Raybould
P-zombies only enter the AI debate through additional speculation: If p-zombies are possible, then it is also possible that any machine (Turing or otherwise), no matter how much it might seem to be emulating a human, is at most emulating a p-zombie. — A Raybould
As the concept of p-zombies is carefully constructed so as to be beyond scientific examination, such a claim may be impossible to disprove, but it is as vulnerable to Occam's razor as is any hypothesis invoking magic or the supernatural. — A Raybould

Ah. Perhaps that explains my unease with the concept. My understanding is that p-zombies are logically incoherent. They are identical enough to humans to emulate all human behavior, but they don't implement a subjective mind. In which case, mind must be extra-computable. Penrose's idea. I tend to agree that the mind is not computable. But how do p-zombies relate?
There is no point in discussing your own private definition of 'understanding' — A Raybould
you implied that the whole AI community has consistently failed to see what is obvious to you. — A Raybould
'conceivable' vs. 'possible' — A Raybould
Even in Bostrom's simulation argument, neither brains nor minds are TMs: in that argument, I (or, rather, what I perceive as myself) is a process (a computation being performed), and what I perceive as being the rest of you is just data in that process. — A Raybould
To confuse a process (in either the computational sense here, or more generally) with the medium performing the process is like saying "a flight to Miami is an airplane." — A Raybould
A computation is distinct from the entity doing the computation (even if the latter is a simulation - i.e. is itself a computation - they are different computations (and even when a computation is a simulation of itself, they proceed at different rates in unending recursion.)) — A Raybould
I recognize that this locution is fairly common - for example, we find Searle writing "The question is, 'Is the brain a digital computer?' And for the purposes of this discussion I am taking that question as equivalent to 'Are brain processes computational?'" - but, as this quote clearly shows, this is just a manner of speaking, and IMHO it is best avoided, as it tends to lead to confusion (as demonstrated in this thread) and can prime the mind to overlook certain issues in the underlying question (for example, if you assume that the brain is a TM, it is unlikely that you will see what Chalmers is trying to say about p-zombies.) — A Raybould
To me, Searle's first version of his question is little more than what we now call click-bait. — A Raybould
What part of 'a computation is what a Turing machine does, not what it is' do you not understand? At least until we sort that out, I am not going to read any more of this jumble. — A Raybould
I rather suspect that's true, unfortunately. — A Raybould
People who know me on this board know that I'm perfectly capable of getting into the mud. — fishfry
A TM is not a physical device. It's an abstract mathematical construction... — fishfry
...but it is not the computation that the abstract machine is computing. I covered that in yesterday's post....A computation — fishfry
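fishfry's machine-versus-computation distinction can be made concrete with a minimal sketch (a made-up one-rule machine, not anything from the thread): the transition table below is the abstract machine, while the run it produces on a given tape is the computation.

```python
# The *machine* is just this finite data: a transition table over
# (state, symbol) -> (symbol to write, head move, next state).
# This toy machine flips each bit and halts at the first blank ("_").
table = {
    ("flip", "0"): ("1", 1, "flip"),
    ("flip", "1"): ("0", 1, "flip"),
    ("flip", "_"): ("_", 0, "halt"),
}

def run(tape, state="flip", pos=0):
    """Execute the machine on a tape; the run it performs is the *computation*."""
    tape = list(tape)
    while state != "halt":
        symbol = tape[pos] if pos < len(tape) else "_"
        write, move, state = table[(state, symbol)]
        if pos < len(tape):
            tape[pos] = write
        else:
            tape.append(write)
        pos += move
    return "".join(tape)

print(run("0110"))  # -> "1001_"
```

The same table run on different tapes yields different computations, which is the sense in which the machine "is not the computation".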
Do you think that in Bostrom's simulated universes, it's TMs all the way down? — A Raybould
If they're different, then it's possible that something is conceivable but not possible, and that something is possible but not conceivable. Please provide examples of both scenarios for my benefit. Thanks. — TheMadFool
The issue here is your insistence that there is nothing special about understanding and that it is a simple problem for AI. — A Raybould
If, as you say, having AIs understand things is simple, then how come the creators of one of the most advanced AI programs currently written acknowledge that understanding common-sense physics, for one thing, is still a problem? — A Raybould
When I read this, I got the distinct feeling that I was dealing with a bot, which would be quite embarrassing for me, given the original topic of this thread! Things that tend to give away a bot include blatant non-sequiturs, a lack of substance, a tendency to lose the thread, and repetition of errors. You asked essentially the same question as this one here (complete with the same basic error in propositional logic) a few posts back, but when I provided just such an example (the same one as I had given more than once before) you ignored it and went off in a different direction, only to return to the same question now.
I am tempted to just quote my reply from then, but I will spell it out more formally, so you can reference the first part you don't agree with:
P1 Anything that has been conceived of is conceivable.
P2 I have conceived of the proposition 'The Collatz conjecture is true.'
L1 'The Collatz conjecture is true' is conceivable. (P1, P2)
P3 I have conceived of the proposition 'The Collatz conjecture is false.'
L2 'The Collatz conjecture is false' is conceivable. (P1, P3)
P4 Either the Collatz conjecture is true, or it is false; it cannot be both, and there are no other alternatives.
L3 If the Collatz conjecture is true, then the conceivable proposition 'The Collatz conjecture is false' does not state a possibility. (L2, P4)
L4 If the Collatz conjecture is false, then the conceivable proposition 'The Collatz conjecture is true' does not state a possibility. (L1, P4)
C1 There is something that is conceivable but not possible. (L3, L4) — A Raybould
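For reference, the Collatz conjecture invoked in P2-P4 says that iterating n → n/2 (n even) or n → 3n+1 (n odd) always reaches 1, for every positive starting integer. It has been verified for enormous ranges but never proven, which is why both its truth and its falsity remain conceivable. A minimal checker (illustrative only; finite checks decide nothing):

```python
def collatz_reaches_one(n, max_steps=10_000):
    """Iterate the Collatz map from n; True if 1 is reached within the budget."""
    for _ in range(max_steps):
        if n == 1:
            return True
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    return False  # inconclusive within the step budget

print(all(collatz_reaches_one(n) for n in range(1, 1000)))  # -> True
```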
In other words, it's implied that you feel understanding is uncomputable, i.e. that there is "something special" about it, and for that reason it is beyond a computer's ability. — TheMadFool
Hence, I maintain that thinking/understanding is, for sure, computable. — TheMadFool
obscurum per obscurius (explaining the obscure by the more obscure) — TheMadFool
Define the words "conceivable" and "possible" like a dictionary does. — TheMadFool