Why aliens will never learn to speak our language

I agree with you that we lack a good definition for general intelligence. But as my example of a thing that is clearly as intelligent as us but can't predict all our associations demonstrates, even our intuition doesn't agree with the Turing test as what is intelligent. We need to keep working to understand what intelligence is and as I currently see it, the way the Turing test is used in this work and in things like AI development, it diverts us into a path that is harmful. It is quite obvious that a transistor based general intelligence doesn't need to be able to speak any language in an indistinguishable way from humans and that that would be an inefficient and unnecessarily complex way to program general intelligence - yet people tend to see that as an important goal right now. Harmful, I say! — Qmeri
Whether or not a particular Turing test is appropriate in a given situation is largely a question of the breadth of the test. For example, if we are testing whether a computer 'really' understands chess, should the test be very narrow, concerning only its ability to produce good chess moves? Or should it be broad enough to include the computer's ability to produce novel metaphors relating chess to the human condition?
Personally, I don't interpret the spirit of the Turing test as making or implying ontological commitments about how AI should be programmed or trained, about how intelligence should represent sensory information in language, or even about what intelligence is or whether it is ultimately reducible to metrics. Nor do I understand the spirit of the Turing test as prescribing how humans ought to judge a participant's actions. Rather, I understand Alan Turing as very modestly pointing out that humans tend to recognise intelligence in terms of situationally embedded stimulus-response dispositions.
In other words, the specifics of what goes on inside the 'brain' of a participant are relevant only to the functional extent that the brain's processes are a causal precondition for generating such situationally embedded behavioural repertoires; the meaning of language and intelligence is left undetermined with respect to how those stimulus-response mappings are implemented.
Indeed, an important criterion of intelligence is the ability to generate unexpected responses to stimuli. Hence any formal and rigid definition of intelligence solely in terms of rules, whether internal rules describing computational processes inside the brain or external rules specifying situational stimulus-response mappings, would be to a large extent an oxymoron.