• Anthony
    197
    To refer to a machine as being intelligent is a blunder of intelligence. None of the definitions of "intelligence" can be satisfied by machines. Every definition of intelligence (save the misnomer applied to computers) includes terms like the capacity to understand, to think, to reason, to make judgments; and mental capacity. These terms are precisely outside the ambit of what computers can do, so why was such a poor term chosen for the computing operations and data processing of a machine?
  • Happenstance
    71
    I'd say that defining intelligence as a capacity to understand is rather anthropocentric, and that a better definition is a capacity to deal effectively with theoretical and/or practical problems. But this is merely an opinion of mine rather than an assertion.
  • SophistiCat
    2.2k
    We are the masters of our language, not the other way around; we create meanings. If we apply the word 'intelligent' to something other than a human being, then that is what the word means. And supposing that the origin of this meaning is anthropomorphic, as you seem to assume, so what?

    OED gives this as one of the secondary meanings for 'intelligent':

    (of a device or building) able to vary its state or action in response to varying situations and past experience.
  • tim wood
    9.2k
    Some words are just names for things well-known. Other words, terms of art, are invented words, or invented meanings for existing words, and the words themselves or the meanings thereof really cannot be understood without already understanding the thing the word refers to. To refer to a machine as intelligent is to use the word "intelligent" in just that latter sense. To laymen it's a word they think they understand, but they don't. To the specialist, it's not so much a word as a piece of special code. And dictionaries are no help. So, to call the specialized word a "blunder of intelligence" is simply to fail to understand that the word has a specialized meaning, and at the same time to express the discomfort that arises when a word seems misappropriated. I feel your pain, but I remind myself when I can that they're not stupid; maybe I just have to grow with the new usage.
  • Mentalusion
    93


    I believe that's why we have the term "artificial intelligence" (≠ "intelligence").
  • Wayfarer
    22.3k
    Agree with the OP. As it happens I’m working at an AI start-up right now. The system has been given a human name, and is supposed to be like Siri on top of your corporate database, so you can ask ‘her’ questions and ‘she’ will respond in kind. It does amazing things, of course, but there’s no way it’s ‘intelligent’ except by analogy. It will spit out some completely anomalous response from time to time - like ‘seasonal factors only accounted for -608,782,786.12% of variation in sales for this product line’, which I saw the other day - without the least sense that there’s anything amiss.

    ‘Intelligence’ has the connotation of ‘judgement’ - even etymologically - and AI systems don’t judge. They compute.
  • Baden
    16.3k


    What's a better adjective than "intelligent" to describe machines capable of doing certain tasks (e.g. calculations) that are analogous to those carried out using human intelligence and that distinguish these machines from those that cannot carry out such tasks?
  • Mentalusion
    93


    I'm sympathetic to the worry that we overestimate the degree to which machines can be intelligent in the same way humans are; however, I think the definition of "intelligence" is nevertheless going to be pretty fluid. I don't see why "judgment" has to factor into it and, in fact, it doesn't show up as part of the dictionary definition. "Judgment" is also not etymologically related in any way to "intelligence", and neither connotes the other in any form of conventional language use.

    It seems to me that the ability to compute is at least one form intelligence can take. If children respond to a math problem posed by a teacher, they have done nothing significantly different from what a computer does when it returns an answer in response to input. The ability to access "memory" is also going to be another factor, which both humans and computers share.

    I think the crux of the issue is whether, as you admit, it makes sense to speak of "analogous intelligences". If your claim boils down to the intuition that human intelligence works differently than machine intelligence and that is the only proper sense in which we can speak of the term, you will be right, but you will be vacuously right.
  • Anthony
    197

    How, might I ask, would you go about the business of "dealing with" practical problems, or theoretical ones, without understanding them? What's more, intelligence isn't only what thinks in terms of problems or makes everything into a problem. Intelligence actively seeks aesthetic fulfillment and play, and cultivates uncommon nonsense as the background to common sense. Nearly all living beings engage in play to learn because they are intelligent. Show me a processor (without implied programming) that depends on play to "learn" the way a primal human does. Sentience isn't something that "deals with" life... mentally healthy people tend to mitigate the authority in their environment and in themselves. The authority a computer must follow - its implicit code - were it to be personified, would be the equivalent of the military orders of a commander (didn't Microsoft just agree to sell its tech to the military? ...two peas in a pod). The Most High reasoning here is... obedience. Again, the perfect absence of reason.
  • Wayfarer
    22.3k
    If children respond to a math problem posed by a teacher, they have done nothing significantly different from what a computer does when it returns an answer in response to input. - Mentalusion

    I think this is one of the tragic delusions of modern culture. There's an ontological distinction between computers and beings, but due to the 'flattening' of the contemporary cultural landscape - which philosophers have referred to as the 'forgetfulness of Being' - that distinction is difficult to make. But computers don't know anything. A computer is literally a device - an enormously sophisticated and ingenious device, but ultimately a vast array of on-off switches running binary code. So when a child and a computer perform some arithmetical function, the result is the same - but the computer doesn't actually know that; it doesn't mean anything to it. It is no more a being than is a television set or an alarm clock.

    No matter how hard they try, brain scientists and cognitive psychologists will never find a copy of Beethoven’s 5th Symphony in the brain – or copies of words, pictures, grammatical rules or any other kinds of environmental stimuli. The human brain isn’t really empty, of course. But it does not contain most of the things people think it does – not even simple things such as ‘memories’.

    - Robert Epstein, 'Your brain doesn't process information'
  • Wayfarer
    22.3k
    intelligence (n.)

    late 14c., "the highest faculty of the mind, capacity for comprehending general truths;" c. 1400, "faculty of understanding, comprehension," from Old French intelligence (12c) and directly from Latin intelligentia, intellegentia "understanding, knowledge, power of discerning; art, skill, taste," from intelligentem (nominative intelligens) "discerning, appreciative," present participle of intelligere "to understand, comprehend, come to know," from assimilated form of inter "between" (see inter-) + legere "choose, pick out, read," from PIE root *leg- (1) "to collect, gather," with derivatives meaning "to speak (to 'pick out words')."

    Meaning "superior understanding, sagacity, quality of being intelligent" is from early 15c. Sense of "information received or imparted, news" first recorded mid-15c., especially "secret information from spies" (1580s). Meaning "a being endowed with understanding or intelligence" is late 14c. Intelligence quotient first recorded 1921.

    Granted, the word is now used in relation to 'artificial intelligence' and makes sense in that domain of discourse. But the ontological issue remains.
  • Valentinus
    1.6k

    Regarding the Epstein article, the use of "algorithm" stands out for me. When we make rules to help calculation, that is not the same as understanding why those rules can be relied upon to produce the correct answer. Manipulating an abacus with rigor provides reliable results. But the abacus plays no part in checking whether it works.

    On the other hand, the computer is one hell of an abacus. It does not show us a new way to understand but it does provide a different way of thinking about rules that may not have occurred to us otherwise.

    In another register, coding reminds me of text as a tool, and of how Plato noted that reliance upon writing, beneficial as it may be, came at the cost of living in memory as the only way to keep the past alive.

    Trade-offs.
  • Anthony
    197
    We are the masters of our language, not the other way around; we create meanings. If we apply the word 'intelligent' to something other than a human being, then that is what the word means. And supposing that the origin of this meaning is anthropomorphic, as you seem to assume, so what? - SophistiCat

    There's a difference between understanding oneself (not "we") and understanding the external world, of which other people are a part. If machines truly were sentient, they would know that shared meanings tend toward mimesis and social decay (loss of truth; humans, in their primal nature, communicate with each other in a way totally unlike how computers communicate with each other; there's a reason why you don't automatically get other people's meaning). Synergy in human relationships is enhanced when each person is more individuated. Using the same criteria to judge different people creates a less synergistic human system; there is no universal code for individuated human beings. I understand and create meaning unlike you or anyone else (nor do I desire to have much in common with others), thankfully, inasmuch as shared meaning becomes a phenomenon of memory only, and not of reason or understanding.
  • Devans99
    2.7k
    To refer to a machine as being intelligent is a blunder of intelligence - Anthony

    But we are just machines. We have inputs and outputs, memory and a CPU. It's just that we are so much more complex than current computers that we class ourselves apart, when we are basically the same.

    I wonder at what point in the size/complexity of an animal's brain intelligence first manifests itself. Some animals have very small brains:

    https://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons

    So could a worm with 200 neurons be said to manifest intelligence? I would say yes, in a limited way, and that it should be possible to emulate that intelligence with a computer... eventually.

    Found this article with a great quote:

    "At the present time, our most advanced robots, some of which are built in Japan and also at MIT, have the collective intelligence and wisdom of a cockroach; a mentally challenged cockroach; a lobotomized, mentally challenged cockroach."

    https://bigthink.com/mike-colagrossi/what-animals-is-ai-currently-smarter-than

    But they are making better progress on the worm.

    So even the most powerful computers of the current generation struggle to emulate a worm's behaviour. I think computers have 'intelligence'; it's just very limited at the moment.
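
    For a sense of scale, here is a toy sketch (mine, purely illustrative; it is not any real worm emulation, such as the OpenWorm project's model of C. elegans and its roughly 302 neurons) of the kind of crude unit such an emulation wires together by the hundreds, a leaky integrate-and-fire neuron:

        # A toy leaky integrate-and-fire neuron; all constants are illustrative.
        def simulate_neuron(inputs, threshold=1.0, leak=0.9):
            """Accumulate input, leak a little each step, fire and reset at threshold."""
            potential, spikes = 0.0, []
            for current in inputs:
                potential = potential * leak + current
                if potential >= threshold:
                    spikes.append(1)   # the neuron fires...
                    potential = 0.0    # ...and resets
                else:
                    spikes.append(0)
            return spikes

        print(simulate_neuron([0.3, 0.4, 0.5, 0.1, 0.6, 0.7]))  # -> [0, 0, 1, 0, 0, 1]

    Even a few hundred of these, fully wired together, falls far short of a real worm's behaviour, which is rather the point.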
  • javra
    2.6k
    Computers are all entropic: algorithms, memory, and all. Life, regardless of how simple, is negentropic—and, quite arguably, always awareness-endowed, for it must survive in an ever-changing environment of which it must be to some measure aware.

    I’m sometimes surprised by how easily we animate inanimate objects via metaphor (“my car’s my baby: listen to her purr/roar”) and then lose sight of the metaphors we use being just that. No one proposes that a marriage license should be sanctioned between cars and those who love them. But when it comes to AI, we have no problem accepting such things in our sci-fi … and then we often make a quantum leap into arguing that this fantasy is in fact reality—one that is always just on the horizon.

    Till a computer can be produced by some neo-Frankenstein-like person to be negentropic, it won’t think, nor learn, nor perceive. There’s no smidgen of awareness inside to facilitate these activities. But, then, we hardly know what negentropy is, mechanically speaking … never mind how to produce it.

    Still, allegorically addressed, an electronic watch is far more intelligent, far smarter, than a mechanical one: it can remember different times to wake us up, is aware of multiple time zones, and some of the fancier ones can even be emotionally astute in how they address us (some perhaps in sexually alluring voices). They’re still entropic, though.
  • Devans99
    2.7k
    Computers are all entropic: algorithms, memory, and all. Life, regardless of how simple, is negentropic—and, quite arguably, always awareness-endowed, for it must survive in an ever-changing environment of which it must be to some measure aware. - javra

    Both humans and computers turn fuel (food for us, electricity for computers) into heat energy. I assume you mean the way animals assimilate part of what they consume, whereas computers do not? Does this directly impact intelligence?

    Ants, with just 250,000 neurons, are self-aware in the sense that they will scratch a paint spot off themselves when placed in front of a mirror:

    https://en.wikipedia.org/wiki/Self-awareness#Animals

    I wonder if a worm with 200 neurons is self-aware? It's in no danger of eating itself, so maybe there is no evolutionary driver for self-awareness in worms, and perhaps they have not developed the neural circuitry to implement it. A robot with sensors could perhaps be made self-aware: sensors, positional awareness of those sensors, and some simple correlation would do it (see the sketch just below).
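
    A toy sketch of that correlation idea (mine; every name and number is illustrative): the robot 'babbles' with a motor and tags as 'self' any sensor whose readings track its own commands.

        import random

        def correlate(xs, ys):
            """Plain Pearson correlation, no libraries needed."""
            n = len(xs)
            mx, my = sum(xs) / n, sum(ys) / n
            cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            vx = sum((x - mx) ** 2 for x in xs) ** 0.5
            vy = sum((y - my) ** 2 for y in ys) ** 0.5
            return cov / (vx * vy)

        motors, limb_sensor, world_sensor = [], [], []
        for _ in range(200):
            m = random.uniform(-1, 1)                     # random motor babbling
            motors.append(m)
            limb_sensor.append(m + random.gauss(0, 0.1))  # on the robot: tracks the motor
            world_sensor.append(random.gauss(0, 1.0))     # aimed at the world: doesn't

        # Sensors that co-vary with the robot's own commands get tagged "self".
        print("limb :", round(correlate(motors, limb_sensor), 2))   # ~0.98 -> self
        print("world:", round(correlate(motors, world_sensor), 2))  # ~0.0 -> not self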

    As to survival, that is an animal's primary goal. I'm not sure that primary goal is particularly intelligence-inducing compared to other primary goals that could be set for a computer. Wouldn't 'find the meaning of life' or 'find the world's funniest joke' foster intelligence just as well?
  • Happenstance
    71

    Yep, I admit that my definition was somewhat vague and general (hence my disclaimer that it was opinion), but throwing in words such as understanding, fulfilment, play, common sense and sentience isn't exactly making things any clearer.

    What do you think about this video (it loads at a specific time)? Could it not be said that the robot comes to an understanding?
    https://youtu.be/DVprGRt39yg?t=3138

    For those who don't like links for whatever reason, here's a transcript of the video:

    Back in 2005 we started trying to build machines with self-awareness. This robot, to begin with, didn't know what it was. All it knew was that it needed to do something, like walk. Through trial and error, it figured out how to walk using imagination, and then it walked away. And then we did something very cruel: we chopped off a leg and watched what happened. At the beginning it didn't quite know what had happened. But over the period of a day, it then began to limp.

    And then a year ago we were training an AI system for a live demonstration. We wanted to show how we wave all these objects in front of the camera and then the AI can recognize the objects. And so we're preparing this demo and we had on a side screen this ability to watch what certain neurons were responding to. And suddenly we notice that one of the neurons was tracking faces. It was tracking our faces as we were moving around.

    Now the spooky thing about this is that we never trained the system to recognize human faces and yet somehow it learnt to do that. Even though these robots are very simple, we can see there's something else going on there, it's not just programmed. So this is just the beginning…. [Dun, Dun, DUUUUN!! - I added this part]
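
    The walking part of that transcript is less magical than it sounds. Here is a toy version of trial-and-error gait learning (my sketch; the scoring function stands in for a physics simulation, and this is not the researchers' actual self-modelling method):

        import random

        def distance_walked(gait):
            """Stand-in for a physics simulation: a hidden 'ideal' gait walks farthest."""
            ideal = [0.3, -0.7, 0.5, 0.1]
            return -sum((g - i) ** 2 for g, i in zip(gait, ideal))

        gait = [0.0] * 4                      # four joint amplitudes, initially naive
        best = distance_walked(gait)
        for _ in range(1000):                 # trial and error: mutate, keep improvements
            candidate = [g + random.gauss(0, 0.1) for g in gait]
            score = distance_walked(candidate)
            if score > best:
                gait, best = candidate, score
        print([round(g, 2) for g in gait])    # converges toward the 'ideal' gait

    "Limping" after losing a leg is then just the same loop re-run against a changed body.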
  • Wayfarer
    22.3k
    Don't get me wrong, today’s computing technology is fabulous. I owe my livelihood to it. But I still think there’s a lot of confusion around the question of the relationship between computation and intelligence. One of the best books I’ve never read about it is this one
  • javra
    2.6k


    Entropy is the process by which energy progresses toward thermal equilibrium. Some express it as ordered systems progressing toward uniform disorder.

    Negative entropy is the opposite: a process in which the organization of a thermodynamic system increases. Biological processes such as those of homeostasis exemplify negentropy. Here, the complexity of a system is either maintained or increased over time.

    Life is about negative entropy, whereas non-life is about entropy. Some could make this far more complex; still, I take this to be the simplified stance (a compact formulation follows). May I stand corrected if wrong in my summary.
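
    In symbols, for what it's worth (Boltzmann's entropy plus Brillouin's 'negentropy'; the pairing is my own gloss, and others formalize it differently):

        S = k_B \ln W       % entropy: W counts the accessible microstates
        J = S_{\max} - S    % negentropy: how far below maximum entropy the system sits

    Life, on this simplified picture, works to keep J above zero; everything else drifts toward J = 0.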

    Living things, including brains, restructure their “hardware”. For brains, this is called neural plasticity. As the “hardware” is restructured, so too do the capacities of the “software” change in turn (which can further restructure the hardware). This, generally, either maintains or increases complexity over time; roughly, till death occurs.

    The computers we have today, regardless of how complex, do not restructure their hardware via the processes of their software so as to either maintain or increase complexity as a total system—no matter how much electricity is placed into them.

    I don’t have a link for this (I’ve lost track of the researcher’s name), but it’s rudimentary by today’s standards. Take an insect-like body with basic sensory inputs, allow for a chaos algorithm together with a preset goal of arriving at some location, and the robot will adapt to any obstacle put in its path, even with legs bent or cut off, in its motion toward the given location. Very sophisticated stuff as far as artificial intelligence goes. Its behaviors are nevertheless entropic: given its preset instructions, its energy follows paths of least resistance toward thermal equilibrium. It can’t, for example, change its immediate goals of its own volition, as intelligent lifeforms do so as to maintain or increase total system organization. Doing so is one integral aspect of intelligence as we know it to apply to ourselves.

    Programs can mimic the intelligence of lifeforms rather well in some but not all contexts. And their computations certainly outperform human intelligence in many ways. But I maintain that until robots and/or their programs become negentropic, they will only mimic intelligence without actually being intelligent. Their intelligence is only allegorical.

    To be frank, to me this issue boils down to whether or not ontology is one of hard causal determinism. If it is, crystals can be said to be intelligent by comparison to rocks. If it's not, then intelligence strictly applies to valid metaphysical agency ... something which is a property of living systems. I find myself in the latter camp. So to me computers need to become living beings before being endowed with real, and not just artificial, intelligence.
  • Harry Hindu
    5.1k
    Not if you define "thinking" as processing information.

    Computers have an input (raw information (data)) and an output (processed information), just like you and I.

    Computers make decisions, and can only make decisions with the input and/or stored information and time they have, just like you and I.

    Computers can apply both bottom-up processing (relying heavily on input to produce output, as when creating a document or editing a picture) and top-down processing (relying more heavily on the installed software to produce output, as in a game or operating system), just like you and I (a toy contrast below).
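
    A toy contrast (everything here is made up for illustration):

        def bottom_up(text):
            """Editor-like: the output is essentially the input, transformed."""
            return text.upper()

        def top_down(key):
            """Game-like: the program's stored world dominates; input only selects."""
            world = {"n": "You walk north into the fog.",
                     "s": "You walk south toward the lights."}
            return world.get(key, "You stand still.")

        print(bottom_up("hello"))  # "HELLO": determined by what you typed
        print(top_down("n"))       # determined mostly by the installed program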

    Indirect realism implies that "empty space" containing "solid material" may not be the way things actually are, and that reality is itself a process - where a computer's processing is part of the same substratum as your own processing. The world is not as you see it - as solid objects containing minds. It is as you experience it - as a process.
  • Ciaran
    53
    It will spit out some completely anomalous response from time to time - like ‘seasonal factors only accounted for -608,782,786.12% of variation in sales for this product line’, which I saw the other day - without the least sense that there’s anything amiss. - Wayfarer

    Really? And you've never heard a football manager claim their players gave "110 percent", with exactly the same utter failure to recognise anything amiss?
  • Devans99
    2.7k
    The computers we have today, regardless of how complex, do not restructure their hardware - javra

    Humans don't grow new organs dynamically; all we do is maintain and grow the size of existing organs and bodily structures. A computer should be able to achieve this. On mainframes, there is the concept of user-updatable hardware microcode. It has been around for years and allows the updating of low-level hardware operations. A BIOS update on a PC is similar.

    It's quite easy to imagine extending this to a computer self-restructuring its own microcode, allowing hardware-level change, growth and learning, eventually to the same degree a human manages (a toy illustration below). Evolution has developed the neural circuitry for intelligence over billions of years; we've had computers for less than 100 years; our software is somewhat lacking compared to nature's.
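
    As a toy illustration of the self-restructuring idea (mine, and only an analogy: it rewrites a constant in its own Python source file, whereas real microcode and BIOS updates patch firmware, not text):

        import re, sys

        SPEED = 1  # SELF-TUNED

        def retune(new_value, path=__file__):
            """Rewrite this program's own stored constant so the change survives restarts."""
            src = open(path).read()
            src = re.sub(r"SPEED = \d+  # SELF-TUNED",
                         f"SPEED = {new_value}  # SELF-TUNED", src)
            open(path, "w").write(src)

        if __name__ == "__main__":
            print("current speed:", SPEED)
            if len(sys.argv) > 1:            # e.g. `python selftune.py 7` (hypothetical filename)
                retune(int(sys.argv[1]))     # the next run will print the new value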
  • Jake
    1.4k
    But we are just machines. We have inputs and outputs, memory and a CPU. It's just that we are so much more complex than current computers that we class ourselves apart, when we are basically the same. - Devans99

    This sounds about right to me.

    It seems the word "intelligence" is really a comparative term, not a label for some fixed position. We consider ourselves intelligent by comparison to the only other life we know, Earth-bound animals. If the comparison were instead to an alien civilization a billion years beyond our own, then we wouldn't label ourselves as intelligent.

    Our concept of intelligence is derived from what may turn out to be a very narrow spectrum, from single-cell life to humans. That seems like a huge leap to us from within the narrow spectrum, but as the alien example illustrates, intelligence may range so far beyond what we know that we wouldn't recognize it as intelligence. It would seem to us to be magic, or more likely we wouldn't perceive it at all, just as the Internet is entirely invisible to other Earthly species.

    I suspect we aren't intelligent enough to grasp what machine intelligence will be capable of.
  • Devans99
    2.7k
    I suspect we aren't intelligent enough to grasp what machine intelligence will be capable of. - Jake

    Maybe we will need cybernetically enhanced brains just to understand and operate with future generations of computers; they should eventually outperform biologicals in every way. Biologicals are the product of 4 billion years of a random, very inefficient process (evolution). Given 4 billion years of design, imagine what our computers will be like.

    What seems to set life apart from computers at the moment?

    Adaptability is one aspect, I think. Animals and humans seem to have the ability to map knowledge and strategies across domains. Computers don't seem to have this ability at present. A computer program can be written to play chess, yet the same program cannot fight a war, even though the strategies of chess are applicable to warfare. Humans, on the other hand, have no difficulty taking strategies learned in one domain and applying them to another.

    We also have the ability to reason with incomplete data. We interpolate and extrapolate. We induce. Computers struggle without a precise and complete data model. We mix normal and fuzzy logic naturally (a small example below). I think the software side has a long way to go.
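
    A toy contrast (all numbers invented): a machine can interpolate and extrapolate too, but only because we spell the rule out for it, and 'fuzzy' only enters once we code a graded membership instead of a hard threshold.

        def interpolate(known, x):
            """Linear interpolation/extrapolation from a dict of known points."""
            xs = sorted(known)
            if x in known:
                return known[x]
            if x <= xs[0]:
                a, b = xs[0], xs[1]        # extrapolate to the left
            elif x >= xs[-1]:
                a, b = xs[-2], xs[-1]      # extrapolate to the right
            else:
                a = max(p for p in xs if p < x)
                b = min(p for p in xs if p > x)
            t = (x - a) / (b - a)
            return known[a] + t * (known[b] - known[a])

        def fuzzy_tall(height_cm):
            """Graded membership in 'tall': 0 below 160 cm, 1 above 190 cm."""
            return min(1.0, max(0.0, (height_cm - 160) / 30))

        sales = {1: 100.0, 2: 120.0, 4: 180.0}   # month 3 is missing
        print(interpolate(sales, 3))             # -> 150.0 (interpolated)
        print(interpolate(sales, 5))             # -> 210.0 (extrapolated)
        print(fuzzy_tall(175))                   # -> 0.5, neither tall nor not-tall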
  • Pattern-chaser
    1.8k
    why was such a poor term chosen for the computing operations and data processing of a machine? - Anthony

    It was a mistake.
  • Jake
    1.4k
    I think the software side has a long way to go. - Devans99

    All very good points, and well said. Thanks.

    While what you say hits the mark, we might keep in mind that digital intelligence is not going to evolve at the same glacial pace that human intelligence did. As an example, the entire history of digital intelligence so far almost fits within my single human lifetime. And as AI is aimed back upon itself, the resulting feedback loop is likely to accelerate the development of machine intelligence considerably. Hardly an original insight, but something to keep in mind perhaps...
  • Valentinus
    1.6k

    I agree with your comments regarding devices and intelligence. I was only adding the observation that, as tools, they do work that was done only by "intellects" before their use. The element of simulation works in two directions: some machines imitate life, and some life imitates machines.

    Descartes looks out of his bathroom window and wonders if a passerby is an automaton....
  • Happenstance
    71
    I appear to have had a reply that I made earlier deleted on me. I'm guessing that linking a YouTube video is a violation of the ToS, which wasn't my intention. Is my guess correct?
  • Harry Hindu
    5.1k
    Living things, including brains, restructure their “hardware”. For brains, this is called neural plasticity. As the “hardware” is restructured, so too do the capacities of the “software” change in turn (which can further restructure the hardware). This, generally, either maintains or increases complexity over time; roughly, till death occurs.

    The computers we have today, regardless of how complex, do not restructure their hardware via the processes of their software so as to either maintain or increase complexity as a total system—no matter how much electricity is placed into them.
    - javra
    Living things, including brains, don't just "restructure their hardware" randomly, or for no reason. Input is required to influence those changes (learning). Learning requires memory. Your input (experiences of the world) over your lifetime has influenced changes in your hardware and software. A robot with a computer brain could be programmed to update its own programming with new input. Actually, computers do this already to a degree. Input from the user changes the contents of the software (the variables) and how the hardware behaves.

    Natural selection is the environmental pressure that filters out mutations which inhibit an organism's ability to survive long enough to procreate (copy its genes). We'd have to design robots to replicate themselves with the possibility of "mistakes", so that new attributes arise in their design that can then be affected by natural selection (a minimal sketch of that loop follows). As it stands right now, computers evolve by human selection, not natural selection. That is, if you don't see human beings as natural outcomes. If you do, then you can say that computers have evolved by natural selection, just like everything that exists.
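
    A minimal sketch of that replicate-with-mistakes-then-select loop (the "environment" here is a made-up fitness function; evolutionary robotics would score actual robot behaviour):

        import random

        TARGET = [1, 0, 1, 1, 0, 1, 0, 0]        # the traits this environment rewards

        def fitness(genome):
            return sum(g == t for g, t in zip(genome, TARGET))

        def replicate(parent, mutation_rate=0.05):
            """Copy the genes, with occasional 'mistakes'."""
            return [1 - g if random.random() < mutation_rate else g for g in parent]

        population = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
        for _ in range(50):
            population.sort(key=fitness, reverse=True)     # selection pressure
            survivors = population[:10]                    # the fitter half survives...
            population = survivors + [replicate(random.choice(survivors))
                                      for _ in range(10)]  # ...and replicates, imperfectly

        best = max(population, key=fitness)
        print(best, fitness(best))                         # typically TARGET with fitness 8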
  • Harry Hindu
    5.1k
    “Everybody in this country should learn how to program a computer (learn a computer language), because it teaches you how to think.”
    -Steve Jobs
  • javra
    2.6k
    But they're still entropic. Right?

    Have you guys read my full post?

    Is there disagreement, for example, in that you uphold life itself to be entropic rather than negentropic?

    You guys want to say that we'll be making negentropic computers soon. OK. I can't argue with this article of faith (other than by questioning what the benefits of doing so would be). But my point was that until it's negentropic, it's not thinking, or understanding, or intelligent, etc.