• hypericin
    1.6k
    A distinction is missing:
    Computers are not just computing machines. They are very special, singular machines: they can simulate any computing machine, including themselves. There are many computing machines found in nature which, while too powerful to be feasibly simulated by a computer, nonetheless lack this property of universal simulation.
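    The point about universal simulation can be sketched concretely: a single fixed interpreter can run any machine that is handed to it as data. The encoding below is an illustrative toy, not any standard formalism.

```python
# A minimal sketch of "universal simulation": one fixed interpreter
# that can run any Turing-style machine supplied as a data table.
# The encoding (state names, "_" for blank) is illustrative only.

def run_tm(rules, tape, state="start", steps=100):
    """Simulate a machine given by a transition table.

    rules maps (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left), +1 (right), or 0 (stay).
    """
    tape = dict(enumerate(tape))  # sparse tape; unwritten cells are "_"
    pos = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(pos, "_")
        state, tape[pos], move = rules[(state, symbol)]
        pos += move
    return "".join(tape[i] for i in sorted(tape))

# A toy machine that flips every bit, then halts at the first blank.
flipper = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
```

    Feeding the same interpreter a different rule table makes it behave as a different machine, which is the sense in which a computer is "universal".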

    Searle's argument is not against computing machines understanding language. It is claiming that computers which simulate computing machines which understand language, themselves do not understand language. This much is plausibly argued by the Chinese Room. But even if you accept that, it by no means implies that computing machines cannot understand language.

    Therefore his conclusion that consciousness is bound to some kind of biological excretion is totally unwarranted.

    I'm not familiar with the secondary literature; is this objection discussed?
  • apokrisis
    7.3k
    But even if you accept that, it by no means implies that computing machines cannot understand language.

    Therefore his conclusion that consciousness is bound to some kind of biological excretion is totally unwarranted.
    hypericin

    Searle did offer the argument that consciousness is a physically embodied process. And that makes the crucial difference.

    Against the AI symbol processing story, Searle points out that a computer might simulate the weather, but simulated weather will never make you wet. Likewise, a simulated carburettor will never drive a car. Simulations have no real effects on the world.

    And so it is with the brain and its neurons. It may look like a computational pattern at one level. But that pattern spinning can't be divorced from the environment that is being regulated in realtime. Like the weather or a carburettor, the neural collective is actually pushing and shoving against the real world.

    That then is the semantics that breathes life into the syntax. And that is also the semantics that is missing if a brain, a carburettor or the weather is reduced to a mere syntactical simulation.

    So it is not that there isn't computation or syntax in play when it comes to life and mind. Organisms do exist by being able to impose their syntactic structure on their physical environments. But there also has to be an actualised semantics. The physics of the world needs to be getting rearranged in accordance with a "point of view" for there to indeed be this "point of view", and not some abstracted and meaningless clank of blind syntax inside a Chinese Room.
  • bongo fury
    1.7k
    Like the weather or a carburettor, the neural collective is actually pushing and shoving against the real world.

    That then is the semantics that breathes life into the syntax. And that is also the semantics that is missing if a brain, a carburettor or the weather is reduced to a mere syntactical simulation.
    apokrisis

    Not stalking you @apokrisis, just interested in semiotics. (But normally purchase non-bio!)

    I doubt that a carburettor will function as a referring symbol merely by functioning as an actual carburettor. It would need to perform a semantic, referential function, by being pointed at things.
  • apokrisis
    7.3k
    I don't think that a carburettor will function as a referring symbol merely by functioning as an actual carburettor, but only by performing a semantic, referential function, and being pointed at things.bongo fury

    I was just citing Searle's examples. A full semiotic argument would be more complex.

    For car drivers, it is the accelerator pedal that is the mechanical switch which connects to their entropic desires. The symbol that is "in mind".

    The carburettor is buried out of sight as just part of the necessary set of mechanical linkages that will actually turn an explosion of petrol vapour into me going 90 mph in a 30 mph zone.

    For a driver, there are all kinds of signs involved. The ones on my speedo dial that I'm relishing, the ones on the roadside that I ignore. Even just the sign of the landscape whizzing past my window at my command. Or the feeling of my foot flat to the floor as the annoying sign that I can't make the damn thing go faster.

    But if I'm doing all this within a car simulator, I can drive carefully or smash into the nearest lamppost without it being an actually meaningful difference. It is only when this semiotic umwelt is plugged into the physics of the world that there can be consequences that matter.

    The point of talking about simulated carburettors or simulated rain storms is just to say that the syntax exists to actually do a semantic job. And that job is to regulate the material world. That is what defines semiosis - a modelling relation - so far as life and mind go.
  • magritte
    555
    For car drivers, it is the accelerator pedal that is the mechanical switch which connects to their entropic desires. The symbol that is "in mind".apokrisis

    Isn't that overly simplistic in that the point of intentional action just triggers a whole range of prearranged links in the machine and unknown and at times unknowable interfaces with the environment? Just to try a couple of unlikely but conceivable cases, how does the scenario work in space or in a lake?
  • hypericin
    1.6k
    Simulations have no real effects on the world.apokrisis

    This is just not true. You can plug a simulation into the world, for example a robot, feed it inputs, and it could drive its body and modify the world.
  • apokrisis
    7.3k
    Isn't that overly simplistic in that the point of intentional action just triggers a whole range of prearranged links in the machine and unknown and at times unknowable interfaces with the environment?magritte

    I would say it illustrates a general semiotic truth. Signs in fact have the aim of becoming binary switches. Their goal is to reduce the messy complexity of any physical situation to the simplest-possible yes/no, on/off, present/absent logical distinctions.

    People talk about symbols as representing or referring - a kind of pointing. But semiosis is about actually controlling. And if your informational connection to the material world is reduced to an on/off button, some kind of binary switch to flick, then that is semiosis at its highest level of refinement. It is how reality can be controlled the most completely with the lightest conceivable touch.

    So how much do I need to know about the mechanics of a car to drive it? One pedal to make it go and another pedal to make it stop is pleasantly minimalistic.

    And even my body is seeking a similar simplicity in its metabolic regulation. Much of its homeostatic control boils down to the switch where insulin is the molecular message that is telling the body generally to act anabolically - store excess energy. And then glucagon is there to signal the opposite direction - act catabolically and release those energy stores.

    Insulin is produced by the beta cells of the pancreas. Glucagon by neighbouring alpha cells. A simple accelerator and brake set-up to keep the body motoring along at the right pace.

    So semiosis is not passive reference, but active regulation. And for a mere symbol to control reality, reality must be brought to its most sharply tippable state. It must be holistically responsive to a black and white command.

    Stop/go. Grow/shrink. Store/burn. Etc.
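    The accelerator-and-brake idea can be put as a toy controller: regulation reduced to a bare on/off decision. The thresholds, rates, and branch labels below are made up for illustration, not physiological values.

```python
# A toy sketch of the "binary switch" idea: bang-bang regulation
# that steers a quantity with nothing but an on/off decision.
# Thresholds and rates are illustrative, not real physiology.

def regulate(level, low=70, high=110, store_rate=5, release_rate=5):
    """Return the next level after one on/off regulatory decision."""
    if level > high:      # "store" branch (insulin-like signal)
        return level - store_rate
    if level < low:       # "release" branch (glucagon-like signal)
        return level + release_rate
    return level          # within band: no signal needed

level = 130
for _ in range(10):
    level = regulate(level)
```

    Starting above the band and iterating, the level steps back inside and stays there. That is the "lightest conceivable touch" point: the controller needs only a binary decision, not a detailed model of what it regulates.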
  • Mijin
    123
    When people talk about "computers" within the context of strong AI, they usually mean Turing machines i.e. something which runs a program which can be run on any other Turing machine.
    If strong AI is true, my PC can run a consciousness program (perhaps extremely slowly, but that's beside the point) and my PC would be conscious.

    If we are just saying a computer is some machine that computes, and is not necessarily Turing complete, then sure, the Chinese room doesn't apply.
    But Searle would agree with you. He agrees that the brain is a kind of machine, and would obviously agree that we are capable of computation. The Chinese room is not about trying to prove we have a soul or whatever, it's just about whether running a word-understanding program is the same thing as understanding words, which is relevant to whether running a consciousness program is the same thing as being conscious.

    For this latter point, I am not saying that the argument necessarily works. I am just saying that the objection of the OP is possibly based on a misconception.
  • apokrisis
    7.3k
    You can plug a simulation into the world, for example a robot, feed it inputs, and it could drive its body and modify the world.hypericin

    Sure. Plug syntax into the world - make it dependent on that relationship – and away you go. But then it is no longer just a simulation, is it?

    A simulation would be simulation of that robot plugged into the world. So a simulated robot rather than a real one.

    And to be organic, this robot would have to be building its body as well as modifying its world. There is rather more to it.
  • magritte
    555
    conclusion that consciousness is bound to some kind of biological excretion is totally unwarranted.hypericin

    Searle's experimental conditions can always be tightened to meet specific objections. Also, words like consciousness, biological, and computer can be adjusted depending on the desired conclusion. Is a supercharged C3PO conscious even if it never sleeps?
  • hypericin
    1.6k
    But then it is no longer just a simulation, is it?apokrisis

    Really? As soon as you attach inputs and outputs to the robot brain, it is no longer a simulation?
    So, if the Chinese room simulated a famous Chinese general, and it received orders which the laborers laboriously translated, and then computed a reply, and based on this orders were given to troops, it is not a simulation? Seems absurd.
  • hypericin
    1.6k

    So then would Searle agree that it is possible to build a machine that thinks? Either via replicating nature (the thought experiment of replacing each brain cell one by one with a functionally equivalent replacement), or via a novel design that just fulfills the requirements of consciousness (whatever they may be)?
  • Mijin
    123

    Yes, that's right. According to the wiki:

    Searle does not disagree with the notion that machines can have consciousness and understanding, because, as he writes, "we are precisely such machines".[5] Searle holds that the brain is, in fact, a machine, but that the brain gives rise to consciousness and understanding using machinery that is non-computational. If neuroscience is able to isolate the mechanical process that gives rise to consciousness, then Searle grants that it may be possible to create machines that have consciousness and understanding. — Wiki

    The argument is just against certain computational theories of the mind, and he is just trying to show that:

    1. A mind "program" is not necessarily itself a mind
    2. We cannot infer from behaviour alone whether subjective states and understanding are taking place

    Again, I'm not saying that the argument necessarily works. I think at this point even Searle has conceded that the original argument needs further refinement. Or, alternatively, that the argument is often applied way beyond its intended scope.
  • Changeling
    1.4k
    what about the flaw in it?
  • apokrisis
    7.3k
    Really? As soon as you attach inputs and outputs to the robot brain, it is no longer a simulation?hypericin

    A robot has arms and legs, doesn't it? Or at least wheels. And sensors.

    So, if the Chinese room simulated a famous Chinese general, and it received orders which the laborers laboriously translated, and then computed a reply, and based on this orders were given to troops, it is not a simulation? Seems absurd.hypericin

    I'm confused which side of the argument you are running. Do you mean emulation rather than simulation in the OP?

    The universality of the Turing Machine allows the claim such a device can emulate any computer. But simulation is the claim that a computer is modelling the behaviour of a real physical system.
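    The emulation/simulation contrast can be made concrete with two toy functions: an emulator that reproduces another program's computation exactly, and a numerical model that merely describes a physical process. Both are illustrative sketches, not definitions.

```python
# Emulation: one program runs another program, step for step.
# Simulation: a program numerically models a physical system.
# Both examples below are illustrative toys.

def emulate(program, x):
    """Emulate a tiny 'machine' given as a list of opcodes."""
    for op, arg in program:    # the emulator reproduces the
        if op == "add":        # computation itself, exactly
            x += arg
        elif op == "mul":
            x *= arg
    return x

def simulate_cooling(temp, ambient=20.0, k=0.1, steps=10):
    """Model Newtonian cooling: the numbers describe a process."""
    for _ in range(steps):
        temp += k * (ambient - temp)   # describes heat flow,
    return temp                        # but heats nothing
```

    The emulator's output just is the emulated computation, while the cooling model outputs numbers about heat without producing any. That asymmetry is the point of the simulated-weather example.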

    Strong AI proponents may then claim consciousness is just an emulatable variety of Turing computation. Biological types like myself view consciousness as a semiotic process - a modelling relation in which information regulates physical outcomes.

    A Turing Machine designs out its physicality. And so it straightforwardly becomes “all information and no physics”. The argument continues from there.

    Now if I am a Chinese soldier and I’m following orders from a book, is the book conscious? Or is it me that is consciously applying some information to my material actions?

    And how is the Chinese room general more than the equivalent of a book in this thought experiment?
  • magritte
    555

    OK, let me say it another way,

    The narrow conclusion of the argument is that programming a digital computer may make it appear to understand language but could not produce real understanding. Hence the “Turing Test” is inadequate.
    Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics.
    The broader conclusion of the argument is that the theory that human minds are computer-like computational or information processing systems is refuted. Instead minds must result from biological processes.
    SEP article

    Consciousness can always be defined more or less stringently either to be included in or to be excluded from any finite set of experimental conditions. And if you define the Universe as a Turing machine, as you seem to do, then it has already computed everything there ever was.
  • Daemon
    591
    Hi Hypericin,

    This is a favourite topic of mine, for various reasons. It seems that the received wisdom nowadays is that digital computers are or could be conscious, and (or) that our brains are conscious because they work like digital computers. I think that Searle's argument, properly understood, shows decisively that the received wisdom is wrong in this case, and I always enjoy it when I think I know something most people don't.

    Also I'm a professional translator, and I enjoy knowing that digital computers will never be able to do what I do, which is to properly understand natural language.

    The crucial reason why digital computers (like the ones we are using to read and write our messages here) can never be conscious as a result of their programs is that the meanings of their inputs, processes and outputs are all, to use Searle's term, "observer dependent".

    You can see this in concrete, practical terms right from the start of the design of a computer, when the designer specifies what is to count as a 0 and what is to count as a 1.

    And the reason a digital computer can never understand natural language as we do is that our understanding is based on conscious experience of the world.

    Any questions?
  • Harry Hindu
    5.1k
    Simulations have no real effects on the world.apokrisis
    Predictions are simulations in your head, and predictions have causal power. We all run simulations of other minds in our minds as we attempt to determine the reasoning behind some behaviour.
  • Harry Hindu
    5.1k

    Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics.SEP article
    The problem with the "Chinese" room is that the rules in the room are not for understanding Chinese. Those are not the same rules that Chinese people learned or use to understand Chinese. So Searle is making a category error in labeling it a "Chinese Room".

    The question isn't whether a computer can understand Chinese by using those rules. It is whether anything, or anyone, can understand Chinese by using those rules. If nothing can understand Chinese using the rules in the room, then those are not the rules for understanding Chinese.
  • Daemon
    591
    The problem with the "Chinese" room is that the rules in the room are not for understanding Chinese. Those are not the same rules that Chinese people learned or use to understand Chinese.Harry Hindu

    I think you're getting things back to front. The room is set up to replicate the way a computer works, the kinds of rules it works with. It's not trying to replicate the way humans work, the kinds of rules we use to understand language. So Searle is showing why a digital computer can't understand language.
  • hypericin
    1.6k
    but that the brain gives rise to consciousness and understanding using machinery that is non-computational — Mijin

    Thanks for the quote, this is precisely the point where I disagree with Searle.
    There is a middle ground between Turing machine and physical process.
    Searle argues that a Chinese Program (a strip of magnetic tape processed by a Turing Machine) does not (necessarily) understand Chinese. He then pivots from this, to say that the brain therefore gives rise to consciousness using *non-computational* machinery.

    This then ties consciousness to biological processes, or machines which can emulate the same physical process.

    But there is a middle ground which Searle seems to overlook: computational machines which are not Turing machines, and yet are purely informational. Such a machine has no ties to the matter which instantiates it. And yet, it is not a Turing machine, it does not process symbols in order to simulate or emulate other computations. It embodies the computations. Just like us.
  • apokrisis
    7.3k
    Predictions are simulations in your head, and predictions have causal power. We all run simulations of other minds in our minds as we attempt to determine the reasoning behind some behaviour.Harry Hindu

    Of course. But you took that statement out of context. Here is the context....

    Like the weather or a carburettor, the neural collective is actually pushing and shoving against the real world.

    That then is the semantics that breathes life into the syntax. And that is also the semantics that is missing if a brain, a carburettor or the weather is reduced to a mere syntactical simulation.
    apokrisis
  • Mijin
    123
    But there is a middle ground which Searle seems to overlook: computational machines which are not Turing machines, and yet are purely informational. Such a machine has no ties to the matter which instantiates it. And yet, it is not a Turing machine, it does not process symbols in order to simulate or emulate other computations. It embodies the computations. Just like us.hypericin

    Sure but I don't think your point is actually different to Searle's, or something he's overlooked.

    Because firstly, yes, when he is talking about "computers" and "computation" he really has a narrow idea in mind of what that means. He means a Turing-emulatable (probably digital) computer. If this seems a straw man, note that this is the same idea of "computer" that is usually in mind for areas of study like computational neuroscience, and so is a key part of the main theories that he is arguing against.

    Now, if you're suggesting "what if there's a computer that's not Turing-compatible?", well, sure there is, in Searle's view: the human brain. If the human brain is not Turing-compatible, then it must be an example of a non-Turing-compatible computer (a "hypercomputer"), because humans are obviously capable of performing computation.

    Finally, if your point is about the possibility of making a non-biological hypercomputer, well we don't know that right now. Searle himself speculates that it may well be possible by, if nothing else, copying the human brain.
    But anyway, the point is that the Chinese room is not intended to show that producing a hypercomputer is impossible, and Searle himself explicitly considered it out of scope.
  • hypericin
    1.6k
    Any questions?Daemon

    I don't think this is the right approach. There is nothing special going on with observer dependence. Yes, a bit, or an assembly instruction, has no meaning in itself. But neither does a neuron. All take their meaning from the assembly of which they are a part (rather than an outside observer). In hardware, the meaning of a binary signal is determined by the hardware which processes that signal. In software, the meaning of an instruction from the other instructions in a program, which all together process information. And in a brain, the meaning of a neuron derives from the neurons it signals, and the neurons which signal it, which in totality also process information.
  • hypericin
    1.6k
    ... or something he's overlooked.Mijin
    But he seems to have overlooked it.

    If neuroscience is able to isolate the mechanical process that gives rise to consciousness, then Searle grants that it may be possible to create machines that have consciousness and understanding — Searle

    He presents a false dichotomy:
    * Consciousness cannot be emulated by a Turing machine
    * Therefore, it must be physical, not informational, and can only be reproduced with the right mechanical process.

    But what if consciousness is informational, not physical, and is emergent from a certain processing of information? And what if that emergence doesn't happen if a Turing machine emulates that processing?
  • hypericin
    1.6k
    And how is the Chinese room general more than the equivalent of a book I’m this thought experiment?apokrisis
    Books merely present information; they don't process it.

    You seemed to be making the argument that the Chinese room does not "push against the world", therefore it is a simulation and cannot be conscious.

    But my point is that any simulation can trivially be made to "push against the world" by supplying it with inputs and outputs. But it is absurd to suggest that this is enough to make a non-conscious simulation conscious.
  • Harry Hindu
    5.1k
    I think you're getting things back to front. The room is set up to replicate the way a computer works, the kinds of rules it works with. It's not trying to replicate the way humans work, the kinds of rules we use to understand language. So Searle is showing why a digital computer can't understand language.Daemon
    But that was my point... that there is only one set of rules for understanding Chinese, and both humans and computers would use the same rules for understanding Chinese. I don't see a difference between how computers work and how humans work. We both have sensory inputs and we process those inputs to produce outputs based on logical rules.

    Not only are we not acknowledging that the room does not contain instructions for understanding Chinese, but we are also ignoring the fact that the instructions are in a language that the man in the room does understand. So the question is how did the man in the room come to understand the language the instructions are written in?
  • Harry Hindu
    5.1k
    Against the AI symbol processing story, Searle points out that a computer might simulate the weather, but simulated weather will never make you wet. Likewise, a simulated carburettor will never drive a car. Simulations have no real effects on the world.

    And so it is with the brain and its neurons. It may look like a computational pattern at one level. But that pattern spinning can't be divorced from the environment that is being regulated in realtime. Like the weather or a carburettor, the neural collective is actually pushing and shoving against the real world.

    That then is the semantics that breathes life into the syntax. And that is also the semantics that is missing if a brain, a carburettor or the weather is reduced to a mere syntactical simulation.
    apokrisis

    I don't really understand what you're going on about here. Making the claim that simulations have no real effects on the world, when all you have to do is look at all of the imaginary ideas and their outcomes in the world, is absurd. Just look at all the Mickey Mouse memorabilia, cartoons, theme parks, etc. Isn't the memorabilia a physical simulation of the non-physical idea of Mickey Mouse, or is it vice versa?

    Think about how simulated crashes with real cars, crash test dummies, and a simulated driving environment have an effect on insurance rates when it comes to covering certain cars and providing consumers with crash test ratings.

    I've set up virtual machines, which are software simulations of computer hardware, for companies and their business functions on these simulated computers.

    Isn't every event in the world a simulation of other similar events? No event is exactly the same as any other; all events are unique, but they can be similar or dissimilar, and how good a simulation is depends on how similar the simulating event is to the event being simulated. We use events (simulations) to make predictions about similar events.

    A simulated carburettor isn't meant to drive a real car. It is meant to design a real car. Try designing a car well without using simulations/predictions. Weather simulations aren't meant to make you wet. They are meant to inform you of causal relationships between existing atmospheric conditions and subsequent atmospheric conditions that have an impact on a meteorologist's weather predictions. Try predicting the weather without using weather simulations. Our knowledge is just as much a part of the world, and simulations have a drastic impact on our understanding, knowledge and behaviors in the world.

    I don't see how semantics could ever be divorced from syntax, or vice versa, or how you could have one without the other. Semantics is just as important a rule as syntax. Effects mean their causes and causes mean their effects, so the semantics is there in every causal process. The rules governing which causes lead to which effects, and vice versa, would be the syntax - the temporal and spatial order of which effects are about which causes. If some system is following all of the rules, then how can you say that it doesn't understand the semantics? Defining variables in a computer program is equivalent to establishing semantic relationships that are then applied to some logical function.
  • Mijin
    123
    He presents a false dichotomy:
    * Consciousness cannot be emulated by a Turing machine
    * Therefore, it must be physical, not informational, and can only be reproduced with the right mechanical process.

    But what if consciousness is informational, not physical, and is emergent from a certain processing of information? And what if that emergence doesn't happen if a Turing machine emulates that processing?
    hypericin

    Good point.
    I need to give this more thought...does the Chinese room apply if consciousness is informational, but not Turing-compatible?
    But at first glance it does appear that Searle made a claim there that goes beyond what the Chinese room actually demonstrates.
  • Harry Hindu
    5.1k
    Therefore, it must be physical, not informational, and can only be reproduced with the right mechanical process.hypericin
    I don't know what this means. Present physical states are informative of prior physical states. I don't see how you can have something that is physical that is also absent information. The only process that is needed for information to be present is the process of causation.
  • apokrisis
    7.3k
    But my point is that any simulation can trivially be made to "push against the world" by supplying it with inputs and outputs. But it is absurd to suggest that this is enough to make a non-conscious simulation conscious.hypericin

    A simulation processes information. A living organism processes matter. Its computations move the world in a detailed way such that the organism itself in fact exists, suspended in its infodynamic relationship.

    So it is not impossible that this story could be recreated in silicon rather than carbon. But it wouldn’t be a Turing Machine simulation. It would be biology once again.

    Howard Pattee wrote a good paper on the gap between computationalism and what true A-life would have to achieve.

    Artificial life and mind are not ruled out. But the “inputs and outputs” would be general functional properties like growth, development, digestion, immunology, evolvability, and so on. The direct ability to process material flows via an informational relationship.

    And a TM has that connection to dynamics engineered out. It is not built from the right stuff. It is not made of matter implementing Pattee’s epistemic cut.

    This undifferentiated view of the universe, life, and brains as all computation is of no value for exploring what we mean by the epistemic cut because it simply includes, by definition, and without distinction, dynamic and statistical laws, description and construction, measurement and control, living and nonliving, and matter and mind as some unknown kinds of computation, and consequently misses the foundational issues of what goes on within the epistemic cuts in all these cases. All such arguments that fail to recognize the necessity of an epistemic cut are inherently mystical or metaphysical and therefore undecidable by any empirical or objective criteria.

    Living systems as-we-know-them use a hybrid of both discrete symbolic and physical dynamic behavior to implement the genotype-phenotype epistemic cut. There is good reason for this. The source and function of genetic information in organisms is different from the source and function of information in physics. In physics new information is obtained only by measurement and, as a pure science, used only passively, to know that rather than to know how, in Ryle's terms. Measuring devices are designed and constructed based on theory. In contrast, organisms obtain new genetic information only by natural selection and make active use of information to know how, that is, to construct and control. Life is constructed, but only by trial and error, or mutation and selection, not by theory and design. Genetic information is therefore very expensive in terms of the many deaths and extinctions necessary to find new, more successful descriptions. This high cost of genetic information suggests an obvious principle that there is no more genetic information than is necessary for survival.

    If artificial life is to inform philosophy, physics, and biology it must address the implementation of epistemic cuts. Von Neumann recognized the logical necessity of the description-construction cut for open-ended evolvability, but he also knew that a completely axiomatic, formal, or implementation-independent model of life is inadequate, because the course of evolution depends on the speed, efficiency, and reliability of implementing descriptions as constraints in a dynamical milieu.

    https://www.researchgate.net/publication/221531066_Artificial_Life_Needs_a_Real_Epistemology