    FUNCTIONALISM AND SUBSTRATE
    David Boothroyd

    I would be very interested to hear why (and where) the following is flawed - if it is!

    It seems to me that there is a significant problem for what many philosophers regard as the best theory of mind we have today – functionalism. Functionalism claims that mental events such as experiences and thoughts are constituted by their causal relations to one another and to sensory inputs and behavioural outputs. The ‘functional elements’ are thus entities defined by the causal/relational roles they play within the mind. A key aspect of functionalism is the idea that mental states can be fully accounted for without reference to the underlying physical medium in which they are implemented. Obviously, though, the high-level elements require a substrate on which to run – for example, neurons. Now, suppose we discover that this substrate is not irrelevant to the behaviour of the functional elements after all; instead, it affects their behaviour in some way. Clearly, then, our initial functional account has not captured the complete nature of the system. But that is no problem: we can examine the substrate – at the appropriate level – find out how it affects the functional elements, and build those further causal effects into the functional model.
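    The substrate-independence claim above can be made concrete with a small sketch. Everything here (the state names, the transition table, the `Substrate` class) is my own illustrative invention, not anything from the functionalist literature: the point is only that when a mental state is defined purely by its causal role, the same specification runs unchanged on any substrate that merely stores state.

```python
# Illustrative sketch of a purely functional specification of a mental
# state: 'pain' is whatever state is caused by tissue damage and causes
# wincing. The spec mentions only causal roles, never physical details.
FUNCTIONAL_SPEC = {
    # (current state, sensory input) -> (next state, behavioural output)
    ("rest", "tissue_damage"): ("pain", "wince"),
    ("pain", "soothe"): ("rest", "relax"),
}

class Substrate:
    """A physical medium whose only job is to hold the current state."""
    def __init__(self):
        self.state = "rest"

def apply_role(substrate, sensory_input):
    """Advance the system using the functional spec alone.

    Note that the substrate's physical character is never consulted:
    this is exactly the independence that functionalism asserts.
    """
    new_state, output = FUNCTIONAL_SPEC[(substrate.state, sensory_input)]
    substrate.state = new_state
    return output

# Two (imagine physically different) substrates behave identically,
# because the spec never looks below the functional level.
a, b = Substrate(), Substrate()
print(apply_role(a, "tissue_damage"))  # wince
print(apply_role(b, "tissue_damage"))  # wince
```

    The worry raised above is then easy to state in these terms: if the real substrate did more than hold state – if `apply_role` secretly depended on it – the spec would be incomplete.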

    The trouble is, this could happen again. The neurons forming the original substrate must themselves be composed of parts, and it may turn out that these ‘micro-neurons’ also have causal effects, which must in turn be incorporated into the functional model. The critical problem for this scenario – and hence for functionalism – is this: how can we know when we have reached the limit, the point at which we can be certain that the ‘bottom-level’ substrate we have ended up with is indeed the bottom? To capture the complete behaviour of a system in a functional description, we must be sure that the substrate supporting the functional elements ‘above it’ plays no part in their behaviour. In short, we must be certain the substrate plays exactly the same (non-functional) role as the grid plays in a cellular automaton. But how can we do that? It seems it will always be possible that another, more ‘basic’ layer is waiting to be discovered, one that plays some causal role and hence must be included in the functional model. If so, we can never be sure that our functional description is complete.
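    The cellular-automaton analogy is worth spelling out, since it is the clearest case of a genuinely inert substrate. Below is a minimal one-dimensional automaton (I have arbitrarily chosen Rule 110 as the example; any rule would do). The grid does nothing but hold cell states; every transition is fixed by the rule table and neighbouring states alone.

```python
# A one-dimensional cellular automaton (Rule 110, chosen arbitrarily).
# The grid is a passive substrate: it stores states but contributes no
# causal effects of its own. The rule table fully determines evolution.
RULE_110 = {
    # (left neighbour, self, right neighbour) -> next state
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    """One synchronous update with wrap-around boundaries.

    The update consults only the rule table and the cells' states –
    nothing about the grid itself enters the dynamics.
    """
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

row = [0, 0, 0, 1, 0, 0, 0]
for _ in range(3):
    row = step(row)
```

    This is the standard functionalism demands of a bottom-level substrate: if the grid had causal effects of its own, the rule table alone would no longer determine the system's evolution – which is precisely the possibility raised above for neurons and their parts.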

    Some scientists are aware of this kind of problem – for example, Susan Greenfield in her 1995 book Journey to the Centers of the Mind (W. H. Freeman):

    “Even if we had the awesome technology to transfer every piece of information contained in the brain to an artificial system, it would still not be the same as a real brain in terms of how that information was used. A central issue is that in the brain the hardware and software are effectively one and the same (my emphasis). The size and shape of a brain cell are critical to how it operates: these physical characteristics will determine its efficiency in integrating incoming electrical blips into an all-or-none signal. This signal will then become one of up to 100,000 inputs to the next neuron along. However, the overall size and shape of a neuron is highly dynamic, subject to change in accordance with how hard the cell is working, which in turn is dependent on how actively the neuron is being stimulated by other neurons. The physical features of the neurons, and the network they form with other neurons – the hardware – is thus impossible to disentangle from the activity of those neurons during certain brain operations – the software. This feature of intertwined structure and function must be kept in mind if we dream, as many do, of building a machine ‘just like the brain’, or even better.” (Greenfield 1995, p. 73)