we can 'hold something to be true'... despite our own propensity to act as though it were true.
— praxis
How would you know? — Isaac
Ah. So all this is just to say that sometimes folk say "I feel certain..." as equivalent to "I believe...", and this is distinct from "I am certain...".
Why didn't you say? — Banno
It's good exercise but not a lot of meat on those bones.
Others find it filling. — ZzzoneiroCosm
What we'd be arguing about is the best rendering into language of exactly the same neural network, where 'best' might be defined by the rules of rational thinking, or ethics, or just social function. — Isaac
Without an internal model, the brain cannot transform flashes of light into sights, chemicals into smells and variable air pressure into music.
They are embodied, whole brain representations...
Evers and Lakomski give several examples of ANNs in use (Evers and Lakomski 1996, pp. 122 – 127). Their great strength is in pattern recognition, for example in differentiating between a sonar image from a rock and a mine; or in recognising valid and invalid inferences in propositional calculus.
ANNs function in a way that is somewhat different from a typical computer – what is called a “von Neumann machine”. A von Neumann machine has a central processor and a memory, and executes the instructions in the memory one at a time, in sequence. ANNs do not use memory in this way, instead storing their “instructions” in the weightings of the connections between neurons. However, ANNs are subject to the same rules of computation that apply to all computational devices – they are limited Turing machines. “The recurrent perceptron network… is (yet another) form of a Turing machine”. (Hyötyniemi 1996, parenthetic comment in original)
Their advantage, usually described as “pattern recognition”, is that they are able to represent what engineers call “hard problems” (Works 1992). Hard problems are those that are non-linear; more specifically, those that cannot satisfy the general equation
F(aX + bY) = aF(X) + bF(Y)
Complex systems, such as the Mandelbrot set or turbulence in a fluid, do not satisfy this linear equation. Complex systems vary so greatly that there are no general mathematical tools with which to analyse them. But non-linear systems can be modelled by the weightings of a neural network.
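A quick way to see what failing the linearity condition amounts to is to test F(aX + bY) = aF(X) + bF(Y) directly. The sketch below does this for the logistic sigmoid, a typical neuron activation function; the particular scalars and inputs are arbitrary illustrative values, not taken from any of the cited texts:

```python
# Test the linearity condition F(aX + bY) == a*F(X) + b*F(Y)
# for the logistic sigmoid, a common neuron activation function.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

a, b = 2.0, 3.0   # arbitrary scalars
X, Y = 0.5, -1.0  # arbitrary inputs

lhs = sigmoid(a * X + b * Y)
rhs = a * sigmoid(X) + b * sigmoid(Y)

# The two sides differ, so the sigmoid is non-linear: a network built
# from such units can represent "hard problems" in Works's sense.
print(lhs, rhs)
print(abs(lhs - rhs) > 1e-6)  # True
```

Any genuinely linear F would make the two sides agree for every choice of a, b, X and Y; it is precisely the failure of this identity that puts such systems beyond the reach of the general linear tools.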
“We can learn a great deal about complex non-linear systems even though we have no comprehensive mathematical tools. One way to describe this knowledge is to state rules describing the behaviour of a system. This is the approach taken by the author of the textbook, or a knowledge engineer writing an AI program. Another way is to map significant states of the system into some more compact internal representation so that later occurrences of these states can be recognized. This is the approach used in teaching by example, or by an ANNs.” (Works 1992, p. 37)
What form does this representation take? While it is not in the sequential form of a von Neumann machine’s instructions, it can certainly be represented. The weightings of the neurons are given the indexes pi, qi, ri. The output of the network is given by the equation:
{image of matrix}
This example, from Paul Churchland (Churchland 1988), is itself used by Evers and Lakomski (Evers and Lakomski 1996, p. 122). The state of the artificial neural network can be stated precisely by such a matrix. Training an ANN is simply adjusting the values of pi, qi, ri until, for a desired set of inputs x, y, z, the desired outputs a, b, c, d are achieved. This is done by repeated application of a function called the delta rule (Picton 2000). The delta rule does no more than average out the errors for each weighting, in order to find an optimal setting for a given training set.
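As a sketch of what "adjusting the values of pi, qi, ri" amounts to, here is a minimal single-layer network trained by the delta rule. The three inputs and four outputs mirror the x, y, z and a, b, c, d of the text, but the learning rate, training data and linear units are invented for illustration; this is not Churchland's example:

```python
# Minimal single-layer network trained with the delta rule.
# The weights w[j][i] play the role of the pi, qi, ri in the text:
# once training finishes, they fully specify the network's function.
import random

random.seed(0)

n_in, n_out = 3, 4          # inputs x, y, z -> outputs a, b, c, d
rate = 0.1                  # learning rate (illustrative value)

# One made-up training pair: the inputs and the outputs we want for them.
inputs = [1.0, 0.5, -0.5]
target = [0.2, 0.8, -0.1, 0.4]

w = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

def forward(x):
    """Each output is the weighted sum of the inputs (linear units)."""
    return [sum(w[j][i] * x[i] for i in range(n_in)) for j in range(n_out)]

for _ in range(1000):
    out = forward(inputs)
    for j in range(n_out):
        error = target[j] - out[j]               # delta rule: nudge each
        for i in range(n_in):                     # weight in proportion to
            w[j][i] += rate * error * inputs[i]   # its input and the error

print(forward(inputs))  # close to the target after training
```

Note that the final values of w are only available after the training loop has run, which anticipates the point below about the matrix description being post hoc.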
It is important to note that the whole of the workings of an artificial neural network can be represented using this mathematics. Therefore it is possible to represent the function of the neural network symbolically (or linguistically, as Evers and Lakomski prefer to call it). The claim that “this would be a linguistic representation that the network itself makes no use of” (Evers and Lakomski 1996, p. 122) is difficult to understand, since pi, qi, ri are plainly linguistic representations, and are plainly the values given to the weightings that the network does make use of. They are precisely what each neuron uses in order to function, represented linguistically.
Again, in Doing Educational Administration they claim that
“…neural network representations of language raise the possibility that tacit knowledge of language is not rules based at all. For neural nets are not rules based.” (Evers and Lakomski 2000, p. 30)
But we have seen that neural networks are clearly rule-based systems, having shown how those rules can be represented in a matrix.
Evers and Lakomski wish to use this to show that “all ‘knowing that’ is really ‘knowing how’”, taking their model for ‘knowing how’ as neural networks. We can just as easily claim, since neural networks can be represented by a rule, that ‘knowing how’ is actually ‘knowing that’.
However, it does seem evident that the description of the function of a neural network in a matrix is post hoc. The values assigned to the matrix are determined by running the network, and this appears to be the only way of determining them. So although the values are indeed determined by a set of rules that are linguistic in nature, those values are the result of applying the rule. The know-how embedded in a neural network can be turned into theoretical knowledge, in the form of a matrix, only after the network has been run through a training cycle. In this sense, theoretical knowledge is post hoc. — Banno
Weren't we just talking about pro-lifers getting abortions? I suspect the inverse also occurs, pro-choicers not getting an abortion because it feels immoral, like murder. — praxis
If belief is merely a social construct then we can abandon it should the need arise — praxis
Having read your post to Banno, you appear to have a wildly different conception of belief than I do. — praxis
A pointless comment eliciting this otherwise pointless response. — Janus
Pointless to you perhaps but that in no way makes my point pointless, just inconvenient for your point. — universeness
Right on Janus. We are not dealing here with certitudes. We are dealing with numerous instances of careless speech and badass, ambiguous words and expressions. — Ken Edwards
Painted using a matte house paint with the least possible gloss, on stretched canvas, 3.5 meters tall and 7.8 meters wide, in the Museo Reina Sofia in Madrid.
An anti-war statement displaying the terror and suffering of people and animals.
...an intermediary step in the translation where we talk about 'models of...' between the world we're trying to talk about (the actual pub) and the means by which the brain gets us to walk to the end of the road to get to it (the dynamic neural networks responsible). — Isaac
I might phrase the point as follows. We do not see the model; rather, the model is our seeing the world. It's not that brains construct models and the mind sees the model rather than the world, but that seeing the world can be described as constructing a model. Rather than holding their function in 'wiring', neural networks hold their function in behaviour. — Isaac
↪praxis That's more interesting. Did you parse these into actions on purpose? — Banno
You are slicing up the meanings of an inherently confusing batch of words into tinier and tinier shades of meaning. You are slicing them up into such teeny-weeny shades of meaning that you are going to end up with hamburger. — Ken Edwards
Sure, you can imagine stuff. But you are looking at your screen now; you are not looking at a model of your screen constructed by your brain. — Banno
Of what? — Banno
One believes some statement when one holds it to be true.
One is certain of some statement when one does not subject it to doubt.
One has faith in a statement when one believes it regardless of the evidence. — Banno