2) The best theory of what that kind of stuff actually is, is what you would expect biologists to produce. And the standard answer from biologists is that biology is material dynamics regulated by semiotic code - unstable chemistry constrained by evolving memory. Agreed? — apokrisis
3) Then the question is whether computation is the same kind of stuff as that, or a fundamentally different kind of stuff. And as Pattee argues (not from quantum measurement, but from his own 1960s work on biological automata), computation is physics-free modelling. It is the isolated play of syntax that builds in its presumption of being implementable on any computationally suited device. And in doing that, it explicitly rules out any external influences from the operation of physical laws or dissipative material processes. Sure, there must be hardware to run the software, but it is axiomatic to universal computation that the nature of the hardware is irrelevant to the play of the symbols. Being physics-free is what makes the computation universal. Agreed? — apokrisis
from von Neumann (1955, p. 352). He calls the system being measured, S, and the measuring device, M, that must provide the initial conditions for the dynamic laws of S. Since the non-integrable constraint, M, is also a physical system obeying the same laws as S, we may try a unified description by considering the combined physical system (S + M). But then we will need a new measuring device, M', to provide the initial conditions for the larger system (S + M). This leads to an infinite regress; but the main point is that even though any constraint like a measuring device, M, can in principle be described by more detailed universal laws, the fact is that if you choose to do so you will lose the function of M as a measuring device. This demonstrates that laws cannot describe the pragmatic function of measurement even if they can correctly and completely describe the detailed dynamics of the measuring constraints.
Given the above - that biological stuff is fundamentally different from computational stuff in a completely defined fashion - the burden is then on the computationalist to show that computation could still be the right stuff in some way. — apokrisis
This is another unhelpful idée fixe you have developed. As said, Pattee's theoretical formulation of the epistemic cut arose from being a physicist working on the definition of life in the 1950s and 1960s, as DNA was being discovered and the central mechanism of evolution was becoming physically clear. From von Neumann - who also had an interest in self-reproducing automata - Pattee learnt that the epistemic cut was also the same kind of problem as had been identified in quantum mechanics as the measurement problem. — apokrisis
The epistemic cut or the distinction between subject and object is normally associated with highly evolved subjects with brains and their models of the outside world as in the case of measurement. As von Neumann states, where we place the cut appears to be arbitrary to a large extent. The cut itself is an epistemic necessity, not an ontological condition. That is, we must make a sharp cut, a disjunction, just in order to speak of knowledge as being "about" something or "standing for" whatever it refers to. What is going on ontologically at the cut (or what we see if we choose to look at the most detailed physics) is a very complex process. The apparent arbitrariness of the placement of the epistemic cut arises in part because the process cannot be completely or unambiguously described by the objective dynamical laws, since in order to perform a measurement the subject must have control of the construction of the measuring device. Only the subject side of the cut can measure or control.
Fine. Now present that evidence. — apokrisis
No idea what you are talking about here. — apokrisis
But that's after the fact: the killing will have already either taken place or not taken place. My point stands: no amount of legislation can prevent someone determined enough from going out and killing another human, whether that's an adult or a baby inside of themselves. — Sapientia
This is another straw man. What I actually said is that in most cases, a better resolution is available, and I stand by that claim. — Sapientia
Feel free to go over my part in the previous discussion in order to better understand it. You've made quite a few big assumptions about my position which are in fact incorrect. Yes, there are exceptional circumstances, and yes, in places like the U.K. where I'm from, it is true that up to a point, pregnant women have a legal right to decide to have an abortion (We've even been over the actual wording and stated conditions in the relevant legislation), and I accept that there can be morally acceptable circumstances, although I would emphasise that they are acceptable, but not desirable or ideal. — Sapientia
Notice that subjectivity has already appeared! AlphaGo has no subjectivity. — tom
You are completely missing the point. It is impossible to transfer knowledge from one mind to another. Minds construct new knowledge from artefacts, problem-situations, background knowledge, by a fundamentally creative ability.
So, the creator of the artefact, and the interpreter of the artefact, are engaged in an inter-subjective dialogue. Each person is conjecturing theories about what each other means or interprets. Perfection and justification are impossible. — tom
AlphaGo can be as efficient as it likes. It will always fail the Chinese Room. It cannot create the knowledge that it is playing Go! — tom
You're right that there is no legislation which can prevent women from killing that which is living and growing inside of them, an unborn human, if they're determined enough, just as there is no legislation which can prevent women or anyone else from killing anyone else if they're determined enough. But neither are good things which should be encouraged. It is an unfortunate fact that murders and abortions occur, when in most cases, a better resolution is available. Just as someone who is contemplating murder should have access to counseling, so should someone contemplating abortion, and that is already the case in the developed world, as far as I'm aware. — Sapientia
That is irrelevant because you are talking about an already fully developed biology. The neural circuitry that was the result of having a hand would still be attempting to function. Check phantom limb syndrome.
Then imagine instead culturing a brain with no body, no sense organs, no material interaction with the world. That is what a meaningful state of disembodiment would be like. — apokrisis
I was talking about the biological basis of the epistemic cut - something we can examine in the lab today. — apokrisis
We also know that matter can compute... surely I am not expected to prove as much? Again, we know that biology is the right stuff for making minds. You are not expecting me to prove that? — apokrisis
And we know that biology is rooted in material instability, not material stability? I've given you the evidence of that. And indeed - biosemiotically - why it has to be the case. — apokrisis
And I've made the case that computation only employs syntax. It maps patterns of symbols onto patterns of symbols by looking up rules. There is nothing in that which constitutes an understanding of any meaning in the patterns or the rules? — apokrisis
So that leaves you having to argue that despite all this, computation has the right stuff in a way that makes it merely a question of some appropriate degree of algorithmic complication before it "must" come alive with thoughts and feelings, a sense of self and a sense of purpose, and so you are excused of the burden of saying just why that would be so given all the foregoing reasons to doubt. — apokrisis
No moral calculus has the same force as actual mathematical statements when it comes to accepting their truth. — Moliere
Do you mean a dualistic folk psychology notion of mind? I instead take the neurocognitive view that what you are talking about is simply the difference between attentive and habitual levels of brain processing. And these are hardly completely autonomous, but rather completely interdependent. — apokrisis
But the burden of proof is on you here. The only sure thing is that whatever you really mean by intelligence is a product of biology. And so biological stuff is already known to be the right stuff. — apokrisis
This misrepresents my argument again. My argument is that there is a fundamental known difference between hardware and wetware as BC puts it. So it is up to you to show that this difference does not matter here. — apokrisis
That would be why it seems easy to work from the top down. Computers are just mechanising what is already us behaving as if we were mechanical. But as soon as you actually dig into what it is to be a biological creature in an embodied relation with a complex world, mechanical programs almost immediately break down. They are the wrong stuff.
Neural networks buy you some extra biological realism. But then you have to understand the detail of that to make judgements about just how far that further exercise is going to get. — apokrisis
Nice example of misunderstanding a cultural artefact. — tom
And again it seems. The leap to computational universality (the hardware problem) is fully understood. The leap to universal explainer (the software problem) is not understood. — tom
I tend to agree with this, and it would be nice if it were possible to talk in a more nuanced way about things. But I think it misses an important feature of the lived experience of women and of black people in the culture. — unenlightened
I would like to distinguish racism as a belief system held by a few and not implemented in social institutions beyond marginal groups, from prejudice, an unconscious attitude that alters behaviour based on race or gender as the case may be. This latter is what your account leaves out, and since it is more or less universal, it is quite devastating in its effects.
Mrs Un goes into a shop, and is immediately under suspicion; if there is a random check at the airport or the roadside, she is randomly chosen. Every relationship is tainted by not only racial prejudice, but also the performance of non-prejudice. White women in particular go out of their way to talk and act friendly, in a somewhat patronising way that quickly turns to resentment when it is not particularly appreciated. They want to have her as a friend as a symbol of their lack of prejudice - but at a safe distance, especially from their menfolk.
This plays out in wider society cumulatively; each little incident is deniable, no racist language is used, no views expressed, but when one dude is stopped twenty times in his car by the police, and another never, with no violation recorded for either, there is something going on statistically that is unidentifiable in any single incident.
Given that our recent past is that white supremacy and patriarchy were institutionally sanctioned and enforced, it is inevitable that there is a legacy of prejudice. And given the experience of this prejudice alongside its universal denial, it is inevitable that there is some anger and paranoia amongst the sufferers. It is especially the denial of the existence of a problem that is the daily experience of black people that becomes - maddening. — unenlightened
That is a good point, maybe you are right. So you hope to discover the software by examining the hardware? The trouble is, since we don't know what we're looking for, how could we recognise it? — tom
Back to epistemology. If we want to create an AGI then the problem of how to create knowledge will have to be solved. You can't transfer knowledge from one mind to another. Instead one mind creates cultural artefacts, from which the other mind discerns something not contained within the artefact - its meaning. As Karl Popper said, "It is impossible to speak in such a way that you cannot be misunderstood." This, by the way, dispenses with the Chinese Room. — tom
I am not so sure. It has been suggested that the human brain evolved the way it did in order to facilitate efficient knowledge transfer. Humans are unique (i.e. they are the last remaining species) in that they interpret meaning and intention - i.e. they create knowledge from artefacts and behaviours.
Now, here's the amazing thing if this account of our evolutionary history is true: once you can create knowledge, there is no stopping you. This is a leap to universality. Once you are an explainer you are automatically a universal explainer because the same mechanisms are involved.
Prior to the leap to universal explainer, there must have been another leap - the leap to computational universality in the human brain. This is a hardware problem, which we have long solved! — tom
In many social science departments of many western universities, they now teach that the west is fundamentally patriarchal, and fundamentally white supremacist. Racism is "power + privilege". They accept it as a brute fact that whites have all the power and all the privilege in the west, making all white people racist. It's hard to believe that this comes out of actual university curriculum, but it's becoming more and more evident. We're being told that as white men we're unaware of the naturally ingrained systems of oppression, which can be complex and subtle, that benefit us at the expense of women, of people of color, even more so at the expense of women of color (and so on with a litany of possible identities which might entail facing any sort of obstacle in life which white men might not face). "Intersectionality" they call it, which is in itself worthy of its own discussion. — VagabondSpectre
In some ways, any would-be leader of the BLM movement is going to somehow have to put the "black" in "#BlackLivesMatter". It is very difficult to do this without amplifying a racial lens, but my own approach would be to address the issue of police use of force without focusing on racism or race as a fundamental causative factor behind the problem, and to also address the larger issue facing the black community, which leads to many of the events which spark BLM protests, which is crime in and of itself in black communities. The discussion must necessarily involve economics, politics and culture, and while it runs the risk of being obfuscated by likewise presuming that the economic, political, and cultural realities facing many black communities are symptoms of that larger white supremacist system contemporary schools of thought point to, it could still bear fruit. In summation, the BLM rhetoric at large is not outwardly "us against them"; it is rather an idea that lurks just under its surface, and because of lost complexity and some inherently evocative underpinnings, it's now beginning to rear its ugly head. — VagabondSpectre
Great. So in your view general intelligence is not wedded to biological underpinnings. You have drunk the Kool-Aid of 1970s cognitive functionalism. When faced with a hard philosophical rebuttal to the hand-waving promises that are the currency of computer science as a discipline, suddenly you no longer want to care about the reasons AI is history's most over-hyped technological failure. — apokrisis
That is nothing like what I suggest. Instead I say "mind" arises out of that kind of lowest level beginning after an immense amount of subsequent complexification.
The question is whether computer hardware can ever have "the right stuff" to be a foundation for semantics. And I say it can't because of the things I have identified. And now biophysics is finding why the quasi-classical scale of being (organic chemistry in liquid water) is indeed a uniquely "right" stuff. — apokrisis
Well, I think I get it... Pattee argues that life may be like a unique state of matter at the quantum scale and we just might not be able to tell because of the measurement problem (I know it is much more complicated than that, I just could not think of a better analogy for brevity's sake). I explained this fairly carefully in a thread back on PF if you are interested....
http://forums.philosophyforums.com/threads/the-biophysics-of-substance-70736.html
So here you are just twisting what I say so you can avoid having to answer the fundamental challenges I've made to your cosy belief in computer science's self-hype. — apokrisis
I thought you were referring to the gaudy self-publicist, Jeff Hawkins, of hierarchical temporal memory fame - https://en.wikipedia.org/wiki/Hierarchical_temporal_memory
But Bayesian network approaches to biologically realistic brain processing models are of course what I think are exactly the right way to go, as they are implementations of the epistemic cut or anticipatory systems approach.
Look, it's clear that you are not even familiar with the history of neural networks and cybernetics within computer science, let alone the way the same foundational issues have played out more widely in science and philosophy.
Don't take that as an insult. It is hardly general knowledge. But all I can do is point you towards the arguments.
And I think they are interesting because they are right at the heart of everything - being the division between those who understand reality in terms of Platonic mechanism and those who understand it in terms of organically self-organising processes. — apokrisis
What I see as his main issue is that he believes there is something like the measurement problem when dealing with the origin of life. Right. Pattee requires you to understand physics as well as biology. ;) But that is what makes him the most rigorous thinker in this area for my money. — apokrisis
Good grief. Not Mr Palm Pilot and his attempt to reinvent Bayesian reasoning as a forward modelling architecture? — apokrisis
Pattee has written a ton of papers which you can find yourself if you google his name and epistemic cut.
This is one with a bit more of the intellectual history.... http://www.informatics.indiana.edu/rocha/publications/pattee/pattee.html
But really, Pattee won't make much sense unless you do have a strong grounding in biological science. And much of the biology is very new. If you want to get a real understanding of how different biology is in its informational constraint of material instability, then this is a good new pop sci book.... — apokrisis
They 'know' how to behave apparently, but it is implausible that they could know that they know. But, this is also true, it is commonly thought, of most or even all animals.
Perhaps to know that you know, or at least think that you know, requires symbolic language; the kind of self-reflection that it provides. The same could be said, I think, about knowing facts, in the discursive sense at least, and also being able to conceive of ostensive facts, and the idea that things may not be as they seem. — John
No, this is simply wrong... unless you mean to suggest that subatomic particles are intentional beings. Then you don't understand the point. Probability, possibility, and chance, only exist in relation to an intentional being. That is why it is necessary to bring in the intentional being. — Metaphysician Undercover
Epistemic possibility, logical possibility, exists only as a property of the intentional being's knowledge. Ontological possibility exists only in relation to what the intentional being can and cannot do. That the intentional being can flip a coin to produce a 50/50 probability, roll a die, create a lottery, or create a stochastic system, all of these being artificial creations of randomness, provides no evidence that such a thing as randomness could exist naturally. Therefore any claim that probability is something natural is what is unjustified. — Metaphysician Undercover
That is the question. Does it actually learn its own semantics or is there a human in the loop who is judging that the machine is performing within some acceptable range? Who is training the machine and deciding that yes, it's got the routine down pat? — apokrisis
The thing is that all syntax has to have an element of frozen semantics in practice. Even a Turing Machine is semantic in that it must have a reading head that can tell what symbol it is looking at so it can follow its rules. So semantics gets baked in - by there being a human designer who can build the kind of hardware which ensures this happens in the way it needs to.
So you could look at a neural network as a syntactical device with a lot of baked-in semantics. You are starting to get some biological realism in that open learning of that kind takes place. And yet inside the black box of circuits, it is still all a clicking and whirring of syntax, as no contact with any actual semantics - no regulative interactions with material instability - is taking place (a toy sketch of this bare rule lookup follows after this post).
Of course my view relies on a rather unfamiliar notion of semantics perhaps. The usual view is based on matter~mind dualism. Meaning is held to be something "mental" or "experiential". But then that whole way of framing the issue is anti-physicalist and woo-making.
So instead, a biosemiotic view of meaning is about the ability of symbol systems - memory structures - to regulate material processes. The presumption is that materiality is unstable. The job of information is to constrain that instability to produce useful work. That is what mindfulness is - the adaptive constraint of material dynamics.
And algorithms are syntax with any semantics baked in. The mindful connection to materiality is severed by humans doing the job of underpinning the material stability of the hardware that the software runs on. There is no need for instability-stabilising semantics inside the black box. An actual dualism of computational patterns and hotly-switching transistor gates has been manufactured by humans for their own purpose. — apokrisis
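To make that "bare syntax" point concrete, here is a minimal sketch of a Turing-machine step as pure rule lookup. The rule table is a made-up unary incrementer, not any machine from the discussion; the point is only that nothing in the loop "knows" what the symbols mean - the head just matches shapes against a table.

```python
# Minimal Turing-machine sketch: the "play of syntax" as pure rule lookup.
# Hypothetical rule table (a unary incrementer), for illustration only.

# rules: (state, symbol_read) -> (new_state, symbol_written, head_move)
RULES = {
    ("scan", "1"): ("scan", "1", +1),   # walk right over the existing 1s
    ("scan", "_"): ("halt", "1", 0),    # overwrite the blank, then halt
}

def run(tape, state="scan", head=0):
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"
        state, written, move = RULES[(state, symbol)]  # pure syntax: table lookup
        if head < len(tape):
            tape[head] = written
        else:
            tape.append(written)
        head += move
    return tape

print(run(list("111_")))  # ['1', '1', '1', '1'] - "3 + 1", but only to us
```

On a deterministic substrate this prints the same tape every run; that the 1s count anything at all is a fact about the human reading the output, not about the machine.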
Yes. But the robot hand is still a scaled-up set of digital switches. And a real hand is a scaled-up set of molecular machines. So the difference is philosophically foundational even if we can produce functional mimicry.
At the root of the biological hand is a world where molecular structures are falling apart almost as soon as they self-assemble. The half-life of even a sizeable cellular component like a microtubule is about 7 minutes. So the "hardware" of life is all about a material instability being controlled just enough to stay organised and directed overall.
You are talking about a clash of world views here. The computationalist likes to think biology is a wee bit messy - and it's amazing its wet machines can work at all, really. A biologist knows that a self-organising semiotic stability is intrinsically semantic and adaptive. Biology knows itself, its material basis, all the way down to the molecules that compose it. And so it is no surprise that computers are so autistic and brittle - the tiniest physical bug can cause the whole machine to break down utterly. The smallest mess is something a computer algorithm has no capacity to deal with.
(Thank goodness again for the error correction routines that human hardware designers can design in as the cotton wool buffering for these most fragile creations in all material existence). — apokrisis
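As a sketch of what that designed-in "cotton wool" amounts to in the simplest case, here is a toy version of majority voting over redundant copies (triple modular redundancy). The scheme and numbers are illustrative only, not any real memory controller.

```python
import random

# Toy "cotton wool": store three copies of every bit and read back by
# majority vote, so a single flipped copy is silently corrected.

def store(bits):
    return [[b, b, b] for b in bits]

def corrupt(copies, flip_prob=0.05):
    # Unstable hardware: each stored copy may flip independently.
    return [[c ^ (random.random() < flip_prob) for c in triple]
            for triple in copies]

def read(copies):
    # Majority vote: 2-of-3 agreement wins.
    return [int(sum(triple) >= 2) for triple in copies]

random.seed(1)
data = [1, 0, 1, 1, 0, 0, 1, 0]
print(read(corrupt(store(data))) == data)  # usually True despite flips
```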
But the question is how can an algorithm have semantic understanding in any foundational sense when the whole point is that it is bare isolated syntax?
Your argument is based on a woolly and dualistic notion of semantics. Or if you have some other scientific theory of meaning here, then you would need to present it. — apokrisis
The point though, is that each of these two types of "possibilities" only exist in relation to the intentional being. In relation to the past, there is possibility with respect to the intentional being's knowledge. In relation to the future, there is possibility with respect to what the intentional being can do. Remove the intentional being, and there is no such possibility of either type, though we could assume that the world would continue to exist. — Metaphysician Undercover
Second, I think there is a strong tendency to underrate animal (wet) intelligence. It isn't learning how to recite Beowulf from memory that is the only impressive human achievement. It's also remembering the odor of the room where we learned Anglo-Saxon and now feel nostalgia for that faint musty odor when we recite Beowulf, that's distinctive. [SEE NOTE] Dry intelligence can replay Beowulf, but it can't connect odors and texts and feelings. It can't feel. It can't smell. — Bitter Crank
Dry intelligence can't connect with the feelings of a dog excited by the walk it's about to take. Dry intelligence can't lay on the floor and determine whether the guy walking around is getting ready to go to work (alone) or is going to take the dog for a walk. Dogs can do that. They can tell the difference between routine getting ready to go to work and getting ready to go out of town (which the dog will probably disapprove of, considering what happened the last time "they" left). So can cats. — Bitter Crank
Wet brains and wet intelligence have developed over an exceedingly long time. Wet brains aren't the only defense animals have, but they are remarkably effective. A rat's wet brain does, and will, out-perform Deep Blue and all of its Blue successors, Screwed Blue, Dude Blue, Rude Blue, etc., because it has capabilities that cannot be reproduced by an algorithm.
It's not the algorithm, it's the structure of the body and its history.
[NOTE] I never learned Anglo Saxon and I can't recite Beowulf. I can pretend I did, and even feel like I did. Betcha Deep Blue can't do that. — Bitter Crank
Ok. But from my biophysical/biosemiotic perspective, a theory of general intelligence just is a theory of life, a theory of complex adaptive systems. You have to have the essence of that semiotic relation between symbols and matter built in from the smallest, simplest scales to have any "intelligence" at all.
So yes, you are doing the familiar thing of trying to abstract away the routinised, mechanics-imitating, syntactical organisation that people think of as rational thought or problem solving. If you input some statement into a Searlean Chinese room or Turing test passing automaton, all that matters is that you get the appropriate output statement. If it sounded as though the machine knew what you were talking about, then the machine passes as "intelligent". — apokrisis
So again, fine, it's easy to imagine building technology that is syntactic in ways that map some structure of syntax that we give it on to some structure of syntax that we then find meaningful. But the burden is on you to show why any semantics might arise inside the machine. What is your theory of how syntax produces semantics? — apokrisis
Biology's theory is that of semiotics - the claim that an intimate relation between syntax and semantics is there from the get-go as symbol and matter, Pattee's epistemic cut between rate independent information and rate dependent dynamics. And this is a generic theory - one that explains life and mind in the same physicalist ontology. — apokrisis
But computer science just operates on the happy assumption that syntax working in isolation from material reality will "light up" in the way brains "light up". There has never been any actual theory to back up this sci fi notion.
Instead - given the dismal failure of AI for so long - the computer science tendency is simply to scale back the ambitions to the simplest stuff for machines to fake - those aspects of human thought which are the most abstractly syntactic as mental manipulations.
If you just have numbers or logical variables to deal with, then hey, suddenly everything messy and real world is put at as great a distance as it can possibly be. Any schoolkid can learn to imitate a calculating engine - and demonstrate their essential humanness by being pretty bad, slow and error-prone at it, not to mention terminally bored. — apokrisis
Then we humans invent an actual machine to behave like a machine and ... suddenly we are incredibly impressed at its potential. Already our pocket calculators exceed all but our most autistic of idiot savants in rapid, error-free, syntactical operation. We think if a pocket calculator can be this unnaturally robotic in its responses, then imagine how wonderfully conscious, creative, semantic, etc, a next generation quantum supercomputer is going to be. Or some such inherently self-contradicting shit. — apokrisis
No, that is not it at all. So every time I point to a fundamental difference, your reply is simply that differences can be minimised. And when I point out that minimising those differences might be physically impractical, you wave that constraint away as well. It doesn't seem as though you want to take a principled approach to your OP. — apokrisis
Anyway, another way of phrasing the same challenge to your presumption there is no great problem here: can you imagine an algorithm that could operate usefully on unstable hardware? How could an algorithm function in the way you require if its next state of output was always irreducibly uncertain? In what sense would such a process still be algorithmic in your book if every time it computed some value, there would be no particular reason for the calculation to come out the same? — apokrisis
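To picture the challenge, here is a toy sketch of the same summation on "stable" versus "unstable" hardware, where each step carries a small chance of a silent bit flip in the accumulator. The probabilities are illustrative, not a model of any real device.

```python
import random

# The same computation, but each add risks a silent glitch.

def unstable_sum(xs, flip_prob=0.01):
    total = 0
    for x in xs:
        total += x
        if random.random() < flip_prob:           # hardware glitch
            total ^= 1 << random.randrange(16)    # flip a random low bit
    return total

xs = list(range(100))
print(sum(xs))                                # 4950, every single run
print({unstable_sum(xs) for _ in range(10)})  # typically several values
```

An algorithm only counts as the "same" algorithm across runs because the hardware has been engineered to make the first behaviour overwhelmingly likely.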
You toss these phrases off, as if it is all settled, as if understanding the issues is really simple. But it really is not; all you're communicating is the fact that you're skating over the surface. 'The nature of knowledge' is the subject of the discipline of epistemology, and it's a very difficult subject. — Wayfarer
Whether computers are conscious or not, is also a really difficult and unresolved question. — Wayfarer
I guess I will have to take your word for it. No, that's not a contradiction at all. As far as I am concerned it is a statement of fact. — Wayfarer
Over and out on this thread, thanks. — Wayfarer
And for programmable machines, we can see that there is a designed-in divorce between the states of information and the material processes sustaining those states. — apokrisis
And as I have pointed out, we would have to build in this selfhood relation from the top down. Whereas in life it exists from the bottom up, starting with molecular machines at the quasi-classical nanoscale of the biophysics of cells. So computers are always going against nature in trying to recreate nature in this sense. — apokrisis