Mind is only found in living organic matter; therefore only living organic matter can have a mind.
That is an unassailable argument, in that it defines the term mind to the exclusion of inorganic matter.
But that this definition is by necessity the only valid theory of the mind is not a settled matter in philosophy. — m-theory
You are completely missing the point. It is impossible to transfer knowledge from one mind to another. — tom
Yes. But you don't agree because you want to believe something different without being able to produce the evidence. So at this point it is like arguing with a creationist. — apokrisis
The bit that is then missing - the crucial bit - is that the model doesn't have the job of making its own hardware. The whole thing is just a machine being created by humans to fulfill human purposes. It has no evolutionary or adaptive life of its own. — apokrisis
So computation is a formal action - algorithmic or rule-bound. And yet measurement is inherently an informal action - a choice that cannot be computed. Houston, we have a problem. — apokrisis
That's a good illustration of the absolute generality of the measurement problem, then. To have a formal theory of the mind involves also the informal choice about what kind of measurement stands as a sign of a mind.
We are talking Godelian incompleteness here. In the end, all formal reasoning systems - all syntactical arguments - rely on making an abductive and axiomatic guess to get the game started. We have to decide: oh, that is one of those. Then the development of a formal model can begin, once a basis of measurement has been agreed.
But then you mix up the issue of a measurement basis with something different - the notion of undecidability in computation.
Science models the world. So as an open-ended semiotic process, it doesn't have a halting problem. Instead, it is free to inquire until it reaches a point of pragmatic indifference in the light of its own interests.
You are talking about a halting problem analogy by the sound of it. And that is merely a formal property of computational space. Some computational processes will terminate, others have a form that cannot. That is something quite different. — apokrisis
If you are wondering how we can know that we have created a mind, we will know because we will have understood the mind well enough to program it in the first place. — tom
If the mind is something you can be sure that you have, and something you can verify correctly each time you inquire into the presence of your own mind... this would mean the term mind refers to something fundamentally computational. — m-theory
Yeah. I just don't see that. You have yet to show how syntax connects to semantics in your view. And checking to see that "we" are still "a mind" is about as irreducibly semantic an act as you could get, surely? So how is it fundamentally syntactical? Or how is computation not fundamentally syntax? These are the kinds of questions I can't get you to engage with here. — apokrisis
Out of curiosity, why did you cite partially observable Markov decision processes as if they somehow solved all your woes? Did you mean to point to some specific extra they implement which other similar computational architectures don't - like having an additional step that is meant to simulate actively disturbing the world to find if it conforms with predictions? — apokrisis
It still seems to me that, sure, we can complexify the architectures so they add on such naturalistic behaviours. We can reduce creative semantics to syntactically described routines. But still, a computer is just a machine simulating such a routine. It is a frozen state of habit, not a living state of habit.
A real brain is always operating semantically, in the sense that it is able to think about its habits - they can come into question by drawing attention to themselves, especially when they are not quite working out in some moment.
So as I've said, I agree that neural network architectures can certainly go after biological realism. But an algorithm is canonically just syntax.
Turing computation must have some semantics buried at the heart of its mechanics - a reading head that can interpret the symbols on a tape and do the proper thing. But Turing computation relies on external forces - some clever hardware designing mind - to actually underwrite that. The computation itself is just blind, helpless, syntax that can't fix anything if a fly spot on the infinite tape makes a smudge that is somewhere between whatever is the symbol for a 1 and a 0.
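The "fly spot on the tape" point can be made concrete with a toy simulator. This is a minimal sketch, not anyone's cited model: the rule table covers only the symbols the machine was built for, so a "smudge" (any mark outside the alphabet) has no transition, and the machine can only fail. It has no semantic resources to decide what the mark "should have been". The particular machine below, a unary incrementer, is an illustrative assumption.

```python
# Toy Turing machine illustrating "blind syntax": anything outside the
# rule table simply has no defined next step.

def run(tape, rules, state="scan", pos=0, max_steps=100):
    tape = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape[i] for i in sorted(tape))
        symbol = tape.get(pos, "_")  # "_" is the blank symbol
        if (state, symbol) not in rules:
            # The "smudge" case: syntax alone cannot repair it.
            return f"stuck: no rule for {(state, symbol)!r}"
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "ran out of steps"

# Unary incrementer: walk right over the 1s, append a 1 at the first blank.
rules = {("scan", "1"): ("1", "R", "scan"),
         ("scan", "_"): ("1", "R", "halt")}

print(run("111", rules))   # "1111" - one more 1, as designed
print(run("1?1", rules))   # "?" is the smudge: the machine just gets stuck
```

The interesting part is what the failure case shows: the machine has no way to ask whether the ambiguous mark "ought" to count as a 1 or a 0. That judgment sits with the hardware designer, outside the computation.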
So to talk about AI in any principled way, you have to deal with the symbol grounding problem. You have to have a working theory of semantics that tells you whether your syntactic architecture is in some sense getting hotter or colder.
A hand-wavy approach may be quite standard in computer science circles - it is just standard practice for computer boffins to over-promise and under-deliver. DARPA will fund it anyway. But that is where philosophy of mind types, and theoretical biology types, will reply it is not good enough. It is just obvious that computer science has no good answer on the issue of semantics. — apokrisis
Semantics cannot exist without syntax.
To implement any notion of semantics will entail syntax and the logical relationships within that syntax.
To ground this symbol manipulation simply means to place some agency in the role of being invested in outcomes from decisions. — m-theory
Great. Now all you need to do is define "agency" in a computationally scalable way. Perhaps you can walk me through how you do this with pomdp? — apokrisis
A notion of agency is of course central to the biosemiotic approach to the construction of meanings - or meaningful relations given that this is about meaningful physical actions, an embodied or enactive view of cognition. — apokrisis
But you've rejected Pattee and biosemiotics for some reason that's not clear. So let's hear your own detailed account of how pomdp results in agency and is not merely another example of a computationalist Chinese Room. — apokrisis
How as a matter of design is pomdp not reliant on the agency of its human makers in forming its own semantic relations via signs it constructs for itself? In what way is pomdp's agency grown rather than built? — apokrisis
If pomdp turns out to be merely an assemblage of syntactic components, their semantic justification being something that its human builders understand rather than something pomdp grew for itself as part of a scaling up of a basic agential world relation, then Houston, you still have a problem. — apokrisis
So given pomdp is your baby, I'm really interested to see you explain how it is agentially semantic and not just Chinese Room syntactic. — apokrisis
An agent is any system which observes and acts in its environment autonomously. — m-theory
The same applies to a computational agent: it is embedded in its environment through sensory perceptions. — m-theory
Pattee must demonstrate that exact solutions are necessary for semantics. — m-theory
I also provided a link that is extremely detailed. — m-theory
Pomdp illustrates why the infinite regress is not completely intractable: it is only intractable if exact solutions are necessary. I am arguing that exact solutions are not necessary, and that the approximate general solutions used in Pomdp resolve the issue of the epistemic cut. — m-theory
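For readers unfamiliar with the formalism being argued over: the core of a POMDP agent is a Bayesian belief update over hidden states, and the "approximate solutions" at issue are approximations to planning over such beliefs. Here is a minimal sketch of the exact belief update only, with a toy two-state world whose numbers are purely illustrative assumptions, not drawn from any source in the thread.

```python
# Discrete POMDP belief update (a Bayes filter over hidden states).

def update_belief(belief, action, observation, T, O):
    """Posterior belief after taking `action` and seeing `observation`.

    belief: dict state -> probability
    T: T[action][s][s2] = P(s2 | s, action)   (transition model)
    O: O[action][s2][obs] = P(obs | s2, action)  (observation model)
    """
    new_belief = {}
    for s2 in belief:
        prior = sum(T[action][s][s2] * belief[s] for s in belief)
        new_belief[s2] = O[action][s2][observation] * prior
    total = sum(new_belief.values())
    return {s: p / total for s, p in new_belief.items()}

# Toy world: a hidden object is "left" or "right"; listening gives a
# noisy reading that is correct 85% of the time.
T = {"listen": {"left":  {"left": 1.0, "right": 0.0},
                "right": {"left": 0.0, "right": 1.0}}}
O = {"listen": {"left":  {"hear-left": 0.85, "hear-right": 0.15},
                "right": {"hear-left": 0.15, "hear-right": 0.85}}}

b = {"left": 0.5, "right": 0.5}
b = update_belief(b, "listen", "hear-left", T, O)
print(b)  # belief shifts toward "left": {'left': 0.85, 'right': 0.15}
```

Whether this machinery amounts to "agency" in the sense the thread disputes is exactly the open question; the sketch only shows what the computational side of the claim consists of.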
I can make no sense of the notion that semantics is something divided apart from and mutually exclusive of syntax. — m-theory
To account for the competence of AlphaGo, one cannot simply claim it is the brute force of syntax, as one might with Deep Blue or other engines. — m-theory
The Chinese room does not refute computational theories of the mind, never has, and never will.
It simply suggests that because the hardware does not understand, the software does not understand. — m-theory
Great. Now you have replaced one term with three more terms you need to define within your chosen theoretical framework and not simply make a dualistic appeal to standard-issue folk ontology.
So how precisely are observation, action and autonomous defined in computational theory? Give us the maths, give us the algorithms, give us the measurables. — apokrisis
Well there must be a reason why that distinction is so firmly held by so many people - apart from AI dreamers in computer science perhaps. — apokrisis
But he does. That is what the Von Neumann replicator dilemma shows. It is another example of Godelian incompleteness. An axiom system can't compute its axiomatic base. Axioms must be presumed to get the game started. And therein lies the epistemic cut. — apokrisis
This link seems very poor as an example of a general mathematical outline of a Godel incompleteness facing computational theories of the mind. — m-theory
You could check out Pattee's colleague Robert Rosen, who argued this point on a more general mathematical basis. See Essays on Life Itself for how impredicativity is a fundamental formal problem for the computational paradigm.
http://www.people.vcu.edu/~mikuleck/rosrev.html — apokrisis
The question here is whether you understand your sources. — apokrisis
Yes, this is what you assert. Now I'm asking you to explain it in terms that counter my arguments in this thread.
Again, I don't think you understand your sources well enough to show why they deal with my objections - or indeed, maybe even agree with my objections to your claim that syntax somehow generates semantics in magical fashion. — apokrisis
But semantics is always built into computation by the agency of humans. That is obvious when we write the programs and interpret the output of a programmable computer. With a neural net, this building in of semantics becomes less obvious, but it is still there. So the neural net remains a syntactic simulation not the real thing. — apokrisis
If you want to claim there are algorithmic systems - that could be implemented on any kind of hardware in physics-free fashion - then it is up to you to argue in detail how your examples can do that. So far you just give links to other folk making the usual wild hand-waving claims or skirting over the ontic issues. — apokrisis
Well the Chinese Room sure felt like the death knell of symbolic AI at the time. The game was up at that point.
But anyway, now that you have introduced yet another psychological concept to get you out of a hole - "understanding" - you can add that to the list. What does it mean for hardware to understand anything, or software to understand anything? Explain that in terms of a scientific concept which allows measurability of said phenomena. — apokrisis
I don't really have time to explain repeatedly that, fundamentally, I don't agree that relevant terms such as these examples are excluded from computational implementation. — m-theory
This link seems very poor as an example of a general mathematical outline of a Godel incompleteness facing computational theories of the mind. — m-theory
Perhaps if you had some example of semantics that exists independently of, and mutually exclusive with, syntax, it would be useful for making your point? — m-theory
The semantics of go was not built into AlphaGo and you seem to be saying that because a human built it that means any semantic understanding it has came from humans. — m-theory
Again I can make no sense of your "physics free" insistence here. — m-theory
And again it is not clear that there is an ontic issue and the hand waving of obscure texts does not prove that there is one. — m-theory
I did not anticipate that you would insist that I define all the terms I use in technical detail.
I would perhaps be willing to do this if I believed it would be productive, but because you disagree at a more fundamental level, I doubt giving technical detail would further our exchange. — m-theory
So it is not my position that there is pure semantics anywhere, anytime. If semantics and syntax form a proper, metaphysical-strength dichotomy, they would be two faces of the one developing separation. In a strong sense, you could never have one without the other. — apokrisis
So you read that Pattee reference before dismissing it? — apokrisis
I can only hand wave them if you won't even read them before dismissing them. And if you find them obscure, that simply speaks to the extent of your scholarship. — apokrisis
I've given you every chance to show that you understand the sources you cite in a way that counters the detailed objections I've raised.
Pomdp is the ground on which you said you wanted to make your case. You claimed it deals with my fundamental-level disagreement. I'm waiting for you to show me that with the appropriate technical account. What more can I do than take you at your word when you make such a promise? — apokrisis
This does not make it any clearer what you mean when you are using this term.
Again, real-world computation is not physics-free, even if computation theory has thought experiments that ignore physical constraints. — m-theory
We don't have a technical account of your issue.
It was a mistake on my part to try to find a technical solution earlier, I admit. — m-theory
Again, real world Turing computation is certainly physics-free if the hardware maker is doing his job right. If the hardware misbehaves - introduces physical variety in a way that affects the physics-free play of syntax - the software malfunctions. (Not that the semantics-free software could ever "know" this of course.) — apokrisis
:-} — apokrisis
You will have to forgive me if I find that line to be a rather large leap, and not so straightforward as you take for granted. — m-theory
So, to quote von Neumann, what is the point of me being precise if I don't know what I am talking about? — m-theory
Here is another video of Chaitin offering a computational rebuttal to the notion that computation does not apply to evolution. — m-theory
Forget youtube videos. Either you understand the issues and can articulate the relevant arguments or you are pretending to expertise you simply don't have. — apokrisis
Suppose we concede the point that evolution and the origin of life has a Godel incompleteness problem and let's ask what that would imply about computational theories of evolution, the origin of life, or even the mind.
Chaitin, I believe, offers a good example of the computational view surrounding that issue. — m-theory
In a very abstract way Chaitin shows that a very generalized evolution can still result from a computational foundation (albeit in his model it is necessary to ignore certain physical constraints). — m-theory
I listened to the podcast and it is indeed interesting but does the opposite of supporting what you appear to claim. — apokrisis
On incompleteness, Chaitin stresses that it shows that machine-like or syntactic methods of deriving maths is a pure math myth. All axiom forming involves what Peirce terms the creative abductive leap. So syntax has to begin with semantics. It doesn't work the other way round as computationalists might hope. — apokrisis
As Chaitin says, the problem for pure maths was that it had the view that all maths could be derived from some finite set of axioms. And instead, creativity says axiom production is what is infinitely open-ended in potential. So that requires the further thing of some general constraint on such troublesome fecundity. The problem - for life, as Von Neumann and Rosen and Pattee argue mathematically - is that biological systems have to be able to close their own openness. They must be able to construct the boundaries to causal entailment that the epistemic cut represents. — apokrisis
As a fundamental problem for life and mind, this is not even on the usual computer science radar. — apokrisis
Then Chaitin's theorem is proven in a physics-free context. He underlines that point himself, and says connecting the theorem to the real world is an entirely other matter.
But Chaitin is trying to take a biologically realistic approach to genetic algorithms. And thus his busy beaver problem is set up in a toy universe with the equivalent of an epistemic cut. The system has a running memory state that can have point mutations. An algorithm is written to simulate the physical randomness of the real world and make this so.
Then the outcome of the mutated program is judged against the memory state which simulates the environment on the other side of the epistemic cut. The environment says either this particular mutant is producing the biggest number ever seen, or it's not, in which case it dies and is erased from history.
So the mutating programs are producing number-producing programs. In Pattee's terms, they are the rate-independent information side of the equation. Then out in the environment, the numbers must be produced so they can be judged against a temporal backdrop, where what might have been the most impressive number a minute ago is by now instead a death sentence. So that part of the biologically realistic deal is the rate-dependent dynamics. — apokrisis
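The selection loop being described can be sketched in miniature. This is a crude illustrative toy in the spirit of Chaitin's mutating-software model, not his actual construction (which uses self-delimiting binary programs and the busy beaver function): a "genome" standing in for a program is point-mutated by simulated randomness, and the environment keeps a mutant only if the number it produces beats the biggest number seen so far. Every name and number here is an assumption made for the sketch.

```python
import random

random.seed(1)  # fixed seed so the toy run is reproducible

def express(genome):
    # The "phenotype": the number this program produces.
    out = 1
    for g in genome:
        out = out * g + g
    return out

def mutate(genome):
    # Simulated physical randomness: one point mutation per step.
    child = list(genome)
    child[random.randrange(len(child))] = random.randint(1, 9)
    return child

genome = [1, 1, 1, 1]
best = express(genome)          # the environment's record so far
for step in range(200):
    child = mutate(genome)
    score = express(child)
    if score > best:            # judged against the record: keep it
        genome, best = child, score
    # otherwise the mutant "dies and is erased from history"

print(best > express([1, 1, 1, 1]))  # fitness has climbed: True
```

Note where the epistemic cut sits in the toy: the mutation operator and the scoring rule are written by us, on the outside. The programs themselves never get to revise the criterion by which they are judged, which is the thread's point of contention.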