Sir, I take it then that you have a better idea of what proof of the transcendent should consist in. So please go ahead and outline it for me, and please explain how it fits the definition on which we have both expressed our agreement! — Agustino
Yes indeed. I see that you do not want to share your knowledge of the transcendental or of what a transcendental proof would look like. So I will proceed with a definition of transcendental to advance our discussion.
I propose, sir, that the transcendental is anything that cannot be investigated with physical instruments (meaning the instruments of the physical sciences - microscopes, etc.); anything that knowledge of the structure of the physical world, as investigated by physical instruments, would not reveal. Is this in accordance with your understanding of transcendental, sir? — Agustino
Yes, and by that logic the color-blind should say the same about the colors they cannot see: that those colors don't exist, their sight is perfectly fine, and everyone else is wrong. — Agustino
The is/ought distinction is founded on a category error. Values don't exist in the way a chair exists, as something to be found and touched in the real world. They are transcendental, above and beyond the world. A mere analysis of the physics of the world will not yield you any values. And yet that is not to say values don't exist - only that they don't exist in the same way atoms do. To expect the same kind of being of values as of atoms is to misunderstand the nature of each. — Agustino
Values are values - not my values and your values. You and I may take things to be values; we don't decide what they are. — Agustino
Progressives - like all leftist groups, having a tendency towards totalitarianism and decadence - must always create this "Other", who is a false image of the real threat - their own projection, aimed at nothing but maintaining their ignorance. — Agustino
This liberal cancer is a betrayal of real Western culture and civilisation — Agustino
Hillary Clinton awkwardly seemed to want to tie it into Trump himself — The Great Whatever
I listened to the podcast and it is indeed interesting but does the opposite of supporting what you appear to claim. — apokrisis
On incompleteness, Chaitin stresses that it shows that machine-like or syntactic methods of deriving maths are a pure-maths myth. All axiom forming involves what Peirce terms the creative abductive leap. So syntax has to begin with semantics. It doesn't work the other way round, as computationalists might hope. — apokrisis
As Chaitin says, the problem for pure maths was the view that all maths could be derived from some finite set of axioms. Instead, creativity says axiom production is what is infinitely open-ended in potential. So that requires the further thing of some general constraint on such troublesome fecundity. The problem - for life, as Von Neumann and Rosen and Pattee argue mathematically - is that biological systems have to be able to close their own openness. They must be able to construct the boundaries to causal entailment that the epistemic cut represents. — apokrisis
As a fundamental problem for life and mind, this is not even on the usual computer science radar. — apokrisis
Then Chaitin's theorem is proven in a physics-free context. He underlines that point himself, and says connecting the theorem to the real world is another matter entirely.
But Chaitin is trying to take a biologically realistic approach to genetic algorithms. And thus his busy beaver problem is set up in a toy universe with the equivalent of an epistemic cut. The system has a running memory state that can have point mutations. An algorithm is written to simulate the physical randomness of the real world and make this so.
Then the outcome of the mutated program is judged against the memory state which simulates the environment on the other side of the epistemic cut. The environment says either this particular mutant is producing the biggest number ever seen or it's not, in which case it dies and is erased from history.
So the mutating programs are producing number-producing programs. In Pattee's terms, they are the rate independent information side of the equation. Then out in the environment, the numbers must be produced so they can be judged against a temporal backdrop where what might have been the most impressive number a minute ago is already now instead a death sentence. So that part of the biologically realistic deal is the rate dependent dynamics. — apokrisis
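As an aside, the busy-beaver setup described above can be caricatured in a few lines of Python. Everything here is an illustrative assumption - the "program" is just an integer, the point mutation is a single bit-flip, and the environment's judgement is a bare numeric comparison - so this is a minimal sketch of the survive-or-be-erased loop, not Chaitin's actual model:

```python
import random

random.seed(0)

def mutate(genome):
    """Point mutation: flip one randomly chosen bit of the 'program'."""
    bit = 1 << random.randrange(genome.bit_length() + 1)
    return genome ^ bit

best = 1          # the environment's memory: biggest number seen so far
survivors = 0
for _ in range(1000):
    mutant = mutate(best)
    if mutant > best:   # the environment judges the mutant's output
        best = mutant   # it survives and becomes the new lineage
        survivors += 1
    # otherwise the mutant dies and is erased from history

print(best, survivors)
```

Note that even in this toy, the "judging" is a rule the human programmer wrote in - which is exactly the point at issue in the thread.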
Forget youtube videos. Either you understand the issues and can articulate the relevant arguments or you are pretending to expertise you simply don't have. — apokrisis
Again, real world Turing computation is certainly physics-free if the hardware maker is doing his job right. If the hardware misbehaves - introduces physical variety in a way that affects the physics-free play of syntax - the software malfunctions. (Not that the semantics-free software could ever "know" this of course.) — apokrisis
:-} — apokrisis
So it is not my position that there is pure semantics anywhere anytime. If semantics and syntax form a proper metaphysical strength dichotomy, they would thus be two faces of the one developing separation. In a strong sense, you could never have one without the other. — apokrisis
So you read that Pattee reference before dismissing it? — apokrisis
I can only hand wave them if you won't even read them before dismissing them. And if you find them obscure, that simply speaks to the extent of your scholarship. — apokrisis
I've given you every chance to show that you understand the sources you cite in a way that counters the detailed objections I've raised.
Pomdp is the ground on which you said you wanted to make your case. You claimed it deals with my fundamental level disagreement. I'm waiting for you to show me that with the appropriate technical account. What more can I do than take you at your word when you make such a promise? — apokrisis
That's the ambiguity I referred to earlier. "Chance", when speaking about a future event, refers to a possibility. This could be interpreted as probability. "Chance", when speaking of a past event as a "chance event", implies equal possibility, like the flip of a coin. "Random" refers to equal chances in relation to both past and future. — Metaphysician Undercover
Since "random" refers to equal chances, it is useless for prediction. But any probability other than random is useful. So random is a particular type of probability which denies all possibility of prediction. That is the difference, probability is used in predictions when exactitude cannot be obtained, randomness does not allow for prediction at all. — Metaphysician Undercover
Great. Now you have replaced one term with three more terms you need to define within your chosen theoretical framework and not simply make a dualistic appeal to standard-issue folk ontology.
So how precisely are observation, action and autonomous defined in computational theory? Give us the maths, give us the algorithms, give us the measurables. — apokrisis
Well there must be a reason why that distinction is so firmly held by so many people - apart from AI dreamers in computer science perhaps. — apokrisis
But he does. That is what the Von Neumann replicator dilemma shows. It is another example of Godelian incompleteness. An axiom system can't compute its axiomatic base. Axioms must be presumed to get the game started. And therein lies the epistemic cut. — apokrisis
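The Von Neumann point - that a system's description must be both copied as uninterpreted data and executed as instructions, with the split itself presumed rather than derived - has a familiar miniature in the programming notion of a quine. A sketch (the two-line program here is just the standard Python quine idiom, offered as an analogy, not as anything Von Neumann wrote):

```python
# A quine: the string 'src' plays two roles at once. It is copied
# verbatim as data (via %r) and interpreted as the instruction text
# of the very program that does the copying.
src = 'src = %r\nprint(src %% src)'
program_text = src % src  # the full text of a self-reproducing program
print(program_text)
```

Running `program_text` as a Python program prints `program_text` again - but notice that nothing inside the program decides that the string is to be read that way; the interpreter (the "machinery" standing outside the description) is presumed, which is the analogue of the epistemic cut.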
This link seems very poor as an example of a general mathematical outline of a Godel incompleteness facing computational theories of the mind. You could check out Pattee's colleague Robert Rosen, who argued this point on a more general mathematical basis. See Essays on Life Itself for how impredicativity is a fundamental formal problem for the computational paradigm.
http://www.people.vcu.edu/~mikuleck/rosrev.html — apokrisis
The question here is whether you understand your sources. — apokrisis
Yes, this is what you assert. Now I'm asking you to explain it in terms that counter my arguments in this thread.
Again, I don't think you understand your sources well enough to show why they deal with my objections - or indeed, maybe even agree with my objections to your claim that syntax somehow generates semantics in magical fashion. — apokrisis
But semantics is always built into computation by the agency of humans. That is obvious when we write the programs and interpret the output of a programmable computer. With a neural net, this building in of semantics becomes less obvious, but it is still there. So the neural net remains a syntactic simulation not the real thing. — apokrisis
If you want to claim there are algorithmic systems - that could be implemented on any kind of hardware in physics-free fashion - then it is up to you to argue in detail how your examples can do that. So far you just give links to other folk making the usual wild hand-waving claims or skirting over the ontic issues. — apokrisis
Well the Chinese Room sure felt like the death knell of symbolic AI at the time. The game was up at that point.
But anyway, now that you have introduced yet another psychological concept to get you out of a hole - "understanding" - you can add that to the list. What does it mean for hardware to understand anything, or software to understand anything? Explain that in terms of a scientific concept which allows measurability of said phenomena. — apokrisis
I have no problem with "probability", I believe it is very useful. What I have a problem with is "chance", or "randomness". Do you see the difference? Chance, or randomness, is when probability is inapplicable for the purpose of prediction. So chance and probability are inherently incompatible. Probability provides the basis for prediction, chance does not. — Metaphysician Undercover
Great. Now all you need to do is define "agency" in a computationally scalable way. Perhaps you can walk me through how you do this with pomdp? — apokrisis
A notion of agency is of course central to the biosemiotic approach to the construction of meanings - or meaningful relations given that this is about meaningful physical actions, an embodied or enactive view of cognition. — apokrisis
But you've rejected Pattee and biosemiotics for some reason that's not clear. So let's hear your own detailed account of how pomdp results in agency and is not merely another example of a computationalist Chinese Room. — apokrisis
How as a matter of design is pomdp not reliant on the agency of its human makers in forming its own semantic relations via signs it constructs for itself? In what way is pomdp's agency grown rather than built? — apokrisis
If pomdp turns out to be merely an assemblage of syntactic components, their semantic justification being something that its human builders understand rather than something pomdp grew for itself as part of a scaling up of a basic agential world relation, then Houston, you still have a problem. — apokrisis
So given pomdp is your baby, I'm really interested to see you explain how it is agentially semantic and not just Chinese Room syntactic. — apokrisis
Yeah. I just don't see that. You have yet to show how syntax connects to semantics in your view. And checking to see that "we" are still "a mind" is about an irreducibly semantic act as you could get, surely? So how is it fundamentally syntactical? Or how is computation not fundamentally syntax? These are the kinds of questions I can't get you to engage with here. — apokrisis
Out of curiosity, why did you cite partially observable Markov decision processes as if they somehow solved all your woes? Did you mean to point to some specific extra they implement which other similar computational architectures don't - like having an additional step that is meant to simulate actively disturbing the world to find if it conforms with predictions? — apokrisis
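For readers unfamiliar with the acronym, the core mechanics of a partially observable Markov decision process is a Bayesian belief update: the agent never sees the hidden state, only observations, and maintains a probability distribution over states. A minimal sketch - the two states, transition probabilities and observation likelihoods below are invented numbers, not taken from any source in the thread:

```python
# Toy 2-state POMDP belief update: b'(s') ∝ O(o|s') * Σ_s T(s,s') * b(s)
T = [[0.9, 0.1],    # T[s][s2]: transition probabilities (one action)
     [0.1, 0.9]]
O = [[0.85, 0.15],  # O[s2][o]: likelihood of observation o in state s2
     [0.15, 0.85]]

def belief_update(b, obs):
    predicted = [sum(b[s] * T[s][s2] for s in range(2)) for s2 in range(2)]
    updated = [predicted[s2] * O[s2][obs] for s2 in range(2)]
    z = sum(updated)                # normalising constant
    return [u / z for u in updated]

b = [0.5, 0.5]        # start maximally uncertain
for obs in [0, 0, 0]: # three observations favouring state 0
    b = belief_update(b, obs)
print(b)              # belief now concentrated on state 0
```

Whether this machinery amounts to agency or is just more human-installed syntax is, of course, exactly the question being argued.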
It seems to me still that sure we can complexify the architectures so they add on such naturalistic behaviours. We can reduce creative semantics to syntactically described routines. But still, a computer is just a machine simulating such a routine. It is a frozen state of habit, not a living state of habit.
A real brain is always operating semantically in the sense that it is able to think about its habits - they can come into question by drawing attention to themselves, especially when they are not quite working out during some moment.
So as I've said, I agree that neural network architectures can certainly go after biological realism. But an algorithm is canonically just syntax.
Turing computation must have some semantics buried at the heart of its mechanics - a reading head that can interpret the symbols on a tape and do the proper thing. But Turing computation relies on external forces - some clever hardware designing mind - to actually underwrite that. The computation itself is just blind, helpless, syntax that can't fix anything if a fly spot on the infinite tape makes a smudge that is somewhere between whatever is the symbol for a 1 and a 0.
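The fly-spot point can be caricatured in code. The rule table below is an invented toy (a machine that flips bits until it reaches a blank), not a formalisation of anything specific; the point is that when the tape holds a symbol outside the machine's alphabet, the syntax simply has nothing to say:

```python
# A toy Turing-style machine: flip each bit, halt at the blank '_'.
rules = {
    ("flip", "0"): ("flip", "1", +1),
    ("flip", "1"): ("flip", "0", +1),
    ("flip", "_"): ("halt", "_", 0),
}

def run(tape):
    tape = list(tape)
    state, head = "flip", 0
    while state != "halt":
        symbol = tape[head]
        if (state, symbol) not in rules:
            return None  # no rule for this symbol: blind, helpless syntax
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape)

print(run("1011_"))  # clean tape: works fine
print(run("10?1_"))  # a fly-spot smudge '?': the machine is stuck
```

Only something outside the computation - the hardware maker, or here the programmer deciding what `None` means - can interpret the smudge as a fault and fix it.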
So to talk about AI in any principled way, you have to deal with the symbol grounding problem. You have to have a working theory of semantics that tells you whether your syntactic architecture is in some sense getting hotter or colder.
A hand-wavy approach may be quite standard in computer science circles - it is just standard practice for computer boffins to over-promise and under-deliver. DARPA will fund it anyway. But that is where philosophy of mind types, and theoretical biology types, will reply that it is not good enough. It is just obvious that computer science has no good answer on the issue of semantics. — apokrisis
If you are wondering how we can know that we have created a mind, we will know because we will have understood the mind well enough to program it in the first place. — tom
If you do believe this, how would you proceed to demonstrate that it is true? — Metaphysician Undercover
That there are possibilities does not necessarily entail that there is randomness. — Metaphysician Undercover
Yes. But you don't agree because you want to believe something different without being able to produce the evidence. So at this point it is like arguing with a creationist. — apokrisis
The bit that is then missing - the crucial bit - is that the model doesn't have the job of making its own hardware. The whole thing is just a machine being created by humans to fulfill human purposes. It has no evolutionary or adaptive life of its own. — apokrisis
So computation is a formal action - algorithmic or rule-bound. And yet measurement is inherently an informal action - a choice that cannot be computed. Houston, we have a problem. — apokrisis
That's a good illustration of the absolute generality of the measurement problem then. To have a formal theory of the mind also involves the informal choice about what kind of measurement stands for a sign of a mind.
We are talking Godelian incompleteness here. In the end, all formal reasoning systems - all syntactical arguments - rely on having to make an abductive and axiomatic guess to get the game started. We have to decide, oh, that is one of those. Then the development of a formal model can begin by having agreed a basis of measurement.
But then you mix up the issue of a measurement basis with something different - the notion of undecidability in computation.
Science models the world. So as an open-ended semiotic process, it doesn't have a halting problem. Instead, it is free to inquire until it reaches a point of pragmatic indifference in the light of its own interests.
You are talking about a halting problem analogy by the sound of it. And that is merely a formal property of computational space. Some computational processes will terminate, others have a form that cannot. That is something quite different. — apokrisis