• tom
    1.5k
    Mind is only found in living organic matter, therefore only living organic matter can have a mind.
    That is an unassailable argument in that it defines the term mind to the exclusion of inorganic matter.
    But that this definition is by necessity the only valid theory of the mind is simply not a resolved matter in philosophy.
    m-theory

    But it is the sort of "unassailable" argument that will be forgotten when we create an artificial mind.

    If you are wondering how we can know that we have created a mind, we will know because we will have understood the mind well enough to program it in the first place.
  • Metaphysician Undercover
    13.2k
    You are completely missing the point. It is impossible to transfer knowledge from one mind to another.tom

    This all depends on what type of existence you think that knowledge has, which is determined by your metaphysical perspective. Some would say that knowledge exists only in minds, like you do. Some would say that knowledge exists externally to the mind, in the artefact, such as in the books, in the library. It is also possible to produce an inclusive metaphysics, which includes both these aspects of knowledge. In which case knowledge passes from one mind to another, having an active and passive form.
  • m-theory
    1.1k
    Yes. But you don't agree because you want to believe something different without being able to produce the evidence. So at this point it is like arguing with a creationist.apokrisis

    I produced a deductive argument.

    The bit that is then missing - the crucial bit - is that the model doesn't have the job of making its own hardware. The whole thing is just a machine being created by humans to fulfill human purposes. It has no evolutionary or adaptive life of its own.apokrisis

    The human mind did not create its own hardware.
    We are simply products of chemistry.

    So computation is a formal action - algorithmic or rule-bound. And yet measurement is inherently an informal action - a choice that cannot be computed. Houston, we have a problem.apokrisis

    Measurement is not an informal action when it produces something discrete.
    Imagine that something like a measurement problem does face the mind.
    We would be saying that the mind cannot take into account its own existence within the context of what is measured.
    Of course I don't agree that is a productive way to define the mind, because we do not consider the mind something that is excluded from our observations.

    That's a good illustration of the absolute generality of the measurement problem then. To have a formal theory of the mind involves also the informal choice about what kind of measurement stands for a sign of a mind.

    We are talking Godelian incompleteness here. In the end, all formal reasoning systems - all syntactical arguments - rely on having to make an abductive and axiomatic guess to get the game started. We have to decide, oh, that is one of those. Then the development of a formal model can begin by having agreed a basis of measurement.

    But then you mix up the issue of a measurement basis with something different - the notion of undecidability in computation.

    Science models the world. So as an open-ended semiotic process, it doesn't have a halting problem. Instead, it is free to inquire until it reaches a point of pragmatic indifference in the light of its own interests.

    You are talking about a halting problem analogy by the sound of it. And that is merely a formal property of computational space. Some computational processes will terminate, others have a form that cannot. That is something quite different.
    apokrisis

    If the mind is something you can be sure that you have, and something whose presence you can correctly verify each time you inquire about it...this would mean the term mind refers to something that is fundamentally computational.

    The alternative is that we cannot be sure we have minds; that there is no finite set of steps our brains could take to derive the answer to the question of our own minds.

    Of course we do not use the term this way.
    So why should we define the term formally this way?
  • m-theory
    1.1k
    If you are wondering how we can know that we have created a mind, we will know because we will have understood the mind well enough to program it in the first place.tom

    The question of which theory of the mind is correct and which theories are mutually exclusive of each other is still very much an open one in philosophy.

    I will take this point of yours to mean that, until there is more clarity and greater consensus from further discovery, it will remain an open question.

    Of course I cannot disagree with that.
  • apokrisis
    7.3k
    If the mind is something you can be sure that you have, and something whose presence you can correctly verify each time you inquire about it...this would mean the term mind refers to something that is fundamentally computational.m-theory

    Yeah. I just don't see that. You have yet to show how syntax connects to semantics in your view. And checking to see that "we" are still "a mind" is about as irreducibly semantic an act as you could get, surely? So how is it fundamentally syntactical? Or how is computation not fundamentally syntax? These are the kinds of questions I can't get you to engage with here.

    Out of curiosity, why did you cite partially observable Markov decision processes as if they somehow solved all your woes? Did you mean to point to some specific extra they implement which other similar computational architectures don't - like having an additional step that is meant to simulate actively disturbing the world to find if it conforms with predictions?

    It seems to me still that sure we can complexify the architectures so they add on such naturalistic behaviours. We can reduce creative semantics to syntactically described routines. But still, a computer is just a machine simulating such a routine. It is a frozen state of habit, not a living state of habit.

    A real brain is always operating semantically in the sense that it is able to think about its habits - they can come into question by drawing attention to themselves, especially when they are not quite working out during some moment.

    So as I've said, I agree that neural network architectures can certainly go after biological realism. But an algorithm is canonically just syntax.

    Turing computation must have some semantics buried at the heart of its mechanics - a reading head that can interpret the symbols on a tape and do the proper thing. But Turing computation relies on external forces - some clever hardware designing mind - to actually underwrite that. The computation itself is just blind, helpless, syntax that can't fix anything if a fly spot on the infinite tape makes a smudge that is somewhere between whatever is the symbol for a 1 and a 0.
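
    The "blind syntax" point here can be made concrete with a toy sketch (mine, not from the thread): a Turing-style machine is nothing but a transition table mapped over a tape, and a symbol outside its alphabet simply has no rule. All names and the example table are invented for illustration.

```python
# A toy Turing-style machine: pure syntax, no semantics.
# The "program" is just a transition table; the machine blindly
# looks up (state, symbol) pairs and has no recourse when a
# symbol falls outside its alphabet (the "fly spot" case).

def run(table, tape, state="start", pos=0, max_steps=100):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            return "".join(cells[i] for i in sorted(cells))
        symbol = cells.get(pos, "_")  # "_" stands for a blank cell
        if (state, symbol) not in table:
            # An unrecognised symbol: the machine has no resources
            # to repair or reinterpret it.
            raise ValueError(f"unreadable symbol {symbol!r}: no rule applies")
        state, write, move = table[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    raise RuntimeError("step limit reached")

# A table that flips each bit left to right, halting on blank.
flipper = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
```

    Here `run(flipper, "0110")` returns `"1001_"`, while `run(flipper, "01x0")` simply fails: the smudged symbol is not "almost a 0" to the machine, it is outside the game entirely.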

    So to talk about AI in any principled way, you have to deal with the symbol grounding problem. You have to have a working theory of semantics that tells you whether your syntactic architecture is in some sense getting hotter or colder.

    A hand-wavy approach may be quite standard in computer science circles - it is just standard practice for computer boffins to over-promise and under-deliver. DARPA will fund it anyway. But that is where philosophy of mind types, and theoretical biology types, will reply it is not good enough. It is just obvious that computer science has no good answer on the issue of semantics.
  • m-theory
    1.1k
    Yeah. I just don't see that. You have yet to show how syntax connects to semantics in your view. And checking to see that "we" are still "a mind" is about as irreducibly semantic an act as you could get, surely? So how is it fundamentally syntactical? Or how is computation not fundamentally syntax? These are the kinds of questions I can't get you to engage with here.apokrisis

    This seems like some of that folksy dualism you were talking about.
    Semantics and syntax are separated by some special case of nature.

    I did engage these points.

    I do not view semantics as something that can even occur without syntax, and I offered the example of the pomdp, which could handle the "irreducible" subject-object distinction in a deciding agent.

    Out of curiosity, why did you cite partially observable Markov decision processes as if they somehow solved all your woes? Did you mean to point to some specific extra they implement which other similar computational architectures don't - like having an additional step that is meant to simulate actively disturbing the world to find if it conforms with predictions?apokrisis

    I mention it because I believe it can be argued that the epistemic cut you mentioned is not an intractable problem.

    It seems to me still that sure we can complexify the architectures so they add on such naturalistic behaviours. We can reduce creative semantics to syntactically described routines. But still, a computer is just a machine simulating such a routine. It is a frozen state of habit, not a living state of habit.

    A real brain is always operating semantically in the sense that it is able to think about its habits - they can come into question by drawing attention to themselves, especially when they are not quite working out during some moment.

    So as I've said, I agree that neural network architectures can certainly go after biological realism. But an algorithm is canonically just syntax.

    Turing computation must have some semantics buried at the heart of its mechanics - a reading head that can interpret the symbols on a tape and do the proper thing. But Turing computation relies on external forces - some clever hardware designing mind - to actually underwrite that. The computation itself is just blind, helpless, syntax that can't fix anything if a fly spot on the infinite tape makes a smudge that is somewhere between whatever is the symbol for a 1 and a 0.

    So to talk about AI in any principled way, you have to deal with the symbol grounding problem. You have to have a working theory of semantics that tells you whether your syntactic architecture is in some sense getting hotter or colder.

    A hand-wavy approach may be quite standard in computer science circles - it is just standard practice for computer boffins to over-promise and under-deliver. DARPA will fund it anyway. But that is where philosophy of mind types, and theoretical biology types, will reply it is not good enough. It is just obvious that computer science has no good answer on the issue of semantics.
    apokrisis

    Semantics cannot exist without syntax.
    To implement any notion of semantics will entail syntax and the logical relationships within that syntax.
    To ground this symbol manipulation simply means to place some agency in the role of being invested in outcomes from decisions.

    The argument that semantics is strictly non-computational is ridiculous to me for this reason.
    Even if I did agree that only biology could produce semantics I certainly would not agree that some impenetrable and irreducible problem accounts for semantics in biology.

    I don't agree that your references have demonstrated that semantics in biology is necessarily an impenetrable and irreducible problem for which no computation can be applied in resolution.
    In the pomdp example, infinite sets of objective states and subjective beliefs are not intractable, and there are methods for general solution.

    I will concede that exact solutions are not possible, but this means the burden is on Pattee to demonstrate that exact solutions are necessary.
    Simply waving his hand at an epistemic cut and saying there is no computational method for dealing with it is not convincing.
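
    A minimal sketch of what "methods for general solution" amounts to here, with invented numbers (a two-state toy loosely modelled on the classic "tiger" POMDP): the agent never resolves the hidden state exactly. It maintains a belief, a probability distribution over states, and updates it by Bayes' rule after each noisy observation. The belief update itself is cheap computation, even though exact optimal planning over all beliefs is intractable.

```python
# Bayesian belief update for a two-state POMDP.
# The agent cannot observe the hidden state directly; it tracks a
# belief b = P(state = "left") and revises it after each noisy
# observation. All numbers are invented for illustration.

def update_belief(b_left, observation, accuracy=0.85):
    # Observation model: the sensor reports the true state with
    # probability `accuracy` (an assumed figure, not from any source).
    p_obs_given_left = accuracy if observation == "left" else 1 - accuracy
    p_obs_given_right = accuracy if observation == "right" else 1 - accuracy
    numerator = p_obs_given_left * b_left
    evidence = numerator + p_obs_given_right * (1 - b_left)
    return numerator / evidence  # Bayes' rule, renormalised

b = 0.5  # start maximally uncertain about the hidden state
for obs in ["left", "left", "right", "left"]:
    b = update_belief(b, obs)
# b is now high but never reaches 1.0: certainty is approached,
# not attained - an approximate, not exact, resolution
```

    Point-based and sampling methods then approximate the value function over such beliefs, which is the sense in which only approximate, not exact, solutions are available.
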
  • apokrisis
    7.3k
    Semantics cannot exist without syntax.
    To implement any notion of semantics will entail syntax and the logical relationships within that syntax.
    To ground this symbol manipulation simply means to place some agency in the role of being invested in outcomes from decisions.
    m-theory

    Great. Now all you need to do is define "agency" in a computationally scalable way. Perhaps you can walk me through how you do this with pomdp?

    A notion of agency is of course central to the biosemiotic approach to the construction of meanings - or meaningful relations given that this is about meaningful physical actions, an embodied or enactive view of cognition.

    But you've rejected Pattee and biosemiotics for some reason that's not clear. So let's hear your own detailed account of how pomdp results in agency and is not merely another example of a computationalist Chinese Room.

    How as a matter of design is pomdp not reliant on the agency of its human makers in forming its own semantic relations via signs it constructs for itself? In what way is pomdp's agency grown rather than built?

    Sure, neural networks do try to implement this kind of biological realism. But the problem for neural nets is to come up with a universal theory - a generalised architecture that is "infinitely scalable" in the way that Turing computation is.

    If pomdp turns out to be merely an assemblage of syntactic components, their semantic justification being something that its human builders understand rather than something pomdp grew for itself as part of a scaling up of a basic agential world relation, then Houston, you still have a problem.

    Every time some new algorithm must be written by the outside hand of a human designer rather than evolving internally as a result of experiential learning, you have a hand-crafted machine and not an organism.

    So given pomdp is your baby, I'm really interested to see you explain how it is agentially semantic and not just Chinese Room syntactic.
  • m-theory
    1.1k
    Great. Now all you need to do is define "agency" in a computationally scalable way. Perhaps you can walk me through how you do this with pomdp?apokrisis

    Agency is any system which observes and acts in its environment autonomously.

    A notion of agency is of course central to the biosemiotic approach to the construction of meanings - or meaningful relations given that this is about meaningful physical actions, an embodied or enactive view of cognition.apokrisis

    The same applies to a computational agent: it is embedded in its environment through sensory perceptions. It must be able to act within its environment in order to learn what it can and cannot do and how that information is related to its goal of seeking reward.

    But you've rejected Pattee and biosemiotics for some reason that's not clear. So let's hear your own detailed account of how pomdp results in agency and is not merely another example of a computationalist Chinese Room.apokrisis

    I made it quite clear why I reject an argument from intractable infinite regress concerning an epistemic cut. I pointed out that belief about states and states themselves may be infinite but that there are methods for general solutions.

    Pattee would need to demonstrate that exact solutions are necessary for semantics.
    That has not been demonstrated.

    I also provided a link that is extremely detailed.

    How as a matter of design is pomdp not reliant on the agency of its human makers in forming its own semantic relations via signs it constructs for itself? In what way is pomdp's agency grown rather than built?apokrisis

    Pomdp is a method for resolving an epistemic cut that is argued to be a necessary dilemma for agency. Pomdp illustrates why infinite regress is not completely intractable: it is only intractable if exact solutions are necessary. I am arguing that exact solutions are not necessary, and that the general solutions used in pomdp resolve the issues of the epistemic cut.

    If pomdp turns out to be merely an assemblage of syntactic components, their semantic justification being something that its human builders understand rather than something pomdp grew for itself as part of a scaling up of a basic agential world relation, then Houston, you still have a problem.apokrisis

    I get the feeling that you are trying to suggest that if you can point to some syntax then this proves there is no semantics. Again I will suggest to you that such a definition of semantics is incoherent.

    I can make no sense of the notion that semantics is something divided apart from and mutually exclusive of syntax.

    That this system is a semantic one and not brute-force syntax is shown by the AlphaGo example. To account for the competence of AlphaGo one cannot simply claim it is brute-force syntax, as one might do with Deep Blue or other engines.

    The agent of this system makes intuitive decisions based on experiences and systems of belief that may or may not be strictly relevant to go; it does not have its beliefs and policies hand-crafted by humans, as would be the case with a brute-force engine.
    AlphaGo was not designed to play go; it was designed to learn how to solve a variety of problems, and in order to be good at playing go it must understand what go is.
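
    The contrast with a hand-crafted engine can be sketched in miniature (this is not AlphaGo, which combines deep networks with Monte Carlo tree search; it is the simplest tabular case, with invented reward numbers): nothing about which action is good is written into the program, only a rule for learning from reward.

```python
import random

# Tabular Q-learning on a made-up two-action task.
# No move preference is hand-coded: the agent starts indifferent
# and acquires a policy purely from experienced reward.

def train(steps=2000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {"a": 0.0, "b": 0.0}  # initial values: no built-in policy
    for _ in range(steps):
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        if rng.random() < epsilon:
            action = rng.choice(["a", "b"])
        else:
            action = max(q, key=q.get)
        # invented reward model: "a" pays 1.0 on average, "b" pays 0.2
        reward = rng.gauss(1.0 if action == "a" else 0.2, 0.1)
        q[action] += alpha * (reward - q[action])
    return q

q = train()  # after training, q["a"] exceeds q["b"]
```

    The learned table, not the programmer, ends up encoding the preference - which is the (contested) sense in which such behaviour is "learned rather than built".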

    So given pomdp is your baby, I'm really interested to see you explain how it is agentially semantic and not just Chinese Room syntactic.apokrisis

    Once again I have already explained why the Chinese room argument fails.
    Arguing the Chinese room simply means you are saying that organic matter is the hardware side and the instructions are the software side.
    Taken as a whole it is the hardware and the software that provides the understanding.
    Not one or the other as the Chinese room implies.

    The Chinese room does not refute computational theories of the mind, never has, and never will. It simply suggests that because the hardware does not understand, the software does not understand.

    That is fine
    and also completely irrelevant.

    Computational theory of the mind is not the notion that one or the other (hardware or software) results in understanding, it is the theory that these things combined will result in understanding.
  • apokrisis
    7.3k
    Agency is any system which observes and acts in its environment autonomously.m-theory

    Great. Now you have replaced one term with three more terms you need to define within your chosen theoretical framework and not simply make a dualistic appeal to standard-issue folk ontology.

    So how precisely are observation, action and autonomous defined in computational theory? Give us the maths, give us the algorithms, give us the measurables.

    The same applies to a computational agent, it is embedded with its environment through sensory perceptions.m-theory

    Again this is equivocal. What is a "sensory perception" when we are talking about a computer, a syntactic machine? Give us the maths behind the assertion.

    Pattee must demonstrate that exact solutions are necessary for semantics.m-theory

    But he does. That is what the Von Neumann replicator dilemma shows. It is another example of Godelian incompleteness. An axiom system can't compute its axiomatic base. Axioms must be presumed to get the game started. And therein lies the epistemic cut.

    You could check out Pattee's colleague Robert Rosen who argued this point on a more general mathematical basis. See Essays on Life Itself for how impredicativity is a fundamental formal problem for the computational paradigm.

    http://www.people.vcu.edu/~mikuleck/rosrev.html

    I also provided a link that is extremely detailed.m-theory

    The question here is whether you understand your sources.

    Pomdp illustrates why infinite regress is not completely intractable: it is only intractable if exact solutions are necessary. I am arguing that exact solutions are not necessary, and that the general solutions used in pomdp resolve the issues of the epistemic cut.m-theory

    Yes, this is what you assert. Now I'm asking you to explain it in terms that counter my arguments in this thread.

    Again, I don't think you understand your sources well enough to show why they deal with my objections - or indeed, maybe even agree with my objections to your claim that syntax somehow generates semantics in magical fashion.

    I can make no sense of the notion that semantics is something divided apart from and mutually exclusive of syntax.m-theory

    Well there must be a reason why that distinction is so firmly held by so many people - apart from AI dreamers in computer science perhaps.

    To account for the competence of AlphaGo one cannot simply claim it is brute force of syntax as one might do with Deepblue or other engines.m-theory

    But semantics is always built into computation by the agency of humans. That is obvious when we write the programs and interpret the output of a programmable computer. With a neural net, this building in of semantics becomes less obvious, but it is still there. So the neural net remains a syntactic simulation not the real thing.

    If you want to claim there are algorithmic systems - that could be implemented on any kind of hardware in physics-free fashion - then it is up to you to argue in detail how your examples can do that. So far you just give links to other folk making the usual wild hand-waving claims or skirting over the ontic issues.

    The Chinese room does not refute computational theories of the mind, never has, and never will.
    It simply suggests that because the hardware does not understand, the software does not understand.
    m-theory

    Well the Chinese Room sure felt like the death knell of symbolic AI at the time. The game was up at that point.

    But anyway, now that you have introduced yet another psychological concept to get you out of a hole - "understanding" - you can add that to the list. What does it mean for hardware to understand anything, or software to understand anything? Explain that in terms of a scientific concept which allows measurability of said phenomena.
  • m-theory
    1.1k
    Great. Now you have replaced one term with three more terms you need to define within your chosen theoretical framework and not simply make a dualistic appeal to standard-issue folk ontology.

    So how precisely are observation, action and autonomous defined in computational theory? Give us the maths, give us the algorithms, give us the measurables.
    apokrisis

    Alright I concede your point here.
    When I mention terms I don't understand what they mean, except by some folksy definition of these terms.

    I don't really have time to explain repeatedly that, fundamentally, I don't agree that relevant terms such as these examples are excluded from computational implementation.

    But he does. That is what the Von Neumann replicator dilemma shows. It is another example of Godelian incompleteness. An axiom system can't compute its axiomatic base. Axioms must be presumed to get the game started. And therein lies the epistemic cut.apokrisis

    I will have to re-review your sources.
    I saw no mention of Godel incompleteness.
    Are you suggesting that if I review Theory of Self-Reproducing Automata, Von Neumann will lay out an argument from incompleteness that demonstrates that the mind is not computational?

    You could check out Pattee's colleague Robert Rosen who argued this point on a more general mathematical basis. See Essays on Life Itself for how impredicativity is a fundamental formal problem for the computational paradigm.

    http://www.people.vcu.edu/~mikuleck/rosrev.html
    apokrisis
    This link seems very poor as an example of a general mathematical outline of a Godel incompleteness facing computational theories of the mind.
    I found no such example.
    The entire thing is plain text, with no formal logic or mathematical notation whatsoever.

    The question here is whether you understand your sources.apokrisis

    Yes, this is what you assert. Now I'm asking you to explain it in terms that counter my arguments in this thread.

    Again, I don't think you understand your sources well enough to show why they deal with my objections - or indeed, maybe even agree with my objections to your claim that syntax somehow generates semantics in magical fashion.
    apokrisis

    I suppose you are right I don't understand my sources and my sources do indeed prove your point is correct and I believe it is magic that accounts for my points.

    Well there must be a reason why that distinction is so firmly held by so many people - apart from AI dreamers in computer science perhaps.apokrisis

    The idea that semantics and syntax are independent and mutually exclusive sounds more like folksy dualism to me than computational theories of the mind do.

    Perhaps if you had some example of semantics that exists independently and mutually exclusive of syntax it would be useful for making your point?

    But semantics is always built into computation by the agency of humans. That is obvious when we write the programs and interpret the output of a programmable computer. With a neural net, this building in of semantics becomes less obvious, but it is still there. So the neural net remains a syntactic simulation not the real thing.apokrisis

    Again, AlphaGo learned to play go from scratch.
    It was not built to play go; it was built to learn problems and how to solve those problems.
    The semantics of go was not built into AlphaGo, and you seem to be saying that because a human built it, any semantic understanding it has came from humans.
    That is like saying that if you are taught how to play chess, any understanding you have of chess comes from the fact that you learned from somebody else.

    If you want to claim there are algorithmic systems - that could be implemented on any kind of hardware in physics-free fashion - then it is up to you to argue in detail how your examples can do that. So far you just give links to other folk making the usual wild hand-waving claims or skirting over the ontic issues.apokrisis

    Again I can make no sense of your "physics free" insistence here.
    As best as I can understand this is an allusion to thought experiments in computation that ignore physical constraints for the sake of making some greater theoretical point.
    Application of computation in the real world certainly is not "physics free."
    Also it is not clear that a computational theory of the mind must be physics free.
    And again, it is not clear that there is an ontic issue; hand-waving at obscure texts does not prove that there is one.

    Well the Chinese Room sure felt like the death knell of symbolic AI at the time. The game was up at that point.

    But anyway, now that you have introduced yet another psychological concept to get you out of a hole - "understanding" - you can add that to the list. What does it mean for hardware to understand anything, or software to understand anything? Explain that in terms of a scientific concept which allows measurability of said phenomena.
    apokrisis

    I did not anticipate that you would insist that I define all the terms I use in technical detail.
    I would perhaps be willing to do this if I believed it would be productive, but because you disagree at a more fundamental level I doubt giving technical detail will further our exchange.
    Also, when I attempted to do this previously, you insisted that I don't understand my points and that in fact my arguments prove your point and not mine.
  • apokrisis
    7.3k
    I don't really have time to explain repeatedly that fundamentally I don't agree that relevant terms such as these examples are excluded from computational implementation.m-theory

    Repeatedly? Once properly would suffice.

    This link seems very poor as an example of a general mathematical outline of a Godel incompleteness facing computational theories of the mind.m-theory

    Read Rosen's book then.

    Perhaps if you had some example of semantics that exists independently and mutually exclusive of syntax it would be useful for making your point?m-theory

    You just changed your wording. Being dichotomously divided is importantly different from existing independently.

    So it is not my position that there is pure semantics anywhere anytime. If semantics and syntax form a proper metaphysical strength dichotomy, they would thus be two faces of the one developing separation. In a strong sense, you could never have one without the other.

    And that is indeed the basis of my pan-semiotic - not pan-psychic - metaphysics. It is why I see the essential issue here the other way round to you. The fundamental division has to develop from some seed symmetry breaking. I gave you links to the biophysics that talks about that fundamental symmetry breaking when it comes to pansemiosis - the fact that there is a convergence zone at the thermal nano-scale where suddenly energetic processes can be switched from one type to another at "no cost". Physics becomes regulable by information. The necessary epistemic cut just emerges all by itself right there, for material reasons that are completely unmysterious and fully formally described.

    The semantics of go was not built into AlphaGo and you seem to be saying that because a human built it that means any semantic understanding it has came from humans.m-theory

    What a triumph. A computer got good at winning a game completely defined by abstract rules. And we pretend that it discovered what counts as "winning" without humans to make sure that it "knew" it had won. Hey, if only the machine had been programmed to run about the room flashing lights and shouting "In your face, puny beings", then we would be in no doubt it really understood/experienced/felt/observed/whatever what it had just done.

    Again I can make no sense of your "physics free" insistence here.m-theory

    So you read that Pattee reference before dismissing it?

    And again it is not clear that there is an ontic issue and the hand waving of obscure texts does not prove that there is one.m-theory

    I can only hand wave them if you won't even read them before dismissing them. And if you find them obscure, that simply speaks to the extent of your scholarship.

    I did not anticipate that you would insist that I define all the terms I use in technical detail.
    I would perhaps be willing to do this if I believed it would be productive, but because you disagree at a more fundamental level I doubt giving technical detail will further our exchange.
    m-theory

    I've given you every chance to show that you understand the sources you cite in a way that counters the detailed objections I've raised.

    Pomdp is the ground on which you said you wanted to make your case. You claimed it deals with my fundamental level disagreement. I'm waiting for you to show me that with the appropriate technical account. What more can I do than take you at your word when you make such a promise?
  • m-theory
    1.1k
    So it is not my position that there is pure semantics anywhere anytime. If semantics and syntax form a proper metaphysical strength dichotomy, they would thus be two faces of the one developing separation. In a strong sense, you could never have one without the other.apokrisis

    At least we agree on that point.
    These terms are interdependent.

    So you read that Pattee reference before dismissing it?apokrisis

    This does not make it any clearer what you mean when you are using this term.
    Again real world computation is not physics free, even if computation theory has thought experiments that ignore physical constraints.

    I can only hand wave them if you won't even read them before dismissing them. And if you find them obscure, that simply speaks to the extent of your scholarship.apokrisis

    No, it is not a mainstream view that the problem of the origin of life is a Godel incompleteness problem.
    That is a rather obscure claim.

    I've given you every chance to show that you understand the sources you cite in a way that counters the detailed objections I've raised.

    Pomdp is the ground on which you said you wanted to make your case. You claimed it deals with my fundamental level disagreement. I'm waiting for you to show me that with the appropriate technical account. What more can I do than take you at your word when you make such a promise?
    apokrisis

    We don't have a technical account of your issue.
    It was a mistake on my part to try to find a technical solution beforehand, I admit.
  • apokrisis
    7.3k
    This does not make it any clearer what you mean when you are using this term.
    Again real world computation is not physics free, even if computation theory has thought experiments that ignore physical constraints.
    m-theory

    Again, real world Turing computation is certainly physics-free if the hardware maker is doing his job right. If the hardware misbehaves - introduces physical variety in a way that affects the physics-free play of syntax - the software malfunctions. (Not that the semantics-free software could ever "know" this of course.)

    We don't have a technical account of your issue.
    It was a mistake on my part to try to find a technical solution beforehand, I admit.
    m-theory

    :-}
  • m-theory
    1.1k
    Again, real world Turing computation is certainly physics-free if the hardware maker is doing his job right. If the hardware misbehaves - introduces physical variety in a way that affects the physics-free play of syntax - the software malfunctions. (Not that the semantics-free software could ever "know" this of course.)apokrisis

    And again, the same is true of biological systems: biology will not function properly, and will die, without a given range of environmental stability.

    Perhaps you mean to suggest that the range that life can operate in is greater than the range current hardware can operate in and that this proves that current hardware cannot produce a general purpose A.I.

    You will have to forgive me if I find that line to be a rather large leap, and not so straightforward as you take for granted.

    :-}apokrisis

    Well your sources have not been very clear in the formal sense, at least not to me.
    And you have not done much except say that I don't understand your sources, or my own, when I attempt to address the issues you believe your sources raise.

    So, to quote von Neumann, what is the point of me being precise if I don't know what I am talking about?

    But suppose we concede the point that evolution and the origin of life have a Godel incompleteness problem, and let's ask what that would imply about computational theories of evolution, the origin of life, or even the mind.
    Chaitin, I believe, offers a good example of the computational view surrounding that issue.

    Of course such views are not so pessimistic as the one you have taken.

    Here is another video of Chaitin offering a computational rebuttal to the notion that computation does not apply to evolution.
  • apokrisis
    7.3k
    You will have to forgive me if I find that line to be a rather large leap, and not so straightforward as you take for granted.m-theory

    Only because you stubbornly misrepresent my position.

    So, to quote von Neumann, what is the point of me being precise if I don't know what I am talking about?m-theory

    Exactly. Why say pomdp sorts all your problems when it is now clear that you have no technical understanding of pomdp?

    Here is another video of Chaitin offering a computational rebuttal to the notion that computation does not apply to evolution.m-theory

    Forget youtube videos. Either you understand the issues and can articulate the relevant arguments or you are pretending to expertise you simply don't have.
  • m-theory
    1.1k
    Forget youtube videos. Either you understand the issues and can articulate the relevant arguments or you are pretending to expertise you simply don't have.apokrisis

    I never claimed to be an expert.
    Those are your words not mine.
    I claimed that pomdp is a possible solution to a problem of infinite regress.
    Once again, you fail to correct me on that error.
    Perhaps you mean to argue that pomdp does not solve your particular infinite regress problem.
    Ok fine, forgive me for suggesting that it might.
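
    For readers unfamiliar with the term: a pomdp (partially observable Markov decision process) maintains a belief distribution over hidden states and updates it by Bayes' rule after each action and observation. Here is a minimal sketch of that update, with toy numbers of my own invention rather than anything drawn from a source in this thread:

```python
# Minimal POMDP belief-state update (toy model, illustrative only).
# Bayes' rule: b'(s') is proportional to O(o | s', a) * sum_s T(s' | s, a) * b(s)

def belief_update(belief, T, O, action, obs):
    """One Bayesian belief update for a discrete POMDP."""
    n = len(belief)
    # Predict: push the current belief through the transition model.
    predicted = [sum(T[action][s][s2] * belief[s] for s in range(n))
                 for s2 in range(n)]
    # Correct: weight each predicted state by the observation likelihood.
    unnorm = [O[action][s2][obs] * predicted[s2] for s2 in range(n)]
    total = sum(unnorm)
    return [p / total for p in unnorm]

# Two hidden states, one action, two observations (made-up numbers).
T = {0: [[0.9, 0.1], [0.2, 0.8]]}   # T[a][s][s']
O = {0: [[0.7, 0.3], [0.1, 0.9]]}   # O[a][s'][o]
b = belief_update([0.5, 0.5], T, O, action=0, obs=0)
print(b)
```

    Whether this machinery actually addresses the infinite regress issue at stake here is, of course, exactly what is in dispute.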

    Also, what a crude fallacy to imply that one must be an authority to make valid points.
    I expect better of you.

    Never mind that it does not follow that if the origin of life is not finitely computable then necessarily the mind is not finitely computable.
    (You never connected these and those connections were not made readily clear in anything you have linked)

    Let's just assume you are right and say that one does necessarily follow from the other.
    Allow me to give a brief overview of Chaitin's approach to a Godel incompleteness problem in biology and its implications for computational theories of evolution.

    In a very abstract way Chaitin shows that a very generalized evolution can still result from a computational foundation (albeit in his model it is necessary to ignore certain physical constraints).

    Of course I would suggest the same applies to arguments about an incompleteness in the origin of life (And so does Chaitin).

    The implication for computational models would simply be that the role computation plays is more open ended, and not at all that computation is excluded, as you have attempted to imply.

    Chaitin discusses at length the philosophical issues you have raised and how he believes they are related to computational theories in biology.
    It is worth reviewing for those who are interested in how the issues raised by von Neumann, Godel, and Turing are related to computational theories of biology.

    Overall, Chaitin argues that one does not simply discard computation on the grounds of Godel incompleteness; rather, one ought to embrace the limitlessness that incompleteness implies.


    Your source even name drops Chaitin.
    So I thought you would be more receptive to hearing him out for that reason.

    At any rate, thanks apokrisis for your input, and thanks for challenging my views.
    Your effort has not been completely in vain; I believe from our interaction I have a deeper appreciation of the topic than I did when I began this thread.
  • tom
    1.5k
    Suppose we concede the point that evolution and the origin of life have a Godel incompleteness problem, and let's ask what that would imply about computational theories of evolution, the origin of life, or even the mind.
    Chaitin, I believe, offers a good example of the computational view surrounding that issue.
    m-theory

    As I mentioned in another thread:

    "Von Neumann showed that an accurate self-reproducer must consist of a replicator and a vehicle."

    Of course Chaitin takes the computational view of life, because Von Neumann *proved* that it cannot be otherwise, before the discovery of DNA.
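
    A toy programming analogue of that replicator/vehicle split is a quine: a passive data string (the tape) that is both copied and interpreted by active code (the constructor) to print an exact copy of the program. A minimal Python sketch of the idea - my own illustration, not von Neumann's actual construction:

```python
# The string s is the passive "replicator" (the tape that gets copied);
# the print line is the active "vehicle" that interprets the tape.
# Running these two lines prints an exact copy of the two lines.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

    Executing the printed output prints the same text again, which is the fixed-point property von Neumann's construction formalises in far greater generality.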
  • apokrisis
    7.3k
    In a very abstract way Chaitin shows that a very generalized evolution can still result from a computational foundation (albeit in his model it is necessary to ignore certain physical constraints).m-theory

    I listened to the podcast and it is indeed interesting but does the opposite of supporting what you appear to claim.

    On incompleteness, Chaitin stresses that it shows that machine-like or syntactic methods of deriving maths are a pure math myth. All axiom forming involves what Peirce terms the creative abductive leap. So syntax has to begin with semantics. It doesn't work the other way round as computationalists might hope.

    As Chaitin says, the problem for pure maths was that it held the view that all maths could be derived from some finite set of axioms. And instead, creativity says axiom production is what is infinitely open ended in potential. So that requires the further thing of some general constraint on such troublesome fecundity. The problem - for life, as Von Neumann and Rosen and Pattee argue mathematically - is that biological systems have to be able to close their own openness. They must be able to construct the boundaries to causal entailment that the epistemic cut represents.

    As a fundamental problem for life and mind, this is not even on the usual computer science radar.

    Then Chaitin's theorem is proven in a physics-free context. He underlines that point himself, and says connecting the theorem to the real world is an entirely other matter.

    But Chaitin is trying to take a biologically realistic approach to genetic algorithms. And thus his busy beaver problem is set up in a toy universe with the equivalent of an epistemic cut. The system has a running memory state that can have point mutations. An algorithm is written to simulate the physical randomness of the real world and make this so.

    Then the outcome of the mutated programme is judged against the memory state which simulates the environment on the other side of the epistemic cut. The environment says either this particular mutant is producing the biggest number ever seen or it's not, in which case it dies and is erased from history.

    So the mutating programs are producing number-producing programs. In Pattee's terms, they are the rate independent information side of the equation. Then out in the environment, the numbers must be produced so they can be judged against a temporal backdrop where what might have been the most impressive number a minute ago is already now instead a death sentence. So that part of the biologically realistic deal is the rate dependent dynamics.
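
    The selection scheme just described can be caricatured in a few lines of code. This is only my own drastically simplified sketch - the organism is a list of digits rather than an arbitrary program, and fitness is just the product of the digits, so none of Chaitin's halting-oracle machinery is needed - but it shows the hill-climbing shape of the model: random point mutations on the information side, with the environment erasing the old champion the moment a mutant names a bigger number.

```python
import random

random.seed(1)  # fixed seed so the toy run is reproducible

def fitness(org):
    """The number this toy "program" names: the product of its digits."""
    n = 1
    for d in org:
        n *= d
    return n

org = [1, 1, 1, 1, 1]                    # initial organism
for step in range(1000):
    mutant = org[:]
    # Point mutation: the rate-independent "information" side.
    mutant[random.randrange(len(mutant))] = random.randint(1, 9)
    # The environment judges the number produced; losers are erased.
    if fitness(mutant) > fitness(org):
        org = mutant
print(org, fitness(org))
```

    In Chaitin's actual model the organisms are arbitrary programs and judging "the biggest number ever seen" requires an oracle for halting, which is where the incompleteness results bite.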
  • m-theory
    1.1k
    I listened to the podcast and it is indeed interesting but does the opposite of supporting what you appear to claim.apokrisis

    Told you it was good.

    On incompleteness, Chaitin stresses that it shows that machine-like or syntactic methods of deriving maths are a pure math myth. All axiom forming involves what Peirce terms the creative abductive leap. So syntax has to begin with semantics. It doesn't work the other way round as computationalists might hope.apokrisis

    No. On incompleteness, Chaitin stresses that we simply have to conclude that there is no finite set of axioms to describe all mathematical truths. He does not suggest that computational theories are myths. In fact, he stresses quite the opposite. I mean, come on, the guy built a toy model of evolution based on computation. Chaitin is very much a proponent of computation, not against it.

    As Chaitin says, the problem for pure maths was that it held the view that all maths could be derived from some finite set of axioms. And instead, creativity says axiom production is what is infinitely open ended in potential. So that requires the further thing of some general constraint on such troublesome fecundity. The problem - for life, as Von Neumann and Rosen and Pattee argue mathematically - is that biological systems have to be able to close their own openness. They must be able to construct the boundaries to causal entailment that the epistemic cut represents.apokrisis

    As a fundamental problem for life and mind, this is not even on the usual computer science radar.apokrisis


    Chaitin does not mention any epistemic cut, and neither does von Neumann in anything I have read.
    That term seems to be something Pattee coined himself.
    It probably has not caught on in mainstream science because there is not a clear definition for it.

    Then Chaitin's theorem is proven in a physics-free context. He underlines that point himself, and says connecting the theorem to the real world is an entirely other matter.

    But Chaitin is trying to take a biologically realistic approach to genetic algorithms. And thus his busy beaver problem is set up in a toy universe with the equivalent of an epistemic cut. The system has a running memory state that can have point mutations. An algorithm is written to simulate the physical randomness of the real world and make this so.

    Then the outcome of the mutated programme is judged against the memory state which simulates the environment on the other side of the epistemic cut. The environment says either this particular mutant is producing the biggest number ever seen or it's not, in which case it dies and is erased from history.

    So the mutating programs are producing number-producing programs. In Pattee's terms, they are the rate independent information side of the equation. Then out in the environment, the numbers must be produced so they can be judged against a temporal backdrop where what might have been the most impressive number a minute ago is already now instead a death sentence. So that part of the biologically realistic deal is the rate dependent dynamics.
    apokrisis

    Yeah, Chaitin stresses that more research in the area is a worthy pursuit, and hopes that people begin creating real simulations using his work.
    I hope so too.
  • Punshhh
    2.6k

    Yes, the thinking mind of a human could be described as an algorithm. But I don't think that this is the whole story; there is consciousness and being, which do not require computation in the brain in the same sense, in that there is a metabolic component and possibly, on some philosophical analyses, an immaterial component.

    I presume that you imagine a working A.I. device, do you also imagine it having consciousness?