• m-theory
    1.1k
    So every time I point to a fundamental difference, your reply is simply that differences can be minimised. And when I point out that minimising those differences might be physically impractical, you wave that constraint away as well. It doesn't seem as though you want to take a principled approach to your OP.apokrisis
    No, that is not it at all.

    I was seeking to make a distinction between simulating a human being and simulating general intelligence.

    I did concede that if we must digitally simulate a human at nano-scale before we can hope to simulate a mind, then that would be a monumental task.
    And perhaps you are also correct that it may be impossible.

    I just don't use that criterion.
    I was using the criterion that if a computer could learn any problem, and any solution to a problem, that a human could, then that would be a mind in the general sense.
    Why, if a computer could do that, is it not a mind in the general sense?

    I believe there are two distinct meanings to the term mind.
    One meaning is the intimately personal and rich inner self.
    The second meaning is the sense in which others have minds...if we took away all the differences of personal minds and focused on the general template of what that term means, then the problem of creating a mind is, I would argue, greatly reduced.

    Anyway, another way of phrasing the same challenge to your presumption there is no great problem here: can you imagine an algorithm that could operate usefully on unstable hardware? How could an algorithm function in the way you require if it's next state of output was always irreducibly uncertain? In what sense would such a process still be algorithmic in your book if every time it computed some value, there would be no particular reason for the calculation to come out the same?apokrisis

    I believe we can argue the same thing of a human.
    If our brain suffers trauma and damage, it can result in severe impairment.

    I just don't agree that the top-down approach is necessarily faulty all the way up until a nano-scale human simulation is achieved.

    I suspect that somewhere in the middle of top down design something mindlike should be possible.

    The reason I believe this is that many of the human body's and brain's functions are autonomous of what we mean by the term mind. I understand your point that these systems feed back into consciousness/mind (though the extent to which they do is unclear), and that this contributes to what we call an individual person. But mind also has a more general meaning, and I am suggesting that should be achievable before we reach nano-scale human simulation.
  • apokrisis
    7.3k
    I was seeking to make a distinction between simulating a human being and simulating general intelligence....I was using the criterion of if a computer could learn any problem and or solution to a problem that a human could...m-theory

    Ok. But from my biophysical/biosemiotic perspective, a theory of general intelligence just is a theory of life, a theory of complex adaptive systems. You have to have the essence of that semiotic relation between symbols and matter built in from the smallest, simplest scales to have any "intelligence" at all.

    So yes, you are doing the familiar thing of trying to abstract away the routinised, mechanics-imitating, syntactical organisation that people think of as rational thought or problem solving. If you input some statement into a Searlean Chinese room or Turing test passing automaton, all that matters is that you get the appropriate output statement. If it sounded as though the machine knew what you were talking about, then the machine passes as "intelligent".

    So again, fine, it's easy to imagine building technology that is syntactic in ways that map some structure of syntax that we give it on to some structure of syntax that we then find meaningful. But the burden is on you to show why any semantics might arise inside the machine. What is your theory of how syntax produces semantics?

    Biology's theory is that of semiotics - the claim that an intimate relation between syntax and semantics is there from the get-go as symbol and matter, Pattee's epistemic cut between rate independent information and rate dependent dynamics. And this is a generic theory - one that explains life and mind in the same physicalist ontology.

    But computer science just operates on the happy assumption that syntax working in isolation from material reality will "light up" in the way brains "light up". There has never been any actual theory to back up this sci fi notion.

    Instead - given the dismal failure of AI for so long - the computer science tendency is simply to scale back the ambitions to the simplest stuff for machines to fake - those aspects of human thought which are the most abstractly syntactic as mental manipulations.

    If you just have numbers or logical variables to deal with, then hey, suddenly everything messy and real world is put at as great a distance as it can possibly be. Any schoolkid can learn to imitate a calculating engine - and demonstrate their essential humanness by being pretty bad, slow and error-prone at it, not to mention terminally bored.

    Then we humans invent an actual machine to behave like a machine and ... suddenly we are incredibly impressed at its potential. Already our pocket calculators exceed all but our most autistic of idiot savants in rapid, error-free, syntactical operation. We think if a pocket calculator can be this unnaturally robotic in its responses, then imagine how wonderfully conscious, creative, semantic, etc, a next generation quantum supercomputer is going to be. Or some such inherently self-contradicting shit.
  • Wayfarer
    22.5k
    Hey, don't sweat it, and I apologise also if I seemed brusque.
  • BC
    13.6k
    The question of whether A.I. is here, or will be here, when, how, what and where...

    Two things: First, electronic (dry) equipment that can produce the verisimilitude of intelligence might be here now, or will be soon. It might not be that the algorithms are so good as that our desire to hail non-human intelligence is so great. I don't know why we so desire this mirror. Go, Jeopardy, chess... whatever complex game we ask it to play (or ask it to learn how to play, in the case of Go) is a very limited (but none the less impressive) achievement. Still...

    Second, I think there is a strong tendency to underrate animal (wet) intelligence. It isn't learning how to recite Beowulf from memory that is the only impressive human achievement. It's also remembering the odor of the room where we learned Anglo-Saxon and now feel nostalgia for that faint musty odor when we recite Beowulf, that's distinctive. [SEE NOTE] Dry intelligence can replay Beowulf, but it can't connect odors and texts and feelings. It can't feel. It can't smell.

    Dry intelligence can't connect with the feelings of a dog excited by the walk it's about to take. Dry intelligence can't lay on the floor and determine whether the guy walking around is getting ready to go to work (alone) or is going to take the dog for a walk. Dogs can do that. They can tell the difference between routine getting ready to go to work and getting ready to go out of town (which the dog will probably disapprove of, considering what happened the last time "they" left). So can cats.

    Wet brains and wet intelligence have developed over an exceedingly long time. Wet brains aren't the only defense animals have, but they are remarkably effective. A rat's wet brain does, and will, outperform Deep Blue and all of its Blue successors, Screwed Blue, Dude Blue, Rude Blue, etc., because it has capabilities that cannot be reproduced by an algorithm.

    It's not the algorithm, it's the structure of the body and its history.

    [NOTE] I never learned Anglo Saxon and I can't recite Beowulf. I can pretend I did, and even feel like I did. Betcha Deep Blue can't do that.
  • Metaphysician Undercover
    13.1k
    I like that distinction, "wet intelligence" versus "dry intelligence".
  • m-theory
    1.1k
    Ok. But from my biophysical/biosemiotic perspective, a theory of general intelligence just is a theory of life, a theory of complex adaptive systems. You have to have the essence of that semiotic relation between symbols and matter built in from the smallest, simplest scales to have any "intelligence" at all.

    So yes, you are doing the familiar thing of trying to abstract away the routinised, mechanics-imitating, syntactical organisation that people think of as rational thought or problem solving. If you input some statement into a Searlean Chinese room or Turing test passing automaton, all that matters is that you get the appropriate output statement. If it sounded as though the machine knew what you were talking about, then the machine passes as "intelligent".
    apokrisis

    I don't mean to sweep away your criticisms.
    I freely admit that if we are using a biological metric of life, then we are nowhere close to simulating intelligence.
    If simulating biology is the criterion, we can safely conclude machines don't think.

    So again, fine, it's easy to imagine building technology that is syntactic in ways that map some structure of syntax that we give it on to some structure of syntax that we then find meaningful. But the burden is on you to show why any semantics might arise inside the machine. What is your theory of how syntax produces semantics?apokrisis

    I argue that because this algorithm has to learn from scratch it must discover its own semantics within the problem and solution to that problem.

    Take the Go example: AlphaGo would not be able to learn to play the game as well as humans unless it were forming semantics.
    Because it has to learn the problem and learn the solution, often at the same time, it learns to have biases about different syntactical relationships within the context of the problem and the solution.
    Not all syntactical relationships are equal within the context of what the problem is and what the solution is.
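
    To make that concrete, here is a toy sketch (my own illustration, nothing like AlphaGo's actual code) of how unequal weightings over state-move pairs can emerge purely from game outcomes:

        import random
        from collections import defaultdict

        # Toy Monte Carlo-style value update: every (state, move) pair starts
        # equal, and pairs that appear in winning games get weighted up while
        # pairs from losing games get weighted down.
        values = defaultdict(float)      # learned "bias" per (state, move) pair
        LEARNING_RATE = 0.1              # illustrative constant, not a real setting

        def choose_move(state, legal_moves):
            # Prefer the moves the algorithm has come to weight most highly.
            best = max(values[(state, m)] for m in legal_moves)
            return random.choice([m for m in legal_moves if values[(state, m)] == best])

        def update_from_game(trajectory, outcome):
            # outcome is +1 for a win, -1 for a loss; every visited pair is
            # nudged toward it, so not all relationships end up weighted equally.
            for state, move in trajectory:
                values[(state, move)] += LEARNING_RATE * (outcome - values[(state, move)])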

    You may argue that it is a rather crude and primitive form of semantics when compared to humans and perhaps you are right...but it is still a form of semantics.

    I might use another analogy.
    Consider the task of creating a robot hand that is as dexterous as the human hand.
    You might argue that the finished product cannot sense what it grasps, that it has no nerves, no skin, no bones, no blood coursing through it, and then claim that this is not a hand.

    But if we ask the question of whether or not it is a hand by a different criterion, whether or not it can perform any action a human hand can perform, then the problem is very different.

    Instead of trying to replicate the human hand we are trying to replicate the utility of a human hand, and that is a far less difficult engineering goal.

    So again...this algorithm, if it does have semantic understanding...it does not and never will have human semantic understanding.
    But I do not agree that we can be sure it won't be able to perform at the human level of utility of semantic understanding.

    Biology's theory is that of semiotics - the claim that an intimate relation between syntax and semantics is there from the get-go as symbol and matter, Pattee's epistemic cut between rate independent information and rate dependent dynamics. And this is a generic theory - one that explains life and mind in the same physicalist ontology.apokrisis

    Pattee's epistemic cut was not very clear to me, and he seems to have coined this term.
    Do you have any references for epistemic cut?
    I did not find it as an entry in the Stanford Encyclopedia of Philosophy.

    I tried to read through your link but got hung up on that term; the definition is not clear to me.

    But computer science just operates on the happy assumption that syntax working in isolation from material reality will "light up" in the way brains "light up". There has never been any actual theory to back up this sci fi notion.

    Instead - given the dismal failure of AI for so long - the computer science tendency is simply to scale back the ambitions to the simplest stuff for machines to fake - those aspects of human thought which are the most abstractly syntactic as mental manipulations.

    If you just have numbers or logical variables to deal with, then hey, suddenly everything messy and real world is put at as great a distance as it can possibly be. Any schoolkid can learn to imitate a calculating engine - and demonstrate their essential humanness by being pretty bad, slow and error-prone at it, not to mention terminally bored.
    apokrisis

    Again I recall my hand example.
    It is exceedingly difficult to simulate the human hand to the finest detail.
    It is not nearly so difficult to engineer a machine that replicates the utility of a human hand.

    I believe a similar thing applies to A.I.

    Then we humans invent an actual machine to behave like a machine and ... suddenly we are incredibly impressed at its potential. Already our pocket calculators exceed all but our most autistic of idiot savants in rapid, error-free, syntactical operation. We think if a pocket calculator can be this unnaturally robotic in its responses, then imagine how wonderfully conscious, creative, semantic, etc, a next generation quantum supercomputer is going to be. Or some such inherently self-contradicting shit.apokrisis

    Well, again, I understand that you believe there is a fundamental problem that engineering human-level A.I. faces.
    I will try to read through Pattee's work again to see if I can address that point.
  • m-theory
    1.1k
    Second, I think there is a strong tendency to underrate animal (wet) intelligence. It isn't learning how to recite Beowulf from memory that is the only impressive human achievement. It's also remembering the odor of the room where we learned Anglo-Saxon and now feel nostalgia for that faint musty odor when we recite Beowulf, that's distinctive. [SEE NOTE] Dry intelligence can replay Beowulf, but it can't connect odors and texts and feelings. It can't feel. It can't smell.Bitter Crank

    This is more a matter of sensory apparatus; dry intelligence would be able to record and recall this input if it had the sensors to capture it.

    Dry intelligence can't connect with the feelings of a dog excited by the walk it's about to take. Dry intelligence can't lay on the floor and determine whether the guy walking around is getting ready to go to work (alone) or is going to take the dog for a walk. Dogs can do that. They can tell the difference between routine getting ready to go to work and getting ready to go out of town (which the dog will probably disapprove of, considering what happened the last time "they" left). So can cats.Bitter Crank

    This algorithm does have primitive feelings.
    It understands from experience that there is reward in the world and there is penalty in the world.
    It also understands that which of these it experiences will depend on the choices it makes in its environment.
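
    Roughly, what I mean by reward and penalty can be sketched like this (a minimal illustration of the general reinforcement-learning idea, not DeepMind's code):

        from collections import defaultdict

        action_value = defaultdict(float)   # how much the agent has come to "like" each choice
        STEP_SIZE = 0.1                     # illustrative constant

        def experience(action, reward):
            # A positive reward raises the value of the chosen action; a
            # negative reward (a penalty) lowers it.
            action_value[action] += STEP_SIZE * (reward - action_value[action])

        def preferred_action(actions):
            # What the agent experiences next depends on what it chooses,
            # and what it chooses depends on what it has experienced so far.
            return max(actions, key=lambda a: action_value[a])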

    Wet brains and wet intelligence have developed over an exceedingly long time. Wet brains aren't the only defense animals have, but they are remarkably effective. A rat's wet brain does, and will, outperform Deep Blue and all of its Blue successors, Screwed Blue, Dude Blue, Rude Blue, etc., because it has capabilities that cannot be reproduced by an algorithm.

    It's not the algorithm, it's the structure of the body and its history.

    [NOTE] I never learned Anglo Saxon and I can't recite Beowulf. I can pretend I did, and even feel like I did. Betcha Deep Blue can't do that.
    Bitter Crank

    This is a different example from Deep Blue because it employs the above-mentioned reinforcement learning techniques.
    Deep Blue had to be programmed with what the problem of chess was; that program had to be hand-crafted by human engineers.
    AlphaGo had to have its ability to learn hand-crafted, but once that was done it learned what the problem of Go was, learned what the solution to that problem is (to win), and it learned all this from scratch.

    This algorithm is also different because it is not limited to playing Go.
    Deep Blue can only play chess unless it is reconfigured by human programmers (it would have to use a different algorithm to learn a different game, and it would not perform well at Go because Go has far too many possible moves to solve with brute-force techniques).
    DeepMind, on the other hand, can learn to play Atari games in the same way it learned to play Go.

    This algorithm is a breakthrough because, so far, it appears that the algorithm can be applied to any problem in general.
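
    As a sketch of what I mean by general (my own toy loop, with a made-up environment interface, not DeepMind's architecture): the same learning code can be pointed at any game that exposes states, legal actions, and rewards.

        import random
        from collections import defaultdict

        def train(env, episodes=1000, alpha=0.1, gamma=0.9, epsilon=0.1):
            # Nothing below mentions chess, Go, or Atari; the game-specific
            # part lives entirely inside the env object.
            Q = defaultdict(float)
            for _ in range(episodes):
                state, done = env.reset(), False
                while not done:
                    actions = env.legal_actions(state)
                    if random.random() < epsilon:
                        action = random.choice(actions)                      # explore
                    else:
                        action = max(actions, key=lambda a: Q[(state, a)])   # exploit
                    next_state, reward, done = env.step(action)
                    future = max((Q[(next_state, a)] for a in env.legal_actions(next_state)),
                                 default=0.0)
                    Q[(state, action)] += alpha * (reward + gamma * future - Q[(state, action)])
                    state = next_state
            return Q

        # Hypothetical usage: train(GoEnvironment()) or train(AtariEnvironment("breakout"))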
  • BC
    13.6k
    Designing a computer to learn things is an advance on our part, for sure. I've been using computers of one kind or another for the last 36 years, and not one of them has learned a damned thing. Granted, these were slightly less powerful than DeepMind. An early PC is to DeepMind as a cockroach is to an elephant.

    If computers are ever to be "intelligent", whatever that means, they certainly will have to have the capacity to learn without human instigation. That means, I suppose, that they have to have some sort of will. They will also need some independent mobility, to take their sensory apparatus on the road to find things that they want to learn about. Will and wishes imply some sort of feelings, like curiosity and satisfaction. When they arrive, we will all be watched over by machines of loving grace. [BBC]
  • apokrisis
    7.3k
    I argue that because this algorithm has to learn from scratch it must discover its own semantics within the problem and solution to that problem.m-theory

    That is the question. Does it actually learn its own semantics or is there a human in the loop who is judging that the machine is performing within some acceptable range? Who is training the machine and deciding that yes, it's got the routine down pat?

    The thing is that all syntax has to have an element of frozen semantics in practice. Even a Turing Machine is semantic in that it must have a reading head that can tell what symbol it is looking at so it can follow its rules. So semantics gets baked in - by there being a human designer who can build the kind of hardware which ensures this happens in the way it needs to.
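
    To spell that out: even the simplest Turing machine is nothing but a rule table plus a head that matches symbols against it, and whatever the symbols mean is frozen into the table by its designer. A toy example (my own, just to make the point concrete):

        # Rule table for a trivial machine that flips every bit on the tape.
        # The head "recognises" a symbol only in the sense that it looks it up.
        RULES = {
            # (state, symbol read): (symbol to write, head move, next state)
            ("flip", "0"): ("1", +1, "flip"),
            ("flip", "1"): ("0", +1, "flip"),
            ("flip", "_"): ("_", 0, "halt"),   # blank cell: stop
        }

        def run(tape, state="flip", head=0):
            tape = list(tape) + ["_"]
            while state != "halt":
                write, move, state = RULES[(state, tape[head])]
                tape[head] = write
                head += move
            return "".join(tape).rstrip("_")

        print(run("0110"))   # -> "1001": pure rule-following, no grasp of what "0" or "1" mean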

    So you could look at a neural network as a syntactical device with a lot of baked-in semantics. You are starting to get some biological realism in that open learning of that kind takes place. And yet inside the black box of circuits, it is still all a clicking and whirring of syntax as no contact with any actual semantics - no regulative interactions with material instability - are taking place.

    Of course my view relies on a rather unfamiliar notion of semantics perhaps. The usual view is based on matter~mind dualism. Meaning is held to be something "mental" or "experiential". But then that whole way of framing the issue is anti-physicalist and woo-making.

    So instead, a biosemiotic view of meaning is about the ability of symbol systems - memory structures - to regulate material processes. The presumption is that materiality is unstable. The job of information is to constrain that instability to produce useful work. That is what mindfulness is - the adaptive constraint of material dynamics.

    And algorithms are syntax with any semantics baked in. The mindful connection to materiality is severed by humans doing the job of underpinning the material stability of the hardware that the software runs on. There is no need for instability-stabilising semantics inside the black box. An actual dualism of computational patterns and hotly-switching transistor gates has been manufactured by humans for their own purpose.

    Consider the task of creating a robot hand that is as dexterous as the human hand.m-theory

    Yes. But the robot hand is still a scaled-up set of digital switches. And a real hand is a scaled-up set of molecular machines. So the difference is philosophically foundational even if we can produce functional mimicry.

    At the root of the biological hand is a world where molecular structures are falling apart almost as soon as they self-assemble. The half-life of even a sizeable cellular component like a microtubule is about 7 minutes. So the "hardware" of life is all about a material instability being controlled just enough to stay organised and directed overall.

    You are talking about a clash of world views here. The computationalist likes to think biology is a wee bit messy - and it's amazing wet machines can work at all really. A biologist knows that a self-organising semiotic stability is intrinsically semantic and adaptive. Biology knows itself, its material basis, all the way down to the molecules that compose it. And so it is no surprise that computers are so autistic and brittle - the tiniest physical bug can cause the whole machine to break down utterly. The smallest mess is something a computer algorithm has no capacity to deal with.

    (Thank goodness again for the error correction routines that human hardware designers can design in as the cotton wool buffering for these most fragile creations in all material existence).

    So again...this algorithm, if it does have semantic understanding...it does not and never will have human semantic understanding.m-theory

    But the question is how can an algorithm have semantic understanding in any foundational sense when the whole point is that it is bare isolated syntax?

    Your argument is based on a woolly and dualistic notion of semantics. Or if you have some other scientific theory of meaning here, then you would need to present it.

    Pattee's epistemic cut was not very clear to me, and he seems to have coined this term.m-theory

    Pattee has written a ton of papers which you can find yourself if you google his name and epistemic cut.

    This is one with a bit more of the intellectual history.... http://www.informatics.indiana.edu/rocha/publications/pattee/pattee.html

    But really, Pattee won't make much sense unless you do have a strong grounding in biological science. And much of the biology is very new. If you want to get a real understanding of how different biology is in its informational constraint of material instability, then this a good new pop sci book....

    http://lifesratchet.com/
  • m-theory
    1.1k

    Even if this algorithm makes that possible, it would still take quite a while to teach it anything resembling the common sense we expect of developed humans.
  • m-theory
    1.1k
    That is the question. Does it actually learn its own semantics or is there a human in the loop who is judging that the machine is performing within some acceptable range? Who is training the machine and deciding that yes, it's got the routine down pat?apokrisis

    Again, with regular humans there is a human in the loop.
    As you grew from an infant to a child it was not in a vacuum...you learned from the expectations of others.

    But yes, sometimes humans have to intervene and give guidance.

    However, all this amounts to is changes in reward and penalty value assignments.
    If DeepMind gets stuck on a problem in which it needs to explore more to be efficient, then the value of the reward for exploring is tweaked.
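
    A sketch of the kind of tweak I mean (my own illustration of a count-based exploration bonus; the constants and the exact scheme are assumptions, not DeepMind's published method):

        import math
        from collections import defaultdict

        visit_counts = defaultdict(int)
        EXPLORATION_BONUS = 0.5   # the value that gets turned up when the agent is stuck

        def shaped_reward(state, env_reward):
            # Rarely visited states pay a little extra, so exploring becomes
            # more rewarding without changing the underlying task.
            visit_counts[state] += 1
            return env_reward + EXPLORATION_BONUS / math.sqrt(visit_counts[state])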

    The thing is that all syntax has to have an element of frozen semantics in practice. Even a Turing Machine is semantic in that it must have a reading head that can tell what symbol it is looking at so it can follow its rules. So semantics gets baked in - by there being a human designer who can build the kind of hardware which ensures this happens in the way it needs to.

    So you could look at a neural network as a syntactical device with a lot of baked-in semantics. You are starting to get some biological realism in that open learning of that kind takes place. And yet inside the black box of circuits, it is still all a clicking and whirring of syntax as no contact with any actual semantics - no regulative interactions with material instability - are taking place.

    Of course my view relies on a rather unfamiliar notion of semantics perhaps. The usual view is based on matter~mind dualism. Meaning is held to be something "mental" or "experiential". But then that whole way of framing the issue is anti-physicalist and woo-making.

    So instead, a biosemiotic view of meaning is about the ability of symbol systems - memory structures - to regulate material processes. The presumption is that materiality is unstable. The job of information is to constrain that instability to produce useful work. That is what mindfulness is - the adaptive constraint of material dynamics.

    And algorithms are syntax with any semantics baked in. The mindful connection to materiality is severed by humans doing the job of underpinning the material stability of the hardware that the software runs on. There is no need for instability-stabilising semantics inside the black box. An actual dualism of computational patterns and hotly-switching transistor gates has been manufactured by humans for their own purpose.
    apokrisis

    I am arguing that the semantics in this algorithm's case are not simply baked in, because it can learn on its own to shift biases as it discovers new information about its environment and itself in relation to its environment.

    I don't agree with the notion that humans have semantics from birth (perhaps some); semantics is something we learn not just by ourselves but from others.

    Semantics is a dynamic thing, and this is the first example of an algorithm with a robust dynamic semantic capability.
    That is to say, it is flexible enough that it can handle the dynamic semantics of a variety of tasks with a high degree of autonomy.

    This system can handle instability in its environment (I gave the example of a system that it learned to regulate).

    Yes. But the robot hand is still a scaled-up set of digital switches. And a real hand is a scaled-up set of molecular machines. So the difference is philosophically foundational even if we can produce functional mimicry.

    At the root of the biological hand is a world where molecular structures are falling apart almost as soon as they self-assemble. The half-life of even a sizeable cellular component like a microtubule is about 7 minutes. So the "hardware" of life is all about a material instability being controlled just enough to stay organised and directed overall.

    You are talking about a clash of world views here. The computationalist likes to think biology is a wee bit messy - and it's amazing wet machines can work at all really. A biologist knows that a self-organising semiotic stability is intrinsically semantic and adaptive. Biology knows itself, its material basis, all the way down to the molecules that compose it. And so it is no surprise that computers are so autistic and brittle - the tiniest physical bug can cause the whole machine to break down utterly. The smallest mess is something a computer algorithm has no capacity to deal with.

    (Thank goodness again for the error correction routines that human hardware designers can design in as the cotton wool buffering for these most fragile creations in all material existence).
    apokrisis

    My point was that there is a difference between engineering to replicate a system one-to-one and designing to accomplish one-to-one utility.
    We can often achieve the same utility without modeling the exact system.

    But I will concede your main point here, that a human hand can adapt by the process of evolution as a consequence of its complicated systems, whereas a robot hand will never be able to adapt in that way.
    I don't see that as a major concession, because evolution takes so long to produce describable adaptation and because we do not necessarily want a robot model of the human hand to adapt under environmental pressure over the course of many, many generations.


    But the question is how can an algorithm have semantic understanding in any foundational sense when the whole point is that it is bare isolated syntax?

    Your argument is based on a woolly and dualistic notion of semantics. Or if you have some other scientific theory of meaning here, then you would need to present it.
    apokrisis

    I tried to explain that there is a general sense of the term mind as something others have.
    That is, a term that means a general way of thinking.
    I believe this sense of the term mind is an algorithm, and it is how we account for the fact that vastly different people can agree on semantics...because they learn the same problems, form the same solutions to those problems, and are taught by people who have the same general algorithm.

    I am suggesting that there is a single algorithm for general intelligence that not only we but also others possess, and that this is how we can answer, with a yes or a no and without error, the question of whether or not we have a mind.

    If there is no general intelligence algorithm, it is quite a curious thing that so many different individuals and different cultures should share so much in common.
    One would expect that if each mind were not a general template upon which the individual is formed, but rather its own unique iteration, then there would be much more variety, and the minds of other humans would seem utterly alien to us more often than they would seem similar to ourselves.
  • m-theory
    1.1k
    Pattee has written a ton of papers which you can find yourself if you google his name and epistemic cut.

    This is one with a bit more of the intellectual history.... http://www.informatics.indiana.edu/rocha/publications/pattee/pattee.html

    But really, Pattee won't make much sense unless you do have a strong grounding in biological science. And much of the biology is very new. If you want to get a real understanding of how different biology is in its informational constraint of material instability, then this a good new pop sci book....
    apokrisis

    I have read some more and you are right he is very technically laden.
    I was hoping for a more generalized statement of the problem of the epistemic cut because I believe that the Partially observable Markov decision process might be a very general solution to establishing an epistemic cut between the model and the reality in an A.I. agent.
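
    For what it's worth, the part of the POMDP picture I have in mind is the belief update (a minimal sketch under my own reading, not anything from Pattee): the agent never touches the hidden state directly, it only maintains a probability distribution over hidden states that it corrects from observations, which is at least suggestive of a cut between model and reality.

        def update_belief(belief, action, observation, transition, obs_model):
            """belief: {state: prob}; transition[(s, action)]: {s2: prob};
            obs_model[(s2, action)]: {observation: prob}. Toy dictionaries only."""
            states = {s2 for probs in transition.values() for s2 in probs}
            new_belief = {}
            for s2 in states:
                # Predict where we might be after acting...
                predicted = sum(belief[s] * transition[(s, action)].get(s2, 0.0) for s in belief)
                # ...then correct by how well that state explains what was observed.
                new_belief[s2] = obs_model[(s2, action)].get(observation, 0.0) * predicted
            total = sum(new_belief.values())
            return {s: p / total for s, p in new_belief.items()} if total else new_belief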

    I noticed he draws on the work of von Neumann, so I will pursue that as well.

    Thanks again for your posts and again you have given me a lot to think about.
  • apokrisis
    7.3k
    I have read some more and you are right he is very technically laden.m-theory

    Right. Pattee requires you to understand physics as well as biology. ;) But that is what makes him the most rigorous thinker in this area for my money.

    I was hoping for a more generalized statement of the problem of the epistemic cut because I believe that the Partially observable Markov decision process might be a very general solution to establishing an epistemic cut between the model and the reality in an A.I. agent.m-theory

    Good grief. Not Mr Palm Pilot and his attempt to reinvent Bayesian reasoning as a forward modelling architecture?
  • m-theory
    1.1k
    Right. Pattee requires you to understand physics as well as biology. ;) But that is what makes him the most rigorous thinker in this area for my money.apokrisis
    What I see as his main issue is that he believes there is something like the measurement problem when dealing with the origin of life.
    He seems to use the term epistemic cut synonymously with the measurement problem.

    Perhaps he is correct.

    To solve the problem of artificial life in general, sure, he may have a point...however, the goal in the field of A.I. is not to recreate life artificially but to create artificial intelligence.
    The problem of general artificial intelligence is not, I believe, equivalent to the problem of artificial life.
    So I don't agree that we have to solve the measurement problem in order to solve the problem of making general-purpose A.I.

    If so, and if the measurement problem is undecidable, then that would mean we could not answer yes or no as to whether we had general intelligence.
    This is why I do not believe defining our terms (intelligence/mind/consciousness) in this way would be productive, and it certainly is not clear that it is necessary to do so.
    It solves no issue and creates one that is not necessary if another definition is more suitable.
    That is to say, if our terms are decidable things.

    I suppose if you want to argue that the mind ultimately takes place at a quantum scale in nature then Pattee may well be correct and we would have to contend with the issues surrounding the measurement problem.

    But again, I don't agree that we have to solve the issue of the origins of life (and any measurement problems that exist there) in order to solve the problem of machines that can think as well as, if not better than, humans do.

    Good grief. Not Mr Palm Pilot and his attempt to reinvent Bayesian reasoning as a forward modelling architecture?apokrisis

    Mr. Palm Pilot...I don't get it?
    :s

    What is wrong with Bayesian probability? I don't get that either.

    Are you saying that Bayesian statistical methods cannot be used to form an epistemic cut because of some fundamental issue?

    Some statistical method will have to be used, because the exact details of the initial conditions at the time of observation cannot be known.
    I don't see any issue with using Bayesian methods.
  • apokrisis
    7.3k
    But I don't agree that we have to solve the origin of life and the measurement problem to solve the problem of general intelligence.m-theory

    Great. So in your view general intelligence is not wedded to biological underpinnings. You have drunk the Kool-Aid of 1970s cognitive functionalism. When faced with a hard philosophical rebuttal to the hand-waving promises that are currency of computer science as a discipline, suddenly you no longer want to care about the reasons AI is history's most over-hyped technological failure.

    I suppose if you want to argue that the mind ultimately takes place at a quantum scale in nature then Pattee may well be correct and we would have to contend with the issues surrounding the measurement problem.m-theory

    That is nothing like what I suggest. Instead I say "mind" arises out of that kind of lowest level beginning after an immense amount of subsequent complexification.

    The question is whether computer hardware can ever have "the right stuff" to be a foundation for semantics. And I say it can't because of the things I have identified. And now biophysics is finding why the quasi-classical scale of being (organic chemistry in liquid water) is indeed a uniquely "right" stuff.

    I explained this fairly carefully in a thread back on PF if you are interested....
    http://forums.philosophyforums.com/threads/the-biophysics-of-substance-70736.html

    So here you are just twisting what I say so you can avoid having to answer the fundamental challenges I've made to your cosy belief in computer science's self-hype.

    What is wrong with Bayesian probability? I don't get that either.m-theory

    I thought you were referring to the gaudy self-publicist, Jeff Hawkins, of hierarchical temporal memory fame - https://en.wikipedia.org/wiki/Hierarchical_temporal_memory

    But Bayesian network approaches to biologically realistic brain processing models are of course what I think are exactly the right way to go, as they are implementations of the epistemic cut or anticipatory systems approach.

    Look, it's clear that you are not even familiar with the history of neural networks and cybernetics within computer science, let alone the way the same foundational issues have played out more widely in science and philosophy.

    Don't take that as an insult. It is hardly general knowledge. But all I can do is point you towards the arguments.

    And I think they are interesting because they are right at the heart of everything - being the division between those who understand reality in terms of Platonic mechanism and those who understand it in terms of organically self-organising processes.
  • m-theory
    1.1k
    Great. So in your view general intelligence is not wedded to biological underpinnings. You have drunk the Kool-Aid of 1970s cognitive functionalism. When faced with a hard philosophical rebuttal to the hand-waving promises that are currency of computer science as a discipline, suddenly you no longer want to care about the reasons AI is history's most over-hyped technological failure.apokrisis

    I just don't think we have to crack the origin of life before we can crack the problem of machines with minds.

    That is the bottom up approach.
    We are reverse engineering from the top down as you pointed out.
    And I believe that somewhere in the middle is where the mind breakthrough will happen.
    I believe this because a great deal of what the body and brain do is completely autonomous from the mind...or at least what we mean by the term mind.

    I granted that of course those processes have feedback that informs the mind...but I do not see that a significant portion of them do.
    I think the level of detail regarding that feedback can be considered negligible (for example, I don't think we need to model the circulatory system or the neurology that supports it in order to achieve a mind...and the list of systems I believe are unnecessary to model does not end there).
    This is where we seem to disagree most.

    That is nothing like what I suggest. Instead I say "mind" arises out of that kind of lowest level beginning after an immense amount of subsequent complexification.

    The question is whether computer hardware can ever have "the right stuff" to be a foundation for semantics. And I say it can't because of the things I have identified. And now biophysics is finding why the quasi-classical scale of being (organic chemistry in liquid water) is indeed a uniquely "right" stuff.
    apokrisis

    Most of what happens at the nano or quantum scale has little to do with how the brain forms semantics, in my view.
    I believe that semantics, in the context of the mind, is entailed by self-aware syntax.
    For a machine to create a model of itself does not require that it be biological.

    For this reason I think simulations of thought do not have to recreate the physics of biology at the nano scale before a mind can be modeled.

    Again we mostly have a different view on how the relevant terms ought to be defined.

    I explained this fairly carefully in a thread back on PF if you are interested....
    http://forums.philosophyforums.com/threads/the-biophysics-of-substance-70736.html

    So here you are just twisting what I say so you can avoid having to answer the fundamental challenges I've made to your cosy belief in computer science's self-hype.
    apokrisis
    Well, I think I get it...Pattee argues that life may be like a unique state of matter at the quantum scale, and we just might not be able to tell because of the measurement problem (I know it is much more complicated than that; I just could not think of a better analogy for brevity's sake).

    I just don't agree that intelligence is necessarily dependent upon that state.
    I don't see why computers can not be the "right stuff" as you put it.
    Pattee does not provide conclusive evidence that such is the case.
    And you haven't either.

    Also, you don't have to be so condescending in your replies.
    We can disagree without being insulting to each other...I may be wrong and stupid for what I believe, but I am entitled to be wrong and stupid, and it does not hurt anyone but me.
    It kind of hurts my feelings, man, because I have a lot of respect for you.

    I thought you were referring to the gaudy self-publicist, Jeff Hawkins, of hierarchical temporal memory fame - https://en.wikipedia.org/wiki/Hierarchical_temporal_memory

    But Bayesian network approaches to biologically realistic brain processing models are of course what I think are exactly the right way to go, as they are implementations of the epistemic cut or anticipatory systems approach.

    Look, it's clear that you are not even familiar with the history of neural networks and cybernetics within computer science, let alone the way the same foundational issues have played out more widely in science and philosophy.

    Don't take that as an insult. It is hardly general knowledge. But all I can do is point you towards the arguments.

    And I think they are interesting because they are right at the heart of everything - being the division between those who understand reality in terms of Platonic mechanism and those who understand it in terms of organically self-organising processes.
    apokrisis

    Hey thanks.
    That cheered me a bit.
    You are right I am not well versed in the history of neural network theory.
    I guess I have a lot more research to do before I become aware of the issues you are referring to.

    My main concern is that some want to define terms surrounding the issue in such a way that they are not decidable.
    That is not productive, because very obviously they must be decidable, or we could not know that that is what we are doing when we think.

    What we mean by the term mind is that we ourselves can know definitively that we have one...that will mean that this term is something an algorithm can compute.

    So that is a foundational assumption about how the term should be defined that I have.
  • m-theory
    1.1k

    To put it another way I don't agree that a mind is utterly dependent upon all of life's complicated systems.
    I think it is more dependent upon the computation that life is able to perform and that computers can be designed to perform similarly without necessarily being one-to-one biological or one-to-one simulations of the biological.
  • tom
    1.5k
    That is the bottom up approach.
    We are reverse engineering from the top down as you pointed out.
    And I believe that somewhere in the middle is where the mind breakthrough will happen.
    m-theory

    So you hope to discover the software by examining the hardware? The trouble is, since we don't know what we're looking for, how could we recognise it?

    Back to epistemology. If we want to create an AGI then the problem of how to create knowledge will have to be solved. You can't transfer knowledge from one mind to another. Instead one mind creates cultural artefacts, from which the other mind discerns something not contained within the artefact - its meaning. As Karl Popper said, "It is impossible to speak in such a way that you cannot be misunderstood." This, by the way, dispenses with the Chinese Room.

    It has been suggested that the human brain evolved the way it did in order to facilitate efficient knowledge transfer. Humans are unique (i.e. they are the last remaining species) in that they interpret meaning and intention - i.e. they create knowledge from artefacts and behaviours.

    Now, here's the amazing thing if this account of our evolutionary history is true: once you can create knowledge, there is no stopping you. This is a leap to universality. Once you are an explainer you are automatically a universal explainer because the same mechanisms are involved.

    Prior to the leap to universal explainer, there must have been another leap - the leap to computational universality in the human brain. This is a hardware problem, which we have long solved!
  • m-theory
    1.1k
    So you hope to discover the software by examining the hardware? The trouble is, since we don't know what we're looking for, how could we recognise it?tom
    That is a good point; maybe you are right.

    I thought we were just looking for a way to encode semantics relative to agency.

    But there could be much more to it than just this...I have to admit I don't know.

    Back to epistemology. If we want to create an AGI then the problem of how to create knowledge will have to be solved. You can't transfer knowledge from one mind to another. Instead one mind creates cultural artefacts, from which the other mind discerns something not contained within the artefact - its meaning. As Karl Popper said, "It is impossible to speak in such a way that you cannot be misunderstood." This, by the way, dispenses with the Chinese Room.tom

    If we had a thinking machine that interacted with humans there is no reason to assume it would not be able to communicate with the conventions humans use.

    It has been suggested that the human brain evolved the way it did in order to facilitate efficient knowledge transfer. Humans are unique (i.e. they are the last remaining species) in that they interpret meaning and intention - i.e. they create knowledge from artefacts and behaviours.

    Now, here's the amazing thing if this account of our evolutionary history is true: once you can create knowledge, there is no stopping you. This is a leap to universality. Once you are an explainer you are automatically a universal explainer because the same mechanisms are involved.

    Prior to the leap to universal explainer, there must have been another leap - the leap to computational universality in the human brain. This is a hardware problem, which we have long solved!
    tom
    I am not so sure.
    It could be that the brain's software became more efficient too, and that it is not strictly a hardware leap.
  • tom
    1.5k
    If we had a thinking machine that interacted with humans there is no reason to assume it would not be able to communicate with the conventions humans use.m-theory

    Nice example of misunderstanding a cultural artefact.

    I am not so sure.
    It could be that the brain's software became more efficient too, and that it is not strictly a hardware leap.
    m-theory

    And again it seems. The leap to computational universality (the hardware problem) is fully understood. The leap to universal explainer (the software problem) is not understood.
  • apokrisis
    7.3k
    And I believe that somewhere in the middle is where the mind breakthrough will happen.
    I believe this because a great deal of what the body and brain do is completely autonomous from the mind...or at least what we mean by the term mind.
    m-theory

    Do you mean a dualistic folk psychology notion of mind? I instead take the neurocognitive view that what you are talking about is simply the difference between attentive and habitual levels of brain processing. And these are hardly completely autonomous, but rather completely interdependent.

    For this reason I think simulations of thought do not have to recreate the physics of biology at the nano scale before a mind can be modeled.m-theory

    This misrepresents my argument again. My argument is that there is a fundamental known difference between hardware and wetware as BC puts it. So it is up to you to show that this difference does not matter here.

    Perhaps the computer simulation only needs to be as coarse grain as you describe. But you have to be able to provide positive reasons to think that is so rather than make the usual computer science presumption it probably is going to be so.

    And part of that is going to be showing that simulations are more than just syntactical structures. You have to have an account of semantics that is grounded in physicalism, not in some hand-wavy dualistic folk psychology notion of "mind".

    I just don't agree that intelligence is necessarily dependent upon that state.
    I don't see why computers can not be the "right stuff" as you put it.
    Pattee does not provide conclusive evidence that such is the case.
    And you haven't either.
    m-theory

    But the burden of proof is on you here. The only sure thing is that whatever you really mean by intelligence is a product of biology. And so biological stuff is already known to be the right stuff.

    If you also think a machine can be the right stuff, then why isn't it already easier to produce artificial life before we can produce artificial mind? DNA is just a genetic algorithm, right? And we understand biology better than neurology?

    So maybe we are just fooling ourselves here because we humans are smart enough to follow rules as if we are machines. We can walk within lines, move chess pieces, write squiggled pages of algebra. And we can even then invent machines that follow the rules we invent in a fashion that actually is unaware, syntactic and simulated.

    That would be why it seems easy to work from the top down. Computers are just mechanising what is already us behaving as if we were mechanical. But as soon as you actually dig into what it is to be a biological creature in an embodied relation with a complex world, mechanical programs almost immediately break down. They are the wrong stuff.

    Neural networks buy you some extra biological realism. But then you have to understand the detail of that to make judgements about just how far that further exercise is going to get.
  • m-theory
    1.1k
    Nice example of misunderstanding a cultural artefact.tom

    Or it could be a nice example of a poorly constructed artifact.
    But I will assume the fault lies with me...and hope you can forgive that.

    And again it seems. The leap to computational universality (the hardware problem) is fully understood. The leap to universal explainer (the software problem) is not understood.tom

    The software problem...
    Our software can self-analyze...even its own software.
    Why it should not be able to model itself is beyond me...and your short, cryptic answers do not help me to understand (are you on a mobile phone or something?).

    Why should I agree that we cannot self analyze sufficiently to explain how we are able to analyze?

    You seem to indicate that we are at square one of this problem with no clue where to start.

    That idea seems absurd to me considering the vast amount of effort in many different disciplines aimed at explaining how it is we are able to explain (after all the unexamined life is not worth living if you can examine it).

    If it is that you believe the problem is immensely more vast than I realize, then you should at the very least suggest why I should believe that too.
  • m-theory
    1.1k
    Do you mean a dualistic folk psychology notion of mind? I instead take the neurocognitive view that what you are talking about is simply the difference between attentive and habitual levels of brain processing. And these are hardly completely autonomous, but rather completely interdependent.apokrisis

    Allow me to put it another way.
    We might disembody a head and sustain the life of the brain without a body by employing machines.
    Were we to do so we would not say that this person has lost a significant amount of their mind.
    Would we?
    A gruesome prospect to be sure but it is only a hypothetical.
    Perhaps it would not be practical for any but a short period; I did do some research, and it is not completely implausible.

    This may be a folksy rebuttal to the notion that we must understand all of the body and even the origin of life to understand the mind.
    But it is what I immediately thought of when I realized that this was the problem you seemed to be presenting.

    I am not sure what role attentive and habitual processing plays in theories of the mind, or how relevant it is to this subject.

    Again you shame me for my lack of knowledge...I will have to research further to begin to understand your concern in this regard.

    My notion was that we might hope to model something like the default mode network.
    How dependent that network is upon attentive and habitual processing, I do not know, so I admit I may have greatly underestimated the difficulties involved.


    But the burden of proof is on you here. The only sure thing is that whatever you really mean by intelligence is a product of biology. And so biological stuff is already known to be the right stuff.apokrisis

    I don't agree with that at all.
    If you state that the origins of life must be understood in order for us to understand the mind, that is a claim that entails a burden of proof.

    This misrepresents my argument again. My argument is that there is a fundamental known difference between hardware and wetware as BC puts it. So it is up to you to show that this difference does not matter here.apokrisis

    Nonsense.
    If the mind is computational, then it is simply matter that creates the environment in which computation can take place.
    That the matter must be living is a claim, and it will also have a burden of proof.

    The main issue at hand is whether or not the computational theory of the mind is valid.
    Not whether or not inorganic matter can compute.

    That would be why it seems easy to work from the top down. Computers are just mechanising what is already us behaving as if we were mechanical. But as soon as you actually dig into what it is to be a biological creature in an embodied relation with a complex world, mechanical programs almost immediately break down. They are the wrong stuff.

    Neural networks buy you some extra biological realism. But then you have to understand the detail of that to make judgements about just how far that further exercise is going to get.
    apokrisis

    Again, we are working from completely different assumptions about the theory of mind.
    I am arguing a case for the computational theory.
    You seem to be arguing a case for embedded cognition to the exclusion of computational models.

    You are also misleading about how conclusive the matter is...it is not simply settled in the context of philosophy whether or not computational theories of the mind are valid, even if you have decided they aren't.
    So please be charitable and don't assume that ignorance alone is what guides my views.

    I respect your position and grant that if it is the most valid, then I am just wrapped up in some hype chamber.
    You will have to forgive me; the idea fascinates me so much that I want to believe.
  • apokrisis
    7.3k
    We might disembody a head and sustain the life of the brain without a body by employing machines.
    Were we to do so we would not say that this person has lost a significant amount of their mind.
    Would we?
    m-theory

    That is irrelevant because you are talking about an already fully developed biology. The neural circuitry that was the result of having a hand would still be attempting to function. Check phantom limb syndrome.

    Then imagine instead culturing a brain with no body, no sense organs, no material interaction with the world. That is what a meaningful state of disembodiment would be like.

    My notion was that we might hope to model something like the default mode network.m-theory

    That is simply how the brain looks when attention is in idle mode with not much to do. Or indeed when attention is being suppressed to avoid it disrupting smoothly grooved habit.

    If you state that the origins of life must be understood in order for us to understand the mind, that is a claim that entails a burden of proof.m-theory

    Who is talking about the origins of life - the problem of abiogenesis? You probably need a time machine to give an empirical answer on that.

    I was talking about the biological basis of the epistemic cut - something we can examine in the lab today.

    The main issue at hand is whether or not the computational theory of the mind is valid.
    Not whether or not inorganic matter can compute.
    m-theory

    Again, we know that biology is the right stuff for making minds. You are not expecting me to prove that?

    And we know that biology is rooted in material instability, not material stability? I've given you the evidence of that. And indeed - biosemiotically - why it has to be the case.

    And we know that computation is rooted in material stability? Hardware fabrication puts a lot of effort into achieving that, starting by worrying about the faintest speck of dust in the silicon foundry.

    And I've made the case that computation only employs syntax. It maps patterns of symbols onto patterns of symbols by looking up rules. There is nothing in that which constitutes an understanding of any meaning in the patterns or the rules?

    So that leaves you having to argue that despite all this, computation has the right stuff in a way that makes it merely a question of some appropriate degree of algorithmic complication before it "must" come alive with thoughts and feelings, a sense of self and a sense of purpose, and so you are excused of the burden of saying just why that would be so given all the foregoing reasons to doubt.
  • m-theory
    1.1k
    That is irrelevant because you are talking about an already fully developed biology. The neural circuitry that was the result of having a hand would still be attempting to function. Check phantom limb syndrome.

    Then imagine instead culturing a brain with no body, no sense organs, no material interaction with the world. That is what a meaningful state of disembodiment would be like.
    apokrisis

    I had not thought of that.
    I suppose you are right that there was a body even if there is not one now.
    So you can still argue that the body plays a very significant role in the mind.

    I see the problem that I face now...biology has produced a mind and you can always fall back on that.

    Touché

    nicely done sir.

    Of course I disagree that the mind must necessarily always be biological...but that is a semantic debate surrounding how the term is defined.
    You have decided that the term mind must be defined biologically to the exclusion of a computational model.

    It may well be that you are correct...but it is not a settled matter in philosophy.

    I was talking about the biological basis of the epistemic cut - something we can examine in the lab today.apokrisis

    Yes, and as far as I could tell from your source material it was claimed that the origin of life contains a quantum measurement problem.
    The term epistemic cut was used synonymously with the quantum measurement problem, and the author continually alluded to the origins of self-replicating life.

    Again, we know that biology is the right stuff for making minds. You are not expecting me to prove that?apokrisis
    We also know that matter can compute...surely I am not expected to prove as much?

    And we know that biology is rooted in material instability, not material stability? I've given you the evidence of that. And indeed - biosemiotically - why it has to be the case.apokrisis

    Imagine if the body and brain suffered a sudden interruption in the supply of electrons within the nervous system.
    Biology is not without stability.

    And I've made the case that computation only employs syntax. It maps patterns of symbols onto patterns of symbols by looking up rules. There is nothing in that which constitutes an understanding of any meaning in the patterns or the rules?apokrisis

    No, you have stated this as if it were a settled matter by suggesting that only biology can form semantics.
    I don't agree semantics can only occur in biology.

    So that leaves you having to argue that despite all this, computation has the right stuff in a way that makes it merely a question of some appropriate degree of algorithmic complication before it "must" come alive with thoughts and feelings, a sense of self and a sense of purpose, and so you are excused of the burden of saying just why that would be so given all the foregoing reasons to doubt.apokrisis

    Again I refer to the alternative of an undecidable mind.
    If the mind is not algorithmic, we could not know whether we had one; it is that simple.
    If we can know without error that we have minds, that knowledge is the result of some algorithm, which means the mind is computational.

    Why this argument fails has not been addressed by what you have provided on this thread.
  • tom
    1.5k
    Or it could be a nice example of a poorly constructed artifact.
    But I will assume the fault lies with me...and hope you can forgive that.
    m-theory

    You are completely missing the point. It is impossible to transfer knowledge from one mind to another. Minds construct new knowledge from artefacts, problem-situations, background knowledge, by a fundamentally creative ability.

    So, the creator of the artefact, and the interpreter of the artefact, are engaged in an inter-subjective dialogue. Each person is conjecturing theories about what each other means or interprets. Perfection and justification are impossible.

    Notice that subjectivity has already appeared! AlphaGo has no subjectivity.

    It could be that the brain's software became more efficient too, and that it is not strictly a hardware leap.m-theory

    AlphaGo can be as efficient as it likes. It will always fail the Chinese Room. It cannot create the knowledge that it is playing Go!
  • m-theory
    1.1k
    Notice that subjectivity has already appeared! AlphaGo has no subjectivity.tom

    This does not follow from^

    You are completely missing the point. It is impossible to transfer knowledge from one mind to another. Minds construct new knowledge from artefacts, problem-situations, background knowledge, by a fundamentally creative ability.

    So, the creator of the artefact, and the interpreter of the artefact, are engaged in an inter-subjective dialogue. Each person is conjecturing theories about what each other means or interprets. Perfection and justification are impossible.
    tom

    This.^

    AlphaGo can be as efficient as it likes. It will always fail the Chinese Room. It cannot create the knowledge that it is playing Go!tom

    Suppose AlphaGo were tasked with learning the contexts in which Chinese is used and was able to converge upon a solution such that it could efficiently and consistently pass a Turing test.

    Then suppose we run the Chinese Room experiment on AlphaGo and Searle.

    For the people outside the room, the Chinese Room is just a black box.
    If we ask them and they insist that the black box understands Chinese, how would we account for that apparent knowledge?

    If the man inside insists he is only performing the actions he was instructed to perform, we can conclude that the knowledge did not come from him, right?

    So either the people outside the black box have simply projected knowledge onto meaningless strings of symbols.
    That would be a philosophical issue for another thread, I would say.

    Or the system of instructions can function in the role of software while the man functions in the role of hardware, and when combined they produce Chinese for those outside to interpret.
    If this is the account of the knowledge of Chinese, then it would not conflict with the computational theory of the mind.
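    To make concrete what I mean by the instructions playing the role of software and the man the role of hardware, here is a minimal Python sketch. The rule book is entirely invented and absurdly small, and a real system would have learned its rules rather than have them listed, but it shows how pure symbol lookup can produce answers that look like knowledge to those outside:

    # A minimal sketch: the rule book is the "software", the operator the "hardware".
    # The rules below are invented and tiny; a real system would have learned them.
    # The operator applies them without understanding either side of the mapping.
    RULE_BOOK = {
        "你好吗": "我很好，谢谢",            # "how are you" -> "I am fine, thanks"
        "你在下围棋吗": "是的，我在下围棋",    # "are you playing Go" -> "yes, I am playing Go"
    }

    def operator(incoming):
        # The man in the room: match the incoming squiggles, copy out the reply.
        return RULE_BOOK.get(incoming, "请再说一遍")  # default: "please say that again"

    # From outside, the room answers in Chinese; inside, only syntax was applied.
    print(operator("你好吗"))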

    I am unable to think of any other reasonable options to account for the knowledge of Chinese if the people outside the black box insist that it is there.
  • apokrisis
    7.3k
    Of course I disagree that the mind must necessarily always be biological...but that is a semantic debate surrounding how the term is defined.
    You have decided that the term mind must be defined biologically to the exclusion of a computational model.
    m-theory

    In your stubbornness, you keep short-cutting my carefully structured argument.

    1) Whatever a mind is, we are as certain as we can be that biology has the right stuff. Agreed?

    2) The best theory of what kind of stuff that actually is, is what you would expect biologists to produce. And the standard answer from biologists is that biology is material dynamics regulated by semiotic code - unstable chemistry constrained by evolving memory. Agreed?

    3) Then the question is whether computation is the same kind of stuff as that, or a fundamentally different kind of stuff. And as Pattee argues (not from quantum measurement, but his own 1960s work on biological automata), computation is physics-free modelling. It is the isolated play of syntax that builds in its presumption of being implementable on any computationally suited device. And in doing that, it explicitly rules out any external influences from the operation of physical laws or dissipative material processes. Sure there must be hardware to run the software, but it is axiomatic to universal computation that the nature of the hardware is irrelevant to the play of the symbols. Being physics-free is what makes the computation universal. Agreed?

    4) Given the above - that biological stuff is fundamentally different from computational stuff in a completely defined fashion - the burden is then on the computationalist to show that computation could still be the right stuff in some way.

    Yes, and as far as I could tell from your source material it was claimed that the origin of life contains a quantum measurement problem.
    The term epistemic cut was used synonymously with the quantum measurement problem, and the author continually alluded to the origins of self-replicating life.
    m-theory

    This is another unhelpful idee fixe you have developed. As said, Pattee's theoretical formulation of the epistemic cut arose from being a physicist working on the definition of life in the 1950s and 1960s as DNA was being discovered and the central mechanism of evolution becoming physically clear. From von Neumann - who also had an interest in self-reproducing automata - Pattee learnt that the epistemic cut was also the same kind of problem as had been identified in quantum mechanics as the measurement problem.

    Imagine if the body and brain had a sudden interruption in the supply of electrons within its neurological system?
    Biology is not without stability.
    m-theory

    You seem to be imagining that electrons are like little Newtonian billiard balls or something. Quantum field theory would say a more accurate mental picture is excitations in a field. And even that leaves out the difficult stuff.

    But anyway, again of course there is always stability and plasticity in the world. They are complementary poles of description. And the argument from biophysics is that dynamical instability is essential to life because life depends on having material degrees of freedom that it can harness. For biological information to act as a switch, there must be a physico-chemical instability that makes for material action that is switchable.

    I don't agree semantics can only occur in biology.m-theory

    Fine. Now present that evidence.

    Again I refer to the alternative of an undecidable mind.
    If the mind is not algorithmic, we could not know whether we had one; it is that simple.
    If we can know without error that we have minds, that knowledge is the result of some algorithm, which means the mind is computational.
    m-theory

    No idea what you are talking about here.
  • m-theory
    1.1k
    2) The best theory of what kind of stuff that actually is, is what you would expect biologists to produce. And the standard answer from biologists is that biology is material dynamics regulated by semiotic code - unstable chemistry constrained by evolving memory. Agreed?apokrisis

    No of course I don't agree that the best theory of the mind must be biological.

    3) Then the question is whether computation is the same kind of stuff as that, or a fundamentally different kind of stuff. And as Pattee argues (not from quantum measurement, but his own 1960s work on biological automata), computation is physics-free modelling. It is the isolated play of syntax that builds in its presumption of being implementable on any computationally suited device. And in doing that, it explicitly rules out any external influences from the operation of physical laws or dissipative material processes. Sure there must be hardware to run the software, but it is axiomatic to universal computation that the nature of the hardware is irrelevant to the play of the symbols. Being physics-free is what makes the computation universal. Agreed?apokrisis

    I must admit I can make no sense of this.

    What the epistemic cut could be other than a measurement problem is beyond me, and I had difficulty finding a good definition of that term in your reference sources.
    I cannot be sure how the problem relates to the computational theory of the mind, or whether it is actually necessary, as Pattee insists it is.

    Pattee has also taken the liberty of defining the term semantics such that it will necessarily exclude anything which isn't biological.
    Again this may be necessary because of the epistemic cut...or it may not.

    The closest I came to grasping what he might mean by this term came from his references to von Neumann.

    from von Neumann (1955, p. 352). He calls the system being measured, S, and the measuring device, M, that must provide the initial conditions for the dynamic laws of S. Since the non-integrable constraint, M, is also a physical system obeying the same laws as S, we may try a unified description by considering the combined physical system (S + M). But then we will need a new measuring device, M', to provide the initial conditions for the larger system (S + M). This leads to an infinite regress; but the main point is that even though any constraint like a measuring device, M, can in principle be described by more detailed universal laws, the fact is that if you choose to do so you will lose the function of M as a measuring device. This demonstrates that laws cannot describe the pragmatic function of measurement even if they can correctly and completely describe the detailed dynamics of the measuring constraints.

    I offered that a POMDP (partially observable Markov decision process) could be a resolution.
    You did not really bother to suggest any reason why that view was not correct.
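    To be concrete about what I mean by a POMDP, here is a minimal belief-update sketch in Python for a toy two-state model. The states, observations, and probabilities are invented purely for illustration; the point is only that such an agent maintains and revises a model of a world it cannot observe directly.

    # Toy two-state POMDP belief update (all names and numbers invented for illustration).
    states = ["hidden_A", "hidden_B"]

    # T[s][s2]: probability of moving from hidden state s to s2 (one action assumed).
    T = {
        "hidden_A": {"hidden_A": 0.9, "hidden_B": 0.1},
        "hidden_B": {"hidden_A": 0.2, "hidden_B": 0.8},
    }

    # O[s2][o]: probability of observing o when the world is actually in state s2.
    O = {
        "hidden_A": {"signal_1": 0.7, "signal_2": 0.3},
        "hidden_B": {"signal_1": 0.1, "signal_2": 0.9},
    }

    def update_belief(belief, observation):
        # Bayesian filter: b'(s2) is proportional to O(o|s2) * sum_s T(s2|s) * b(s).
        new_belief = {}
        for s2 in states:
            predicted = sum(T[s][s2] * belief[s] for s in states)
            new_belief[s2] = O[s2][observation] * predicted
        total = sum(new_belief.values())
        return {s: p / total for s, p in new_belief.items()}

    belief = {"hidden_A": 0.5, "hidden_B": 0.5}  # start from ignorance
    for obs in ["signal_2", "signal_2", "signal_1"]:
        belief = update_belief(belief, obs)
        print(obs, belief)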

    Given the above - that biological stuff is fundamentally different from computational stuff in a completely defined fashion - the burden is then on the computationalist to show that computation could still be the right stuff in some way.apokrisis

    Mind is only found in living organic matter; therefore only living organic matter can have a mind.
    That is an unassailable argument in that it defines the term mind to the exclusion of inorganic matter.
    But that this definition is by necessity the only valid theory of the mind is simply not a resolved matter in philosophy.
    No matter how many papers Pattee has written.

    This is another unhelpful idee fixe you have developed. As said, Pattee's theoretical formulation of the epistemic cut arose from being a physicist working on the definition of life in the 1950s and 1960s as DNA was being discovered and the central mechanism of evolution becoming physically clear. From von Neumann - who also had an interest in self-reproducing automata - Pattee learnt that the epistemic cut was also the same kind of problem as had been identified in quantum mechanics as the measurement problem.apokrisis

    Pattee does a poor job of generalizing this problem, especially considering the frequency with which he references the term epistemic cut.

    This is the closest I came to finding a general sense of what Pattee might mean.

    The epistemic cut or the distinction between subject and object is normally associated with highly evolved subjects with brains and their models of the outside world as in the case of measurement. As von Neumann states, where we place the cut appears to be arbitrary to a large extent. The cut itself is an epistemic necessity, not an ontological condition. That is, we must make a sharp cut, a disjunction, just in order to speak of knowledge as being "about" something or "standing for" whatever it refers to. What is going on ontologically at the cut (or what we see if we choose to look at the most detailed physics) is a very complex process. The apparent arbitrariness of the placement of the epistemic cut arises in part because the process cannot be completely or unambiguously described by the objective dynamical laws, since in order to perform a measurement the subject must have control of the construction of the measuring device. Only the subject side of the cut can measure or control.

    In essence the epistemic cut is a measurement problem.
    Perhaps I was wrong to call it a quantum measurement problem.

    It is not immediately clear to me how this general statement can be said to demonstrate necessarily that computation cannot result in a mind (or rather that computation cannot form a subject-object distinction, at least).

    Fine. Now present that evidence.apokrisis

    I did mention that I argued deductively that the mind must be something that is decidable, but this was your response:
    No idea what you are talking about here.apokrisis

    My argument is on the first page below Tom's post.
  • apokrisis
    7.3k
    No of course I don't agree that the best theory of the mind must be biological.m-theory

    Yes. But you don't agree because you want to believe something different without being able to produce the evidence. So at this point it is like arguing with a creationist.

    I offered that a POMDP could be a resolution.
    You did not really bother to suggest any reason why that view was not correct.
    m-theory

    But it is a resolution in being an implementation of the epistemic cut. It represents a stepping back into a physics-free realm so as to speak about physics-constrained processes.

    The bit that is then missing - the crucial bit - is that the model doesn't have the job of making its own hardware. The whole thing is just a machine being created by humans to fulfill human purposes. It has no evolutionary or adaptive life of its own.

    Mind is only found in living organic matter; therefore only living organic matter can have a mind.
    That is an unassailable argument in that it defines the term mind to the exclusion of inorganic matter.
    But that this definition is by necessity the only valid theory of the mind is simply not a resolved matter in philosophy.
    m-theory

    Fortunately we only have to consider two theories of mind in this discussion - the biological and the computational. If you want to widen the field to quantum vibrations, ectoplasm, psychic particles or whatever, then maybe you don't see computation as being relevant in the end?

    It is not immediately clear to me how this general statement can be said to demonstrate necessarily that computation cannot result in a mind.m-theory

    So computation is a formal action - algorithmic or rule-bound. And yet measurement is inherently an informal action - a choice that cannot be computed. Houston, we have a problem.

    My argument is on the first page below Tom's post.m-theory

    That's a good illustration of the absolute generality of the measurement problem then. Having a formal theory of the mind also involves the informal choice about what kind of measurement stands for a sign of a mind.

    We are talking Godelian incompleteness here. In the end, all formal reasoning systems - all syntactical arguments - rely on having to make an abductive and axiomatic guess to get the game started. We have to decide, oh, that is one of those. Then the development of a formal model can begin by having agreed a basis of measurement.

    But then you mix up the issue of a measurement basis with something different - the notion of undecidability in computation.

    Science models the world. So as an open-ended semiotic process, it doesn't have a halting problem. Instead, it is free to inquire until it reaches a point of pragmatic indifference in the light of its own interests.

    You are talking about a halting problem analogy by the sound of it. And that is merely a formal property of computational space. Some computational processes will terminate, others have a form that cannot. That is something quite different.
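    To spell out what that formal property amounts to, here is a minimal Python sketch. The 'halts' decider in it is hypothetical, and the diagonal argument showing why no such decider can exist is only gestured at in the comments. It is a property of computational space, nothing more.

    # A sketch of the purely formal point. 'halts' is HYPOTHETICAL: the classic
    # diagonal argument (gestured at in troublemaker) shows that no correct,
    # total version of it can exist.
    def counts_down(n):
        # A process with a form that clearly terminates.
        while n > 0:
            n -= 1
        return "terminated"

    def loops_forever():
        # A process with a form that clearly cannot terminate.
        while True:
            pass

    def halts(program, argument):
        # Pretend this returned True exactly when program(argument) halts.
        raise NotImplementedError("no such total decider exists")

    def troublemaker(program):
        # If the decider says program(program) halts, loop forever; otherwise halt.
        # Feeding troublemaker to itself contradicts whatever halts() answers.
        if halts(program, program):
            loops_forever()
        return "halted"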