Comments

  • General purpose A.I. is it here?
    2) The best theory of what kind of stuff that actually is, is what you would expect biologists to produce. And the standard answer from biologists is that biology is material dynamics regulated by semiotic code - unstable chemistry constrained by evolving memory. Agreed?apokrisis

    No of course I don't agree that the best theory of the mind must be biological.

    3) Then the question is whether computation is the same kind of stuff as that, or a fundamentally different kind of stuff. And as Pattee argues (not from quantum measurement, but his own 1960s work on biological automata), computation is physics-free modelling. It is the isolated play of syntax that builds in its presumption of being implementable on any computationally suited device. And in doing that, it explicitly rules out any external influences from the operation of physical laws or dissipative material processes. Sure there must be hardware to run the software, but it is axiomatic to universal computation that the nature of the hardware is irrelevant to the play of the symbols. Being physics-free is what makes the computation universal. Agreed?apokrisis

    I must admit I can make no sense of this.

    What the epistemic cut could be, other than a measurement problem, is beyond me, and I had difficulty finding a good definition of the term in your reference sources.
    I cannot be sure how the problem relates to the computational theory of mind, or whether it is actually necessary, as Pattee would insist it is.

    Pattee has also taken the liberty of defining the term semantics such that it will necessarily exclude anything which isn't biological.
    Again this may be necessary because of the epistemic cut...or it may not.

    The closest I came to grasping what he might mean by this term came from his references to von Neumann.

    from von Neumann (1955, p. 352). He calls the system being measured, S, and the measuring device, M, that must provide the initial conditions for the dynamic laws of S. Since the non-integrable constraint, M, is also a physical system obeying the same laws as S, we may try a unified description by considering the combined physical system (S + M). But then we will need a new measuring device, M', to provide the initial conditions for the larger system (S + M). This leads to an infinite regress; but the main point is that even though any constraint like a measuring device, M, can in principle be described by more detailed universal laws, the fact is that if you choose to do so you will lose the function of M as a measuring device. This demonstrates that laws cannot describe the pragmatic function of measurement even if they can correctly and completely describe the detailed dynamics of the measuring constraints.

    I offered that the POMDP could be a resolution.
    You did not really offer any reason why that view is incorrect.
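    To be concrete about what I mean by a POMDP (a partially observable Markov decision process): the agent never touches the world's state directly; it carries a belief over hidden states and updates that belief from noisy observations, so a subject/object split is built into the formalism. A minimal sketch, with a hypothetical two-state problem and invented numbers purely for illustration:

```python
# Minimal POMDP belief update (Bayes filter), for illustration only.
# Two hidden states; one "listen" action that leaves the state alone.

# T[a][s][s2]: probability of moving from state s to s2 under action a
T = [[[1.0, 0.0],
      [0.0, 1.0]]]

# O[a][s2][o]: probability of observing o when the new state is s2
O = [[[0.85, 0.15],
      [0.15, 0.85]]]

def belief_update(belief, action, obs):
    """b'(s2) is proportional to O(o|s2) * sum_s T(s2|s,a) * b(s)."""
    new_belief = []
    for s2 in range(len(belief)):
        predicted = sum(T[action][s][s2] * belief[s]
                        for s in range(len(belief)))
        new_belief.append(O[action][s2][obs] * predicted)
    total = sum(new_belief)
    return [b / total for b in new_belief]

b = [0.5, 0.5]              # start maximally uncertain
b = belief_update(b, 0, 0)  # observe evidence favouring state 0
# b is now [0.85, 0.15]: the agent's internal model has shifted, while
# the world's actual state stays behind the observation barrier.
```

    The point of the sketch is only that the formalism itself separates the system being modeled from the device doing the modeling, which is why I offered it as a candidate resolution.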

    Given the above - that biological stuff is fundamentally different from computational stuff in a completely defined fashion - the burden is then on the computationalist to show that computation could still be the right stuff in some way.apokrisis

    Mind is only found in living organic matter; therefore only living organic matter can have a mind.
    That is an unassailable argument in that it defines the term mind to the exclusion of inorganic matter.
    But that this definition is necessarily the only valid theory of mind is simply not a resolved matter in philosophy.
    No matter how many papers Pattee has written.

    This is another unhelpful idee fixe you have developed. As said, Pattee's theoretical formulation of the epistemic cut arose from being a physicist working on the definition of life in the 1950s and 1960s as DNA was being discovered and the central mechanism of evolution becoming physically clear. From von Neumann - who also had an interest in self-reproducing automata - Pattee learnt that the epistemic cut was also the same kind of problem as had been identified in quantum mechanics as the measurement problem.apokrisis

    Pattee does a poor job of generalizing this problem, especially considering the frequency with which he uses the term epistemic cut.

    This is the closest I came to finding a general sense of what Pattee might mean.

    The epistemic cut or the distinction between subject and object is normally associated with highly evolved subjects with brains and their models of the outside world as in the case of measurement. As von Neumann states, where we place the cut appears to be arbitrary to a large extent. The cut itself is an epistemic necessity, not an ontological condition. That is, we must make a sharp cut, a disjunction, just in order to speak of knowledge as being "about" something or "standing for" whatever it refers to. What is going on ontologically at the cut (or what we see if we choose to look at the most detailed physics) is a very complex process. The apparent arbitrariness of the placement of the epistemic cut arises in part because the process cannot be completely or unambiguously described by the objective dynamical laws, since in order to perform a measurement the subject must have control of the construction of the measuring device. Only the subject side of the cut can measure or control.

    In essence the epistemic cut is a measurement problem.
    Perhaps I was wrong to call it a quantum measurement problem.

    It is not immediately clear to me how this general statement can be said to demonstrate, necessarily, that computation cannot result in a mind (or rather, at least, that computation cannot form a subject-object distinction).

    Fine. Now present that evidence.apokrisis

    I did mention that I argued deductively that the mind must be something that is decidable.
    But this was your response:
    No idea what you are talking about hereapokrisis

    My argument is on the first page below Tom's post.
  • Political Affiliation (Discussion)
    But that's after the fact: the killing will have already either taken place or not taken place. My point stands: no amount of legislation can prevent someone determined enough from going out and killing another human, whether that's an adult or a baby inside of themselves.Sapientia

    Yes. Legislation can only be reactive. There is no way to strictly enforce any law. But there is a way to convict a murderer in court beyond reasonable doubt. The same does not apply when a woman self-induces a miscarriage. There is virtually no way to prove a miscarriage was intentional if the accused does not admit that it was.

    This is another straw man. What I actually said is that in most cases, a better resolution is available, and I stand by that claim.Sapientia

    Sorry I must have misread.

    Feel free to go over my part in the previous discussion in order to better understand it. You've made quite a few big assumptions about my position which are in fact incorrect. Yes, there are exceptional circumstances, and yes, in places like the U.K. where I'm from, it is true that up to a point, pregnant women have a legal right to decide to have an abortion (We've even been over the actual wording and stated conditions in the relevant legislation), and I accept that there can be morally acceptable circumstances, although I would emphasise that they are acceptable, but not desirable or ideal.Sapientia

    I did not set out to misrepresent your position; I only sought to lay out my own. If I did misrepresent it, then I apologise again.
  • "Chance" in Evolutionary Theory

    It does seem like I would tend to do that in this example.
  • "Chance" in Evolutionary Theory

    I was thinking something more like "tends to" as in "does more so than does not."
  • General purpose A.I. is it here?
    Notice that subjectivity has already appeared! AlphaGo has no subjectivity.tom

    This does not follow from^

    You are completely missing the point. It is impossible to transfer knowledge from one mind to another. Minds construct new knowledge from artefacts, problem-situations, background knowledge, by a fundamentally creative ability.

    So, the creator of the artefact, and the interpreter of the artefact, are engaged in an inter-subjective dialogue. Each person is conjecturing theories about what each other means or interprets. Perfection and justification are impossible.
    tom

    This.^

    AlphaGo can be as efficient as it likes. It will always fail the Chinese Room. It cannot create the knowledge that it is playing Go!tom

    Suppose AlphaGo were tasked with learning the contexts in which Chinese is used, and were able to converge upon a solution such that it could efficiently and consistently pass a Turing test.

    Then suppose we put AlphaGo and Searle in the Chinese Room.

    For the people outside the room the Chinese room is just a black box.
    If we ask them, and they insist that the black box understands Chinese, how would we account for that apparent knowledge?

    If the man inside insists he is only performing the actions he was instructed to perform, we can conclude that the knowledge did not come from him, right?

    So either the people outside the black box have simply projected knowledge onto meaningless strings of symbols (a philosophical issue for another thread, I would say).

    Or the system of instructions can function in the role of software while the man functions in the role of hardware, and when combined they produce Chinese for those outside to interpret.
    If this is the account of the knowledge of Chinese, then it would not conflict with the computational theory of the mind.

    I am unable to think of any other reasonable options to account for the knowledge of Chinese if the people outside the black box insist that it is there.
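    To make the second option concrete, here is the room reduced to bare mechanics, a toy sketch only (the rulebook entries are invented placeholders, nothing like a real conversational system): the rulebook plays the role of software, and whoever executes the lookup plays the role of hardware, needing no grasp of what the symbols mean.

```python
# Toy Chinese Room: pure symbol-to-symbol lookup, with no understanding
# located in the executor. The entries are invented placeholders.

RULEBOOK = {
    "你好": "你好，很高兴见到你",
    "你会下棋吗": "会，我们开始吧",
}

def room(symbols: str) -> str:
    """The man in the room: match the incoming squiggles, copy out
    the reply the rulebook dictates, understand nothing."""
    return RULEBOOK.get(symbols, "请再说一遍")

reply = room("你好")  # the room answers; where does the "knowledge" live?
```

    Whatever knowledge of Chinese the outside observers attribute to the box is plainly not located in the executor of `room`; if it is anywhere, it is in the rulebook-plus-executor system taken as a whole.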
  • What breaks your heart?
    Well like I said back on the first page when I heard Malala Yousafzai's story I started sobbing.

    I felt a brief flash of outrage, then heartbreak, because I realized there was not much I could do about it.

    But then when she won the Nobel Peace Prize I was heartmended by that.
  • Political Affiliation (Discussion)
    You're right that there is no legislation which can prevent women from killing that which is living and growing inside of them, an unborn human, if they're determined enough, just as there is no legislation which can prevent women or anyone else from killing anyone else if they're determined enough. But neither are good things which should be encouraged. It is an unfortunate fact that murders and abortions occur, when in most cases, a better resolution is available. Just as someone who is contemplating murder should have access to counseling, so should someone contemplating abortion, and that is already the case in the developed world, as far as I'm aware.Sapientia

    That is fair enough.
    But it is far less difficult to prove in court that one person has murdered another than it is to prove that a woman intentionally miscarried.

    So when we compare these things, we would not say that there is no legislation that will deter murder, because it is far less difficult to demonstrate that an intentional killing has taken place.

    So even if we agreed that terminating a pregnancy was murder, we are left with a far more difficult burden of proof than in cases involving those who have already been born.

    Also, I don't agree that there is necessarily a better resolution in some cases of termination of an unwanted pregnancy.
    Take the cases where a woman has been raped and it has resulted in pregnancy.
    I firmly believe that the woman should have the right to decide whether she wants to procreate with a rapist.

    And personally, in general, I think that women ought to have that right to decide even if they are not raped and ultimately, as I have pointed out, they do have that right and there is nothing to be done about it in legislative terms.
  • Solipsism

    lol
    surprised there are not more of them.
  • General purpose A.I. is it here?
    That is irrelevant because you are talking about an already fully developed biology. The neural circuitry that was the result of having a hand would still be attempting to function. Check phantom limb syndrome.

    Then imagine instead culturing a brain with no body, no sense organs, no material interaction with the world. That is what a meaningful state of disembodiment would be like.
    apokrisis

    I had not thought of that.
    I suppose you are right that there was a body even if there is not one now.
    So you can still argue that the body plays a very significant role in the mind.

    I see the problem that I face now...biology has produced a mind and you can always fall back on that.

    Touché

    Nicely done, sir.

    Of course I disagree that the mind must necessarily always be biological...but that is a semantic debate surrounding how the term is defined.
    You have decided that the term mind must be defined biologically to the exclusion of a computational model.

    It may well be that you are correct...but it is not a settled matter in philosophy.

    I was talking about the biological basis of the epistemic cut - something we can examine in the lab today.apokrisis

    Yes and as far as I could tell from your source material it was claimed that the origin of life contains a quantum measurement problem.
    The term epistemic cut was used synonymously with the quantum measurement problem and the author continuously alluded to the origins of self replicating life.

    Again, we know that biology is the right stuff for making minds. You are not expecting me to prove that?apokrisis
    We also know that matter can compute...surely I am not expected to prove as much?

    And we know that biology is rooted in material instabilty, not material stabilty? I've given you the evidence of that. And indeed - biosemiotically - why it has to be the case.apokrisis

    Imagine if the body and brain suffered a sudden interruption in the supply of electrons within the nervous system.
    Biology is not without stability.

    And I've made the case that computation only employs syntax. It maps patterns of symbols onto patterns of symbols by looking up rules. There is nothing in that which constitutes an understanding of any meaning in the patterns or the rules?apokrisis

    No, you have stated this as if it were a settled matter, by suggesting that only biology can form semantics.
    I don't agree that semantics can only occur in biology.

    So that leaves you having to argue that despite all this, computation has the right stuff in a way that makes it merely a question of some appropriate degree of algorithmic complication before it "must" come alive with thoughts and feelings, a sense of self and a sense of purpose, and so you are excused of the burden of saying just why that would be so given all the foregoing reasons to doubt.apokrisis

    Again I refer to the alternative of an undecidable mind.
    If the mind is not algorithmic, we could not know that we have one; it is that simple.
    If we can know without error that we have minds, that knowledge is the result of some algorithm, which means the mind is computational.

    Why this argument fails has not been addressed by what you have provided on this thread.
  • Moral facts vs other facts?
    No moral calculus has the same force as actual mathematical statements when it comes to accepting their truth.Moliere

    Consider the statement "It is moral to be moral." This is tautological, so it has to be true; it is also very robust against any mathematical analysis.

  • General purpose A.I. is it here?
    Do you mean a dualistic folk psychology notion of mind? I instead take the neurocognitive view that what you are talking about is simply the difference between attentive and habitual levels of brain processing. And these are hardly completely autonomous, but rather completely interdependent.apokrisis

    Allow me to put it another way.
    We might disembody a head and sustain the life of the brain without a body by employing machines.
    Were we to do so we would not say that this person has lost a significant amount of their mind.
    Would we?
    A gruesome prospect to be sure but it is only a hypothetical.
    Perhaps it would not be practical for more than a short period, but I did do some research, and it is not completely implausible.

    This may be a folksy rebuttal to the notion that we must understand all of the body, and even the origin of life, to understand the mind.
    But it is what I immediately thought when I realized that this was the problem you seemed to be presenting.

    I am not sure what role attentive and habitual processing plays in theories of the mind, or how relevant it is to this subject.

    Again you shame me with my lack of knowledge... I will have to research further to begin to understand your concern in this regard.

    My notion was that we might hope to model something like the default mode network.
    How dependent that network is upon attentive and habitual processing, I do not know, so I admit I may have greatly underestimated the difficulties involved.


    But the burden of proof is on you here. The only sure thing is that whatever you really mean by intelligence is a product of biology. And so biological stuff is already known to be the right stuff.apokrisis

    I don't agree with that at all.
    If you state that the origins of life must be understood in order for us to understand the mind, that is a claim that entails its own burden of proof.

    This misrepresents my argument again. My argument is that there is a fundamental known difference between hardware and wetware as BC puts it. So it is up to you to show that this difference does not matter here.apokrisis

    Nonsense.
    If the mind is computational, then matter simply creates the environment in which computation can take place.
    That the matter must be living is a claim, and it too carries a burden of proof.

    The main issue at hand is whether or not the computational theory of the mind is valid.
    Not whether or not inorganic matter can compute.

    That would be why it seems easy to work from the top down. Computers are just mechanising what is already us behaving as if we were mechanical. But as soon as you actually dig into what it is to be a biological creature in an embodied relation with a complex world, mechanical programs almost immediately break down. They are the wrong stuff.

    Neural networks buy you some extra biological realism. But then you have to understand the detail of that to make judgements about just how far that further exercise is going to get.
    apokrisis

    Again, we are working from completely different assumptions about the theory of the mind.
    I am arguing a case for computational theory.
    You seem to be arguing a case for embedded cognition to the exclusion of computational models.

    You are also misleading about how conclusive the matter is... whether or not computational theories of the mind are valid is simply not settled in philosophy, even if you have decided they aren't.
    So please be charitable and don't assume that ignorance alone is what guides my views.

    I respect your position and grant that, if it is the most valid, then I am just wrapped up in some hype chamber.
    You will have to forgive me; the idea fascinates me so much that I want to believe.
  • General purpose A.I. is it here?
    Nice example of misunderstanding a cultural aretfact.tom

    Or it could be a nice example of a poorly constructed artifact.
    But I will assume the fault lies with me...and hope you can forgive that.

    And again it seems. The leap to computational universality (the hardware problem) is fully understood. The leap to universal explainer (the software problem) is not understood.tom

    The software problem...
    Our software can self-analyze... it can even analyze its own software.
    Why it should not be able to model itself is beyond me, and your short cryptic answers do not help me to understand (are you on a mobile phone or something?).

    Why should I agree that we cannot self-analyze sufficiently to explain how we are able to analyze?
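    As a trivial illustration of what I mean by software analyzing its own software: in Python (my choice here purely for illustration) a running function can read its own code object and build a small model of itself.

```python
# Sketch: a function that constructs a description of itself at runtime
# by reading its own code object. Trivial, but genuinely self-referential.

def self_model():
    """Return a small model of this very function."""
    code = self_model.__code__
    return {
        "name": code.co_name,           # the function knows its own name
        "arg_count": code.co_argcount,  # and its own signature arity
        "locals": code.co_varnames,     # and its own local variables
    }

model = self_model()
```

    This is obviously nothing like a mind modeling itself, but it shows there is no barrier in principle to syntax taking its own syntax as an object.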

    You seem to indicate that we are at square one of this problem with no clue where to start.

    That idea seems absurd to me considering the vast amount of effort in many different disciplines aimed at explaining how it is we are able to explain (after all the unexamined life is not worth living if you can examine it).

    If you believe the problem is immensely more vast than I realize, then you should at the very least suggest why I should believe that too.
  • What breaks your heart?
    Powerful post.
    Thank you for that.
  • Disproportionate rates of police violence against blacks: Racism?
    I tend to agree with this, and It would be nice if it were possible to talk in a more nuanced way about things. But I think it misses an important feature of the lived experience of women and of black people in the culture.unenlightened

    This is a valuable insight, unenlightened... to get their points across, both sides will resort to exaggeration.

    Forgive me if I exaggerated the extent to which minorities experience differences in attitudes that whites do not.

    I would like to distinguish racism as a belief system held by a few and not implemented in social institutions beyond marginal groups, from prejudice, an unconscious attitude that alters behaviour based on race or gender as the case may be. This latter is what your account leaves out, and since it is more or less universal, it is quite devastating in its effects.

    Mrs Un goes into a shop, and is immediately under suspicion; if there is a random check at the airport or the roadside, she is randomly chosen. Every relationship is tainted by not only racial prejudice, but also the performance of non-prejudice. White women in particular go out of their way to talk and act friendly, in a somewhat patronising way that quickly turns to resentment when it is not particularly appreciated. They want to have her as a friend as a symbol of their lack of prejudice - but at a safe distance, especially from their menfolk.

    This plays out in wider society cumulatively; each little incident is deniable, no racist language is used, no views expressed, but when one dude is stopped twenty times in his car by the police, and another never, with no violation recorded for either, there is something going on statistically that is unidentifiable in any single incident.

    Given that our recent past is that white supremacy and patriarchy were institutionally sanctioned and enforced, it is inevitable that there is a legacy of prejudice. And given the experience of this prejudice alongside its universal denial, it is inevitable that there is some anger and paranoia amongst the sufferers. It is especially the denial of the existence of a problem that is the daily experience of black people that becomes - maddening.
    unenlightened

    There has been a great deal of effort in the West to ensure that institutions and policies do not discriminate, despite possible prejudices held by those that create those institutions and policies.
    While I grant the system is not a perfect one, it should not be denied that these efforts are made, and are the concern of many in positions of power, be they of one race or another, and that these efforts continue to be made.

    If this fact is lost in a narrative of systemic denial of equality, it creates the impression that whites are collectively deliberate in their prejudices, as well as the impression that whites are unwilling to change, and that fosters resentment within minority culture.
    Whites do not collectively, as a majority, conspire to agree upon how minorities should be prejudiced against, so political narratives surrounding racial issues should not encourage that view.

    I believe that this is counterproductive and should be avoided.

    That is not to deny the experiences of minorities, or that those experiences have a role in political discourse about racial issues.
    It is only to say that being white should not be considered synonymous with being racist or prejudiced any more than being a minority should.
  • General purpose A.I. is it here?
    So you hope to discover the software by examining the hardware? The trouble is, since we don't know what we're looking for, how could we recognise it?tom
    That is a good point; maybe you are right.

    I thought we were just looking for a way to encode semantics relative to agency.

    But there could be much more to it than just this...I have to admit I don't know.

    Back to epistemology. If we want to create an AGI then the problem of how to create knowledge will have to be solved. You can't transfer knowledge from one mind to another. Instead one mind creates cultural artefacts, from which the other mind discerns something not contained within the artefact - its meaning. As Karl Popper said, "It is impossible to speak in such a way that you cannot be misunderstood." This, by the way, dispenses with the Chinese Room.tom

    If we had a thinking machine that interacted with humans there is no reason to assume it would not be able to communicate with the conventions humans use.

    It has been suggested that the human brain evolved the way it did in order to facilitate efficient knowledge transfer. Humans are unique (i.e. they are the last remaining species) in that they interpret meaning and intention - i.e. they create knowledge from artefacts and behaviours.

    Now, here's the amazing thing if this account of our evolutionary history is true: once you can create knowledge, there is no stopping you. This is a leap to universality. Once you are an explainer you are automatically a universal explainer because the same mechanisms are involved.

    Prior to the leap to universal explainer, there must have been another leap - the leap to computational universality in the human brain. This is a hardware problem, which we have long solved!
    tom
    I am not so sure.
    It could be that the brain's software became more efficient too, and that it is not strictly a hardware leap.
  • "Chance" in Evolutionary Theory

    True. We do have to face the fact that in reality some things are more likely to happen than other things. It may be that this is by design, but why that is the design is no simple matter to prove, if that is what you believe.
  • Disproportionate rates of police violence against blacks: Racism?
    In many social science departments of many western universities, they now teach that the west is fundamentally patriarchal, and fundamentally white supremacist. Racism is "power + privilege". They accept it as a brute fact that whites have all the power and all the privilege in the west, making all white people racist. It's hard to believe that this comes out of actual university curriculum, but it's becoming more and more evident. We're being told that as white men we're unaware of the naturally ingrained systems of oppression, which can be complex and subtle, that benefit us at the expense of women, of people color, even more so at the expense of women of color (and so on with a litany of possible identities which might entail facing any sort of obstacle in life which white men might not face). "Intersectionality" they call it, which is in itself worthy of it's own discussion.VagabondSpectre

    I don't agree with this... I would argue that academia teaches that, because blacks are a minority, they will have a psychological propensity to view the majority white culture with suspicion, especially in a historical context.
    I imagine that if I were a minority ethnicity it would have some psychological effect on me as well.

    The view that this automatically amounts to racism is more popular among minorities, sure... but I would still suggest the majority of minorities don't believe it.
    And certainly the majority of whites do not believe it.

    We have real examples from history to inform us what white supremacist institutions and policies look like.
    And that is not what our current system is.

    The same goes for patriarchy, we have real examples of cultures where women amount to property...and that is not how the west operates in terms of social values.

    At least in my experience most people don't agree with such views...I would say that is more of an extremist fringe.

    In some ways, any would be leader of the BLM movement is going to somehow have to put the "black" in "#BlackLivesMatter". It is very difficult to do this without amplifying a racial lens, but my own approach would be to address the issue of police use of force without focusing on racism or race as a fundamental causative factor behind the problem, and to also address the larger issue facing the black community, which leads to many of the events which spark BLM protests, which is crime in and of itself in black communities. The discussion must necessarily involve economics, politics and culture, and while it runs the risk of being obfuscated by likewise presuming that the economic, political, and cultural realities facing many black communities are symptoms of that larger white supremacist system contemporary schools of thought point to, it could still bare fruit. In summation, the BLM rhetoric at large is not outwardly "us against them", it is rather an idea lurks just under it's surface, and because of lost complexity and some inherently evocative underpinnings, it's now beginning to rear it's ugly head.VagabondSpectre

    I agree that BLM should have a more inclusive tone... after all, police brutality affects all of our society.

    I did do a Google search and reviewed two different BLM sites.
    The rhetoric was very racially charged, and as a white male I felt alienated by that message.
    As though my support or involvement would not be welcomed as anything but part of the problem.
    That is sad to me... I am certainly not motivated to be sympathetic to such a view.
  • General purpose A.I. is it here?

    To put it another way I don't agree that a mind is utterly dependent upon all of life's complicated systems.
    I think it is more dependent upon the computation that life is able to perform and that computers can be designed to perform similarly without necessarily being one-to-one biological or one-to-one simulations of the biological.
  • General purpose A.I. is it here?
    Great. So in your view general intelligence is not wedded to biological underpinnings. You have drunk the Kool-Aid of 1970s cognitive functionalism. When faced with a hard philosophical rebuttal to the hand-waving promises that are currency of computer science as a discipline, suddenly you no longer want to care about the reasons AI is history's most over-hyped technological failure.apokrisis

    I just don't think we have to crack the origin of life before we can crack the problem of machines with minds.

    That is the bottom up approach.
    We are reverse engineering from the top down as you pointed out.
    And I believe that somewhere in the middle is where the mind breakthrough will happen.
    I believe this because a great deal of what the body and brain do is completely autonomous from the mind...or at least what we mean by the term mind.

    I granted that of course those processes have feedback that informs the mind...but I do not see that a significant portion of them do.
    I think the level of detail regarding that feedback can be considered negligible (for example, I don't think we need to model the circulatory system, or the neurology that supports it, in order to achieve a mind... the list of systems I believe are unnecessary to model does not end there).
    This is where we seem to disagree most.

    That is nothing like what I suggest. Instead I say "mind" arises out of that kind of lowest level beginning after an immense amount of subsequent complexification.

    The question is whether computer hardware can ever have "the right stuff" to be a foundation for semantics. And I say it can't because of the things I have identified. And now biophysics is finding why the quasi-classical scale of being (organic chemistry in liquid water) is indeed a uniquely "right" stuff.
    apokrisis

    Most of what happens at the nano or quantum scale has little to do with how the brain forms semantics, in my view.
    On my view, semantics in the context of the mind is entailed by self-aware syntax.
    For a machine to create a model of itself does not require that it be biological, in my view.

    For this reason I think simulations of thought do not have to recreate the physics of biology at the nano scale before a mind can be modeled.

    Again we mostly have a different view on how the relevant terms ought to be defined.

    I explained this fairly carefully in a thread back on PF if you are interested....
    http://forums.philosophyforums.com/threads/the-biophysics-of-substance-70736.html

    So here you are just twisting what I say so you can avoid having to answer the fundamental challenges I've made to your cosy belief in computer science's self-hype.
    apokrisis
    Well I think I get it...Pattee argues that life may be like a unique state of matter at the quantum scale and we just might not be able to tell because of the measurement problem (I know it is much more complicated than that; I just could not think of a better analogy for brevity's sake).

    I just don't agree that intelligence is necessarily dependent upon that state.
    I don't see why computers cannot be the "right stuff" as you put it.
    Pattee does not provide conclusive evidence that such is the case.
    And you haven't either.

    Also you don't have to be so condescending in your replies.
    We can disagree without being insulting to each other...I may be wrong and stupid for what I believe, but I am entitled to be wrong and stupid and it does not hurt anyone but me.
    It kind of hurts my feelings man because I have a lot of respect for you.

    I thought you were referring to the gaudy self-publicist, Jeff Hawkins, of hierarchical temporal memory fame - https://en.wikipedia.org/wiki/Hierarchical_temporal_memory

    But Bayesian network approaches to biologically realistic brain processing models are of course what I think are exactly the right way to go, as they are implementations of the epistemic cut or anticipatory systems approach.

    Look, it's clear that you are not even familiar with the history of neural networks and cybernetics within computer science, let alone the way the same foundational issues have played out more widely in science and philosophy.

    Don't take that as an insult. It is hardly general knowledge. But all I can do is point you towards the arguments.

    And I think they are interesting because they are right at the heart of everything - being the division between those who understand reality in terms of Platonic mechanism and those who understand it in terms of organically self-organising processes.
    apokrisis

    Hey thanks.
    That cheered me a bit.
    You are right I am not well versed in the history of neural network theory.
    I guess I have a lot more research to do before I become aware of the issues you are referring to.

    My main concern is that some want to define the terms surrounding the issue in such a way that they are not decidable.
    That is not productive, because very obviously they must be decidable, or we could not know that thinking is what we are doing when we think.

    What we mean by the term mind includes that we ourselves can know definitively that we have one...and that means the term names something an algorithm can compute.

    So that is a foundational assumption about how the term should be defined that I have.
  • "Chance" in Evolutionary Theory

    I don't either to be honest.

    I think he may mean something like was mentioned in this thread.

    If so then I cannot comment on it because I have not researched it at all.
  • General purpose A.I. is it here?
    Right. Pattee requires you to understand physics as well as biology. ;) But that is what makes him the most rigorous thinker in this area for my money.apokrisis
    What I see as his main issue is that he believes there is something like the measurement problem when dealing with the origin of life.
    He seems to use the term epistemic cut synonymously with the measurement problem.

    Perhaps he is correct.

    For the problem of artificial life in general, sure, he may have a point...however the goal in the field of A.I. is not to recreate life artificially but to create artificial intelligence.
    The problem of general artificial intelligence is not equivalent to the problem of artificial life, I don't believe.
    So I don't agree that we have to solve the measurement problem to solve the problem of making general purpose A.I.

    If we did, and if the measurement problem is undecidable, then that would mean we could not answer yes or no as to whether we had general intelligence.
    This is why I do not believe defining our terms (intelligence/mind/consciousness) in this way would be productive, and it certainly is not clear that it is necessary to do so.
    It solves no issue, and creates one that is unnecessary if another definition is more suitable.
    That is to say, if our terms are decidable things.

    I suppose if you want to argue that the mind ultimately takes place at a quantum scale in nature then Pattee may well be correct and we would have to contend with the issues surrounding the measurement problem.

    But again I don't agree that we have to solve the issue of the origins of life (and any measurement problems that exist there) in order to solve the problem of machines that can think as well as, if not better than, humans do.

    Good grief. Not Mr Palm Pilot and his attempt to reinvent Bayesian reasoning as a forward modelling architecture?apokrisis

    Mr. Palm Pilot...I don't get it?
    :s

    What is wrong with Bayesian probability? I don't get that either.

    Are you saying that Bayesian statistical methods cannot be used to form an epistemic cut because of some fundamental issue?

    Some statistical method will have to be used, because the exact details of the initial conditions at the time of observation cannot be known.
    I don't see any issue with using Bayesian methods.
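    Since Bayesian updating keeps coming up, here is a minimal sketch of the kind of belief revision I mean. The reliability numbers are purely illustrative assumptions, not taken from any real system.

```python
# Minimal sketch of Bayesian belief revision over a hidden binary state.
# The likelihood values (0.8, 0.3) are illustrative assumptions only.

def bayes_update(prior, p_obs_if_true, p_obs_if_false):
    """Return P(state | observation) via Bayes' rule."""
    numerator = p_obs_if_true * prior
    evidence = numerator + p_obs_if_false * (1.0 - prior)
    return numerator / evidence

# Start maximally uncertain about the hidden state...
belief = 0.5
# ...then fold in three observations, each 80% likely if the state holds
# and 30% likely if it does not.
for _ in range(3):
    belief = bayes_update(belief, 0.8, 0.3)

print(round(belief, 3))
```

    The point is only that uncertainty about initial conditions is handled by maintaining a distribution rather than a single exact state.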
  • "Chance" in Evolutionary Theory

    Tendencies are observed as compared to non-tendencies.
    Intention would mean there is a reason why some things are tendencies while others are not.
    We still have to admit we don't know why some things tend to happen while others do not.
  • "Chance" in Evolutionary Theory

    lol

    If this were true consciousness would not have evolved.

    There is no survival advantage in believing in different outcomes that don't exist.

    However if possibility is real there is a tremendous survival advantage in being able to understand that possibilities exist.

    Case not closed even if you are right.

    If the universe were determined, consciousness would be unnecessary and in fact should not have come to exist.
    The very interesting question of why it does exist would leave the case open for discussion.
  • General purpose A.I. is it here?
    Pattee has written a ton of papers which you can find yourself if you google his name and epistemic cut.

    This is one with a bit more of the intellectual history.... http://www.informatics.indiana.edu/rocha/publications/pattee/pattee.html

    But really, Pattee won't make much sense unless you do have a strong grounding in biological science. And much of the biology is very new. If you want to get a real understanding of how different biology is in its informational constraint of material instability, then this is a good new pop sci book....
    apokrisis

    I have read some more and you are right, his writing is very technically laden.
    I was hoping for a more generalized statement of the problem of the epistemic cut, because I believe that the partially observable Markov decision process (POMDP) might be a very general solution to establishing an epistemic cut between the model and the reality in an A.I. agent.
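    To make the POMDP suggestion concrete, here is a toy belief-update sketch (all states, models, and numbers are hypothetical): the agent never reads the true state, only noisy observations, and what it carries forward is a probability distribution over states. That separation between the inaccessible state and the maintained model is the sense in which I think a POMDP formalises a cut between model and reality.

```python
# Toy POMDP belief update; the "fault" model and all numbers are invented.
# T[s][s2] = P(next state s2 | state s); O[s2][o] = P(observation o | s2).

T = {"ok":    {"ok": 0.9, "fault": 0.1},
     "fault": {"ok": 0.0, "fault": 1.0}}
O = {"ok":    {"ping": 0.8, "silence": 0.2},
     "fault": {"ping": 0.1, "silence": 0.9}}

def belief_update(belief, observation):
    """Predict through the transition model, then reweight by the observation."""
    predicted = {s2: sum(belief[s] * T[s][s2] for s in belief) for s2 in T}
    unnorm = {s2: predicted[s2] * O[s2][observation] for s2 in predicted}
    total = sum(unnorm.values())
    return {s2: p / total for s2, p in unnorm.items()}

belief = {"ok": 0.5, "fault": 0.5}   # the agent's model, not the true state
belief = belief_update(belief, "silence")
print(belief)
```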

    I noticed he draws on the work of von Neumann so I will pursue that as well.

    Thanks again for your posts and again you have given me a lot to think about.
  • "Chance" in Evolutionary Theory

    It may be that nature does favor certain tendencies.
    But until we can use this assumption to make better models, it is not a necessary assumption.
    If we could predict nature better on the assumption that it has favored tendencies, that evidence would make it unreasonable to deny the assumption.

    As far as I am aware, there is no model that makes better predictions using that assumption, so I am not compelled by it.
  • "Chance" in Evolutionary Theory

    I think intention is knowing that there are different outcomes and having a preference among those outcomes.
    So if nature were intentional it would have a preferred outcome for things like subatomic behavior and coin tosses.

    If we say that nature does have intention, this does not help our models, because there is no indication of why one outcome is preferred as opposed to another.
    If we say that nature has intentions, then we are forced to admit we cannot fathom from the evidence what those intentions may be.

    Any given speculation as to what those intentions were would be just as valid as the next, unless you were able to predict outcomes more accurately than current models do.
    Only then would you be able to claim that your assumption that nature has intentions is validated.
  • "Chance" in Evolutionary Theory
    They 'know' how to behave apparently, but it is implausible that they could know that they know. But, this is also true, it is commonly thought, of most or even all animals.

    Perhaps to know that you know, or at least think that you know, requires symbolic language; the kind of self-reflection that it provides. The same could be said, I think, about knowing facts, in the discursive sense at least, and also being able to conceive of ostensive facts, and the idea that things may not be as they seem.
    John

    I mean to say that I cannot make sense of the notion that nature is an intentional being.

    What does it mean to say reality or nature is intentional?

    Saying this certainly does not improve our understanding of nature or reality.
    That is to say we cannot make better models of reality and nature under that assumption.
  • "Chance" in Evolutionary Theory
    Then you don't understand the point. Probability, possibility, and chance, only exist in relation to an intentional being. That is why it is necessary to bring in the intentional being.Metaphysician Undercover
    No, this is simply wrong...unless you mean to suggest that subatomic particles are intentional beings.
    If so I can't make sense of that view.
    Sorry.
    Epistemic possibility, logical possibility, exists only as a property of the intentional being's knowledge. Ontological possibility exists only in relation to what the intentional being can and cannot do. That the intentional being can flip a coin to produce a 50/50 probability, roll a die, create a lottery, or create a stochastic system, all of these being artificial creations of randomness, provides no evidence that such a thing as randomness could exist naturally. Therefore any claim that probability is something natural is what is unjustified.Metaphysician Undercover

    Again, subatomic particles don't have knowledge.
    I can't make sense of the notion that they do.

    Ontological possibility simply means that the same laws of physics can have different outcomes.
    Sometimes you can get heads and sometimes you can get tails.
    The laws of physics are not violated because both outcomes are ontologically possible...just not at the same time.
    This type of ontological possibility is fundamental at the quantum scale.

    Quantum mechanics is probabilistic by definition, not by interpretation.
    That is to say, the best model of nature we have so far was designed under the assumption that ontological possibility is real regardless of whether an observer has some intention or not.

    That this model works so well is justification enough that the assumption is valid.
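    To illustrate "probabilistic by definition": the Born rule turns squared amplitudes directly into outcome probabilities, so identical laws still yield varying outcomes. The amplitudes below are made up for the example.

```python
import random

# Born rule sketch with invented amplitudes for a two-outcome measurement.
random.seed(1)

amplitudes = {"up": 0.6, "down": 0.8}              # normalised: 0.36 + 0.64 = 1
probs = {k: a * a for k, a in amplitudes.items()}  # Born rule: P = |amplitude|^2
assert abs(sum(probs.values()) - 1.0) < 1e-9

# The same law (the same probabilities) gives different outcomes run to run.
outcomes = [random.choices(list(probs), weights=list(probs.values()))[0]
            for _ in range(10)]
print(probs, outcomes)
```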
  • General purpose A.I. is it here?
    That is the question. Does it actually learn its own semantics or is there a human in the loop who is judging that the machine is performing within some acceptable range? Who is training the machine and deciding that yes, it's got the routine down pat?apokrisis

    Again, with regular humans there is a human in the loop.
    As you grew from an infant into a child it was not in a vacuum...you learn from the expectations of others.

    But yes, sometimes humans have to intervene and give guidance.

    However, all this amounts to is reward and penalty value assignment changes.
    If DeepMind gets stuck on a problem in which it needs to explore more to be efficient, then the value of the reward for exploring is tweaked.
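    As a toy picture of what "reward and penalty value assignment changes" means, here is a generic tabular Q-learning sketch. This is not DeepMind's actual code; the exploration rate EPSILON is exactly the sort of single number a designer might tweak when the agent needs to explore more.

```python
import random

# Generic tabular Q-learning sketch; all names and numbers are illustrative.
ALPHA, GAMMA = 0.5, 0.9   # learning rate and discount factor
EPSILON = 0.2             # exploration rate: the tunable "explore more" knob

def q_update(q, state, action, reward, next_state, actions):
    """Move Q(s, a) toward reward + discounted best next-state value."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

def choose(q, state, actions):
    """Epsilon-greedy: mostly exploit learned values, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: q.get((state, a), 0.0))

q = {}
q_update(q, "start", "right", 1.0, "goal", ["left", "right"])
print(q[("start", "right")])
```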

    The thing is that all syntax has to have an element of frozen semantics in practice. Even a Turing Machine is semantic in that it must have a reading head that can tell what symbol it is looking at so it can follow its rules. So semantics gets baked in - by there being a human designer who can build the kind of hardware which ensures this happens in the way it needs to.

    So you could look at a neural network as a syntactical device with a lot of baked-in semantics. You are starting to get some biological realism in that open learning of that kind takes place. And yet inside the black box of circuits, it is still all a clicking and whirring of syntax, as no contact with any actual semantics - no regulative interactions with material instability - is taking place.

    Of course my view relies on a rather unfamiliar notion of semantics perhaps. The usual view is based on matter~mind dualism. Meaning is held to be something "mental" or "experiential". But then that whole way of framing the issue is anti-physicalist and woo-making.

    So instead, a biosemiotic view of meaning is about the ability of symbol systems - memory structures - to regulate material processes. The presumption is that materiality is unstable. The job of information is to constrain that instability to produce useful work. That is what mindfulness is - the adaptive constraint of material dynamics.

    And algorithms are syntax with any semantics baked in. The mindful connection to materiality is severed by humans doing the job of underpinning the material stability of the hardware that the software runs on. There is no need for instability-stabilising semantics inside the black box. An actual dualism of computational patterns and hotly-switching transistor gates has been manufactured by humans for their own purpose.
    apokrisis

    I am arguing that the semantics in this example are not simply baked in, because the algorithm can learn on its own to shift biases as it discovers new information about its environment and itself in relation to its environment.

    I don't agree with the notion that humans have semantics from birth (perhaps some); semantics is something we learn, not just by ourselves but from others.

    Semantics is a dynamic thing, and this is the first example of an algorithm with a robust dynamic semantic capability.
    That is to say, it is flexible enough that it can handle the dynamic semantics of a variety of tasks with a considerable degree of autonomy.

    This system can handle the instability of environments (I gave the example above of a system that it learned to regulate).

    Yes. But the robot hand is still a scaled-up set of digital switches. And a real hand is a scaled-up set of molecular machines. So the difference is philosophically foundational even if we can produce functional mimicry.

    At the root of the biological hand is a world where molecular structures are falling apart almost as soon as they self-assemble. The half-life of even a sizeable cellular component like a microtubule is about 7 minutes. So the "hardware" of life is all about a material instability being controlled just enough to stay organised and directed overall.

    You are talking about a clash of world views here. The computationalist likes to think biology is a wee bit messy - and it's amazing wet machines can work at all really. A biologist knows that a self-organising semiotic stability is intrinsically semantic and adaptive. Biology knows itself, its material basis, all the way down to the molecules that compose it. And so it is no surprise that computers are so autistic and brittle - the tiniest physical bug can cause the whole machine to break down utterly. The smallest mess is something a computer algorithm has no capacity to deal with.

    (Thank goodness again for the error correction routines that human hardware designers can design in as the cotton wool buffering for these most fragile creations in all material existence).
    apokrisis

    My point was that there is a difference between engineering to replicate one-to-one systems and designing to accomplish one-to-one utility.
    We can often achieve the same utility without modeling the exact system.

    But I will concede your main point here: that a human hand can adapt by the process of evolution as a consequence of its complicated systems, whereas a robot hand will never be able to adapt in that way.
    I don't see that as a major one, because evolution takes so long to produce discernible adaptation, and because we do not necessarily want a robot model of the human hand to adapt under environmental pressure over the course of many, many generations.


    But the question is how can an algorithm have semantic understanding in any foundational sense when the whole point is that it is bare isolated syntax?

    Your argument is based on a woolly and dualistic notion of semantics. Or if you have some other scientific theory of meaning here, then you would need to present it.
    apokrisis

    I tried to explain that there is a general sense of the term mind as something others have.
    As a term that means a general way of thinking.
    I believe this sense of the term mind is an algorithm, and it is how we account for the fact that vastly different people can agree on semantics...because they learn the same problems, form the same solutions to those problems, and are taught by people who have the same general algorithm.

    I am suggesting that there is a single algorithm for general intelligence that not only we possess but others possess and that is how we can answer the question of whether or not we have a mind with a yes or no without error.

    If there is no general intelligence algorithm, it is quite a curious thing that so many different individuals and different cultures should share so much in common.
    One would expect that, if each mind were not built on a general template but was rather its own unique iteration, then there would be much more variety, and the minds of other humans would seem utterly alien to us more often than they would seem similar to ourselves.
  • General purpose A.I. is it here?

    Even if this algorithm makes that possible, it would still take quite a while to teach it anything resembling the common sense we expect of developed humans.
  • "Chance" in Evolutionary Theory
    The point though, is that each of these two types of "possibilities" only exist in relation to the intentional being. In relation to the past, there is possibility with respect to the intentional being's knowledge. In relation to the future, there is possibility with respect to what the intentional being can do. Remove the intentional being, and there is no such possibility of either type, though we could assume that the world would continue to existMetaphysician Undercover

    Sorry but this was just inserted with no justification.
    There is no reason to create an intentional being to understand nature when probability does a fine job of describing nature without the existence of an intentional being.
  • General purpose A.I. is it here?
    Second, I think there is a strong tendency to underrate animal (wet) intelligence. It isn't learning how to recite Beowulf from memory that is the only impressive human achievement. It's also remembering the odor of the room where we learned Anglo-Saxon and now feel nostalgia for that faint musty odor when we recite Beowulf, that's distinctive. [SEE NOTE] Dry intelligence can replay Beowulf, but it can't connect odors and texts and feelings. It can't feel. It can't smell.Bitter Crank

    This is more a matter of sensory apparatus; dry intelligence would be able to record and recall this input if it had the sensory apparatus to record it.

    Dry intelligence can't connect with the feelings of a dog excited by the walk it's about to take. Dry intelligence can't lay on the floor and determine whether the guy walking around is getting ready to go to work (alone) or is going to take the dog for a walk. Dogs can do that. They can tell the difference between routine getting ready to go to work and getting ready to go out of town (which the dog will probably disapprove of, considering what happened the last time "they" left). So can cats.Bitter Crank

    This algorithm does have primitive feelings.
    It understands from experience that there is reward in the world and there is penalty in the world.
    It also understands that which of these it experiences will depend on the choices it makes in its environment.

    Wet brains and wet intelligence have developed over an exceedingly long time. Wet brains aren't the only defense animals have, but they are remarkably effective. A rat's wet brain does, and will, outperform Deep Blue and all of its Blue successors, Screwed Blue, Dude Blue, Rude Blue, etc. because it has capabilities that cannot be reproduced by an algorithm.

    It's not the algorithm, it's the structure of the body and its history.

    [NOTE] I never learned Anglo Saxon and I can't recite Beowulf. I can pretend I did, and even feel like I did. Betcha Deep Blue can't do that.
    Bitter Crank

    This is a different example from Deep Blue, because it has the above-mentioned reinforcement learning techniques employed.
    Deep Blue had to be programmed with what the problem of chess was; that program had to be hand-crafted by human engineers.
    AlphaGo had to have its ability to learn hand-crafted, but once that was done it learned what the problem of Go was, learned what the solution to that problem is (to win), and it learned all this from scratch.

    This algorithm is also different because it is not limited to playing Go.
    Deep Blue can only play chess unless it is reconfigured by human programmers (it would have to use a different algorithm to learn a different game, and it would not perform well at Go, because Go has far too many possible moves to solve with brute-force techniques).
    DeepMind, on the other hand, can learn to play Atari games in the same way it learned to play Go.

    This algorithm is a breakthrough because, so far, it appears that it can be applied to any problem in general.
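    The sense in which one algorithm can apply across games can be sketched as follows. This is a deliberately crude bandit-style toy, nothing like AlphaGo's actual architecture; every name and number in it is invented for illustration.

```python
import random

# One unchanged learning loop, two different "games" sharing only an
# interface: the learner knows nothing about either game's rules.
random.seed(0)

def train(step_game, actions, episodes=500):
    """Learn action values from reward alone, knowing nothing about the game."""
    value = {a: 0.0 for a in actions}
    counts = {a: 0 for a in actions}
    for _ in range(episodes):
        if random.random() < 0.1:                      # occasionally explore
            a = random.choice(actions)
        else:                                          # otherwise exploit
            a = max(actions, key=value.get)
        reward = step_game(a)
        counts[a] += 1
        value[a] += (reward - value[a]) / counts[a]    # running average
    return max(actions, key=value.get)

# Two different games; the learner's code never changes between them.
coin_game = lambda a: 1.0 if a == "heads" else 0.0
dice_game = lambda a: 1.0 if a == "high" else 0.2

best_coin = train(coin_game, ["heads", "tails"])
best_dice = train(dice_game, ["low", "high"])
print(best_coin, best_dice)
```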
  • General purpose A.I. is it here?
    Ok. But from my biophysical/biosemiotic perspective, a theory of general intelligence just is a theory of life, a theory of complex adaptive systems. You have to have the essence of that semiotic relation between symbols and matter built in from the smallest, simplest scales to have any "intelligence" at all.

    So yes, you are doing the familiar thing of trying to abstract away the routinised, mechanics-imitating, syntactical organisation that people think of as rational thought or problem solving. If you input some statement into a Searlean Chinese room or Turing test passing automaton, all that matters is that you get the appropriate output statement. If it sounded as though the machine knew what you were talking about, then the machine passes as "intelligent".
    apokrisis

    I don't mean to sweep away your criticisms.
    I freely admit that if we are using a biological metric of life, then we are nowhere close to simulating intelligence.
    If simulating biology is the criterion, we can safely conclude machines don't think.

    So again, fine, its easy to imagine building technology that is syntactic in ways that map some structure of syntax that we give it on to some structure of syntax that we then find meaningful. But the burden is on you to show why any semantics might arise inside the machine. What is your theory of how syntax produces semantics?apokrisis

    I argue that because this algorithm has to learn from scratch, it must discover its own semantics within the problem and the solution to that problem.

    Take the Go example: AlphaGo would not be able to learn to play the game as well as humans unless it were forming semantics.
    Because it has to learn the problem and learn the solution, often at the same time, it learns to have biases about different syntactical relationships within the context of the problem and the solution.
    Not all syntactical relationships are equal within the context of what the problem is and what the solution is.

    You may argue that it is a rather crude and primitive form of semantics when compared to humans and perhaps you are right...but it is still a form of semantics.

    I might use another analogy.
    Consider the task of creating a robot hand that is as dexterous as the human hand.
    You might argue that the finished product cannot sense what it grasps, that it has no nerves, no skin, no bones, no blood coursing through it, and then claim this is not a hand.

    But if we ask whether or not it is a hand by a different criterion, whether or not it can perform any action a human hand can perform, then the problem is very different.

    Instead of trying to replicate the human hand we are trying to replicate the utility of a human hand, and that is a far less difficult engineering goal.

    So again...this algorithm, if it does have semantic understanding, does not and never will have human semantic understanding.
    But I do not agree that we can be sure it won't be able to match the utility of human semantic understanding at a human level.

    Biology's theory is that of semiotics - the claim that an intimate relation between syntax and semantics is there from the get-go as symbol and matter, Pattee's epistemic cut between rate independent information and rate dependent dynamics. And this is a generic theory - one that explains life and mind in the same physicalist ontology.apokrisis

    Pattee's epistemic cut was not very clear to me, and he seems to have coined the term.
    Do you have any references for the epistemic cut?
    I did not find it as an entry in the Stanford Encyclopedia of Philosophy.

    I tried to read through your link but got hung up on that term; the definition is not clear to me.

    But computer science just operates on the happy assumption that syntax working in isolation from material reality will "light up" in the way brains "light up". There has never been any actual theory to back up this sci fi notion.

    Instead - given the dismal failure of AI for so long - the computer science tendency is simply to scale back the ambitions to the simplest stuff for machines to fake - those aspects of human thought which are the most abstractly syntactic as mental manipulations.

    If you just have numbers or logical variables to deal with, then hey, suddenly everything messy and real world is put at as great a distance as it can possibly be. Any schoolkid can learn to imitate a calculating engine - and demonstrate their essential humanness by being pretty bad, slow and error-prone at it, not to mention terminally bored.
    apokrisis

    Again I recall my hand example.
    It is exceedingly difficult to simulate the human hand down to the finest detail.
    It is not nearly so difficult to engineer a machine that replicates the utility of a human hand.

    I believe a similar thing applies to A.I.

    Then we humans invent an actual machine to behave like a machine and ... suddenly we are incredibly impressed at its potential. Already our pocket calculators exceed all but our most autistic of idiot savants in rapid, error-free, syntactical operation. We think if a pocket calculator can be this unnaturally robotic in its responses, then imagine how wonderfully conscious, creative, semantic, etc, a next generation quantum supercomputer is going to be. Or some such inherently self-contradicting shit.apokrisis

    Well again I understand that you believe there is a fundamental problem that engineering human level A.I. faces.
    I will try to read through Pattee's work again to see if I can address that point.
  • General purpose A.I. is it here?
    So every time I point to a fundamental difference, your reply is simply that differences can be minimised. And when I point out that minimising those differences might be physically impractical, you wave that constraint away as well. It doesn't seem as though you want to take a principled approach to your OP.apokrisis
    No that is not it at all.

    I was seeking to make a distinction between simulating a human being and simulating general intelligence.

    I did concede that if we must digitally simulate a human at the nano scale before we can hope to simulate a mind, then this would be a monumental task.
    And perhaps you are also correct that it may be impossible.

    I just don't use that criterion.
    I was using the criterion that if a computer could learn any problem and/or solution to a problem that a human could, then that would be a mind in the general sense.
    Why, if a computer could do such a thing, is that not a mind in a general sense?

    I believe there are two distinct meanings of the term mind.
    One meaning is the intimately personal and rich inner self.
    The second meaning is the sense in which others have minds...if we took away all the differences of personal minds and focused on the general template of what the term means, then the problem of creating a mind is minimized to a considerable degree, I would argue.

    Anyway, another way of phrasing the same challenge to your presumption there is no great problem here: can you imagine an algorithm that could operate usefully on unstable hardware? How could an algorithm function in the way you require if its next state of output was always irreducibly uncertain? In what sense would such a process still be algorithmic in your book if every time it computed some value, there would be no particular reason for the calculation to come out the same?apokrisis

    I believe we can argue the same thing of a human.
    If our brain suffers trauma and damage, it can result in severe impairment.

    I just don't agree that the top down approach is necessarily faulty all the way up until a nano scale human simulation is achieved.

    I suspect that somewhere in the middle of top down design something mindlike should be possible.

    The reason I believe this is that a lot of the human body's and brain's functions are autonomous of what we mean by the term mind. I understand your point that there is feedback from these systems that informs consciousness/mind (though the extent to which it does is unclear), and that this is what contributes to what we call an individual person. But mind also has a more general meaning, and I am suggesting that should be possible before we achieve nano-scale human simulation.
  • General purpose A.I. is it here?
    No, that's not a contradiction at all. As far as I am concerned it is a statement of fact.

    Over and out on this thread, thanks.
    Wayfarer

    You toss these phrases off, as if it is all settled, as if understanding the issues is really simple. But it really is not, all you're communicating is the fact that you're skating over the surface. 'The nature of knowledge' is the subject of the discipline of epistemology, and it's very difficult subject.Wayfarer

    I was not trying to be dismissive, and I did not intend to seem as though I do not appreciate that epistemology is a vast subject with a great many complexities.
    But in fairness we have to start somewhere, and I think starting with a formal definition of meaning and/or understanding, as learning what a problem is and learning what that problem's solution is, is a reasonable place to begin.

    Whether computers are conscious or not, is also a really difficult and unresolved question.Wayfarer

    Well, I did argue on the first page that the mind/consciousness must be decidable (it must be possible to answer the question "do I have a mind/consciousness?" with a yes or no, correctly, each time you ask it).
    If consciousness and/or the mind is undecidable, these terms are practically useless to us philosophically, and I certainly don't agree we should define them thus.
    (sorry, I can't link to my OP, but it is on the first page)
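
    For what it's worth, the computability-theory sense of "decidable" being leaned on here can be made concrete: a question is decidable if there is a procedure that always halts with a correct yes/no answer. A small sketch, using primality as my own illustrative stand-in for such a question (not an example from the thread):

    ```python
    def is_prime(n: int) -> bool:
        # A decider: for every input it halts and returns a correct
        # yes/no answer, so "is n prime?" is a decidable question.
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    # By contrast, Turing showed there is no such always-halting,
    # always-correct procedure for "does this program halt on this
    # input?" -- that question is undecidable.
    ```

    The claim in the thread is that "do I have a mind?" should be like the first kind of question, not the second.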

    The implication here is that the mind/consciousness is an algorithm, so that just leaves the question of which algorithm.

    But you still make a valid point...the question of which algorithm constitutes the mind/consciousness is hardly a settled matter.
    Sorry if I gave the impression that I believed it was settled...I intended to give the impression that I see no reason why it should not become settled, given that we are modeling after our own minds and brains.

    No, that's not a contradiction at all. As far as I am concerned it is a statement of fact.Wayfarer
    I guess I will have to take your word for it.

    Over and out on this thread, thanks.Wayfarer

    Yes, thank you too for posting here...you made me think critically about my own beliefs...hopefully that is some consolation for the frustration I caused.
    Again, sorry about the misunderstanding; it was not my intention to be curt.
  • General purpose A.I. is it here?
    And for programmable machines, we can see that there is a designed in divorce between the states of information and the material processes sustaining those states.apokrisis

    The same is true of the brain and the mind, I believe.
    It has taken us hundreds of thousands of years to come to study the mechanisms of the brain and how the brain relates to the mind.
    We don't know, personally, how our own brain and/or subconscious works.

    And as I have pointed out we would have to build in this selfhood relation from the top down. Whereas in life it exists from the bottom up, starting with molecular machines at the quasi classical nanoscale of the biophysics of cells. So computers are always going against nature in trying to recreate nature in this sense.apokrisis

    I am not sure I agree that we would have to completely reverse engineer the body and brain down to the nano scale to achieve a computer that has a mind...to achieve a computer that can simulate what it means to be human, sure...I must concede that.

    I would instead draw back to my question of whether or not we could reverse engineer a brain sufficiently that the computer could solve any problem a human could solve.
    Granted, this machine would not have the inner complexity a human has, but I still believe we would be forced to conclude, in a general sense, that such a machine had a mind.

    So in top-down/bottom-up terms, I think that when we meet in the middle we will have arrived at a mind, in the sense that computers will be able to learn anything a human can and solve any problem a human can.

    Of course the problem of top-down design only gets harder as you scale, so the task of creating a simulated human is a monumental one, requiring unimaginable breakthroughs in many disciplines.
    If we define the term mind in such a way that this is the necessary prior criterion, then we still have a very long wait on our hands, I must concede.