• Anthony
    197
    When consciousness itself isn't entirely understood, in what way wouldn't it be prevaricating trying to assert a machine can be conscious? There appears to be some sort of dishonesty involved. When it isn't even known what human consciousness is, how on earth could anyone be mentally lazy enough to skip over this fact and project a half-baked conception of what it is into a machine? Indubitably, our species has formed a very, very close bond with computers and smartphones, so it may be that such close communication has changed the brain's wiring according to algorithms, making people believe mind/consciousness phenomena are far simpler than seems to be the case.
  • Anthony
    197
    Living things, including brains, don't just "restructure their hardware" randomly, or for no reason. Input is required to influence those changes (learning). Learning requires memory. Your input (experiences of the world) over your lifetime has influenced changes in your hardware and software. A robot with a computer brain could be programmed to update its own programming with new input. Actually, computers do this already to a degree. Input from the user changes the contents of the software (the variables) and how the hardware behaves.Harry Hindu
    If learning requires memory, and there's no other kind of learning which doesn't require memory, then how would an infant begin to learn in the first place? There is a chicken-and-egg problem. Young children have only a modicum of memory to navigate the infinitely complex world they live in (far more complex than could ever be remembered), yet they aren't completely helpless. Why? Imagination and intelligence. Imagination isn't learned or all that dependent on memory, in the same way that understanding, judgement, thinking, and mentation are unlearned and not dependent on memory (they're built into the mind as it becomes conscious). Even though it is a bit of an enigma, the mind has its own organization prior to any memory; that organization is intelligence. Memories themselves are stuck the way they are, with no ability to figure out how to combine themselves with other memories into wholes or to divide into smaller fragments. What is it that creates these combinatorial contexts of memories and edits them? It's not the memories themselves. Memories aren't self-emergent; they're combined or divided, emergently or reductively, by a preexisting manifold of mental organization.
  • Harry Hindu
    5.1k
    You guys have read my full post.

    Is there disagreement, for example, in that you uphold life itself to be entropic rather than negentropic?
    javra
    No, the problem is that your definition of entropy and negentropy isn't clear. Where do you draw the boundary of life and non-life? Are dragonflies entropic or negentropic? What about starfish, jellyfish, an oak tree, mushrooms, bacteria or viruses? Life is just more complex non-life. The solar system is a closed system of order (negentropic) that has existed for billions of years and will continue to exist for billions more, well beyond your own negentropic state. In order for our bodies to exist for any extended period of time, there must have been some established negentropic state of affairs already in place for us to evolve. Our existence is part of the increasing complexity that already exists in this corner of the universe.

    You guys want to say that we'll be making negentropic computers soon. OK. I can't argue with this issue of faith (other than by questioning what the benefits would be of so doing). But my point was that until it's negentropic it's not thinking, or understanding, or intelligent, etc.javra
    I never said that today's computers are as complex as we are, but they are complex. Do you know how one works, or how to program one? Do you know how to manipulate another person, especially one that you know well? I already explained the differences between robots and human beings, yet there are similarities as well. When it comes to thinking, I think we are more similar, in that thinking is simply processing information for a purpose. It's just that the computer doesn't have its own purpose (not that it doesn't think), because human beings would find that scary.
  • Harry Hindu
    5.1k
    If learning requires memory, and there's no other kind of learning which doesn't require memory, then how would an infant begin to learn in the first place?Anthony
    Instincts, which are built-in (unconscious) knowledge thanks to natural selection.

    Check out the field of evolutionary psychology.
  • Anthony
    197
    Instincts, which are built-in (unconscious) knowledge thanks to natural selection.

    Check out the field of evolutionary psychology.
    Harry Hindu

    Then there is a kind of knowing (distinct from knowledge, which must be recalled as memory) which spans across lifetimes. Intelligence = instinct in a preconscious way (do all individuals have an equal amount of instinct, and why or why not?). Memories are only memories if they can be recalled consciously. It's a stretch applying evolutionary psychology to AI. Our primal intelligence is quite different from AI, are we not admitting here? Generally, human intelligence is far more complex and cryptic than AI, which has always implied programming as knowledge issuing from the head of the learned programmer. There's no knowing involved in carrying out instructions. When soldiers lose communication with their orders, they run amok, unintelligent.
  • javra
    2.6k
    No, the problem is that your definition of entropy and negentropy isn't clear.Harry Hindu

    What parts were unclear? I thought I’d simplified the concepts into very clear terms. Entropy: energy moving toward thermal equilibrium. Negentropy: energy moving away from thermal equilibrium.
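
    If it helps, here is the same distinction in rough thermodynamic notation (just a gloss of my own informal wording above; S is the system's entropy, S_max its value at equilibrium, and J is Brillouin's negentropy):

```latex
% A gloss on the informal definitions above, nothing more.
% S: entropy; S_max: its equilibrium value; J: Brillouin's negentropy.
\[
  dS = \frac{\delta Q_{\mathrm{rev}}}{T}, \qquad
  \frac{dS}{dt} \ge 0 \ \text{(isolated system)}, \qquad
  J = S_{\max} - S .
\]
% "Entropic": S climbing toward S_max, i.e. toward thermal equilibrium.
% "Negentropic": J held above zero, which an open system can manage
% only by exporting entropy to its surroundings.
```

    On this reading a living body is negentropic only as an open system: it keeps its own entropy below equilibrium by dissipating more entropy outside itself.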

    Where do you draw the boundary of life and non-life? Are dragonflies entropic or negentropic? What about starfish, jellyfish, an oak tree, mushrooms, bacteria or viruses?Harry Hindu

    Hm. Viruses, viroids, and prions are non-life; I don't know of any such thing as a dead virus, viroid, or prion. Bacteria and everything more complex is life; all these can be either living (and not decomposing) or dead (and decomposing in entropic manners).

    We could approach this issue via the cybernetic concept of autopoiesis. But without background on these concepts we might be running around in circles. And I grant that my knowledge of cybernetics is only second-hand. Still, I know something about autopoiesis.

    Life is just more complex non-life.Harry Hindu

    This can translate into “negative entropy is just a more complex form of entropy”.

    Can you provide justification for this? To be clear, something that is not mere speculation.

    Yes, our empirical world evidences that non-life developed into life. I’m not disputing this. But you’re forgetting that no one understands how, and that there is a clear distinction between life, which is animate, and non-life, which is inanimate.
  • Devans99
    2.7k
    You guys want to say that we'll be making negentropic computers soon. OK. I can't argue with this issue of faith (other than by questioning what the benefits would be of so doing). But my point was that until it's negentropic it's not thinking, or understanding, or intelligent, etc.javra

    I don't see why an entropic entity cannot display intelligence. For example, a software neural net is trained and learns a specific task, and software programs can in general modify their own logic and thus grow/learn.
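
    To make that concrete, here is a minimal sketch (a toy illustration of mine, not a claim about any real AI system) of a tiny neural net that starts out unable to compute XOR and whose weights are rewritten purely by the inputs it is shown:

```python
import numpy as np

# Toy illustration: a 2-4-1 sigmoid network trained on XOR with
# hand-written gradient descent. The only thing that changes the
# program's behavior is the data it is fed.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error, by hand.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # The "modifying its own logic" step: parameters change with input.
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # after training: typically close to [[0],[1],[1],[0]]
```

    Whether that weight-updating deserves the word "learning" is of course exactly what's in dispute here.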
  • javra
    2.6k


    I can relate to inanimate things being capable of displaying what we interpret to be intelligence. A smartphone is after all considered to be “smart”. It learns your typing habits, for example. And the greater the complexity of these processes, the more intelligence seems to be displayed.

    I however question the ontological verity of real, rather than faux, intelligence being applicable to givens devoid of animate agency. Which again resolves into issues of life v. non-life.

    Take the robots found in Transformers or Terminator, for example. They were alive insofar as they could be killed, subsequent to which they’d undergo entropic processes of decomposition. A better example for me are the organic androids of Blade Runner. In all these cases we can relate ourselves as living beings to other forms of living systems. This is not too different from our capacity to relate to extraterrestrial aliens in sci-fi. We, for example, understand that they strive and suffer in manners that are in some ways similar to us, which enables us to hold sympathy for them (in certain situations).

    But these are all examples of negentropic beings. What we have today is not this. I take today's AI to be complex decoys of life and of intelligence proper. But not instances of real intelligence as it holds the potential to apply to life.

    As to terms, their semantics can differ between different cohorts of people. An electronic watch can be deemed intelligent. Even a crystal which “knows” how to grow over time. But this is not the intelligence I refer to when I express the term (real, or actual) intelligence in non-allegorical manners.

    From Wiktionary: Intelligence: (1) Capacity of mind […]

    I associate mind with life.
  • Baden
    16.3k


    Was caught in spam filter. Restored. Let's refer to it as a case of artificial stupidity.
  • Devans99
    2.7k
    I however question the ontological verity of real, rather than faux, intelligence being applicable to givens devoid of animate agency. Which again resolves into issues of life v. non-life.javra

    I agree that AI today is mostly faux intelligence, but that's just because AI is at a very primitive stage of development. Nature has made intelligent machines using matter and evolution; we have the same matter plus design, so we should be able to achieve the same results (and eventually better).

    We, for example, understand that they strive and suffer in manners that are in some ways similar to us, which enables us to hold sympathy for them (in certain situations).javra

    Animals and humans are driven purely by physical/emotional pain/pleasure. We seek to maximise pleasure and minimise pain. It would be interesting if we could give a computer a nervous system and the pain/pleasure drivers we have. As we saw in Blade Runner, the result might be computers that are indistinguishable from us.

    But these are all examples of negentropic beings. What we have today is not this. I take today's AI to be complex decoys of life and of intelligence proper. But not instances of real intelligence as it holds the potential to apply to life.javra

    I don't see anything particularly special about life: we are just complex machines. We and computers are both just driven by cause and effect: our outputs are determined solely by our inputs.

    I think intelligence is a broad church encompassing everything from humans to very simple animals like single-celled creatures; they are all machines of different levels of complexity and they all exhibit intelligence of some form. I don't think we can argue that life has to reach a certain level of complexity before it is intelligent; I think intelligence is a property that all life possesses in differing levels. And as life is just a form of machine, we can say that computers also possess intelligence (admittedly very limited at the moment).

    There are aspects of intelligence (self-awareness, consciousness) that only the more advanced forms of life possess, but I think these aspects are outgrowths of simpler intelligence rather than something unique to life that could not be achieved with computers.
  • javra
    2.6k
    I don't see anything particularly special about life: we are just complex machines.Devans99

    This seems to be the very crux of the disagreement. I could phrase it in terms of there being a pivotal difference between a) pulling the plug on a very complex machine and b) pulling the plug on some living being who’s on life support. It’s not the same thing.

    Still, this ultimately revolves around differing ontological perspectives regarding the nature of agency … which winds its way toward the ontological nature of reality.

    There are aspects of intelligence (self-awareness, consciousness) that only the more advanced forms of life possess, but I think these aspects are outgrowths of simpler intelligence rather than something unique to life that could not be achieved with computers.Devans99

    I don’t disagree with this, btw. But again, for me, only if computers were to somehow become living. ... Well, a correction: I don't disagree with the contents of the quote save that intelligence is unique to life.
  • Wayfarer
    22.5k
    I am learning coding as my occupation (tech writer) nowadays requires it. Whereas 10 years ago programmers coded and tech writers wrote, nowadays the lines have become blurred, mainly because of the great success of GUIs pioneered by Jobs, which makes devices much easier to use. So you no longer have to write procedural instructions for a lot of software (how to do...), but you do have to know how to code interactive web pages and learning resources and how to convey high-level concepts along with procedural info. And that takes knowledge of HTML, CSS, JavaScript and so on - granted, not full-on programming languages but a lot of code to master. I agree, it does teach you how to think - in some senses. But if, for example, you were a sociopath, you could be an effective programmer, and remain sociopathic. :wink:


    Descartes looks out of his bathroom window and wonders if a passerby is an automaton....Valentinus

    I often feel as if Descartes should have had some more able successors who could have really responded to later critics. But in any case, the net consequence of Cartesian dualism has been disastrous in many insidious ways. That is where I have learned a lot from neo-Thomist philosophy and hylomorphic (as distinct from Cartesian) dualism.

    Thought your ‘entropic/negentropic’ distinction was spot on. :up:
  • Devans99
    2.7k
    Interesting to see how far away we are from achieving machine intelligence. It apparently takes 128 computers 10 hours to simulate a single cell division:

    https://www.theatlantic.com/technology/archive/2012/07/to-model-the-simplest-microbe-in-the-world-you-need-128-computers/260198/

    So we cannot model even a single cell in real time. Computer technology is very immature compared to life: 100 years vs. 4 billion years of development.

    This seems to be the very crux of the disagreement. I could phrase it in terms of there being a pivotal difference between a) pulling the plug on a very complex machine and b) pulling the plug on some living being who’s on life support. It’s not the same thing.javra

    If the machine were conscious, though, it would be immoral either way.

    Well, a correction: I don't disagree with the contents of the quote save that intelligence is unique to life.javra

    Would you class a virus as intelligent? What about a single-celled organism? What I'm getting at is: is there a point where a machine (biological or otherwise) becomes intelligent? All life evolved from inanimate matter, and inanimate matter is not intelligent. Early forms of life (pre-single-cell creatures) must have been simple machines without DNA or RNA. Would they qualify as intelligent? At what point of complexity of matter does intelligence first manifest?
  • javra
    2.6k
    This seems to be the very crux of the disagreement. I could phrase it in terms of there being a pivotal difference between a) pulling the plug on a very complex machine and b) pulling the plug on some living being who’s on life support. It’s not the same thing. — javra


    If the machine were conscious, though, it would be immoral either way.
    Devans99

    I feel like we’re starting to go around in circles. The property of being conscious is one held by living systems. The point of the “pulling the plug” example being that humans (and other living beings) are something other than mere “complex machines”—mere complex machines not being living and thereby lacking the property of being conscious.

    Would you class a virus as intelligent? What about a single-celled organism? What I'm getting at is: is there a point where a machine (biological or otherwise) becomes intelligent? All life evolved from inanimate matter, and inanimate matter is not intelligent. Early forms of life (pre-single-cell creatures) must have been simple machines without DNA or RNA. Would they qualify as intelligent? At what point of complexity of matter does intelligence first manifest?Devans99

    As it happens, I’ve addressed much of this in my last reply to @Harry Hindu. Viruses are not living. Living things are autopoietic (roughly: self-sustaining). A bacterium, which is autopoietic, might be argued to hold some minuscule form of mind and intelligence, but not a virus (which is not autopoietic). Autopoiesis is a negentropic process. Otherwise, what would the distinction between life and non-life be? Or is there no distinction whatsoever?

    As to life being “machinery”, be it simple or complex, one can think of it this way: There are two forms of machines: living and non-living, which brings us back to square one: differentiating the properties of life (such as that of degree of intelligence) from those of non-life.

    Thought your ‘entropic/negentropic’ distinction was spot on.Wayfarer

    It’s good to know I’m not the only one. :smile: Thanks.

    I’ll likely be backing away from this discussion. Happy holidays and a good new year to everyone.
  • Jake
    1.4k
    I appear to have a reply that I made earlier deleted on me. I'm guessing that linking a youtube video is a violation of the ToS which wasn't my intention. Is my guess correct?Happenstance

    Youtube links are ok. But sometimes the spam filter of the forum kills posts that it shouldn't. If you describe the situation to the mods they can usually restore the falsely killed post.
  • Devans99
    2.7k
    OK I guess we just have to remain in disagreement on this one. Thanks for the discussion though.
  • Wayfarer
    22.5k
    All life evolved from inanimate matterDevans99

    That is not known, but assumed. I don’t think it is ever likely to be definitively proven but even so it is used to underwrite a whole set of attitudes to questions of the nature of life and mind.

    Really this view originated with the kind of cultural materialism that grew out of the Enlightenment. But it has now assumed the status of popular myth very much like the religious myths that it displaced; not in terms of content, but in terms of providing an apparent answer to these big questions.
  • SophistiCat
    2.2k
    If machines truly were sentientAnthony

    I am not sure why you wrote that string of remarks in response to my post. I didn't opine on whether machines were "truly sentient." All I said was that words have meanings that we give them. There is no law that says that the word "intelligent" can only imply "true sentience," forever and ever. This word has been used in other senses for a long time. Moreover, I don't know if there even was a time when the word exclusively referred to the totality of sentience, as you insist, rather than some aspect of it.

    Some words are just names for things well-known. Other words, terms-of-art, are invented words, or invented meanings for existing words, and the words themselves or meanings thereof really cannot be understood without already understanding the thing the word refers to. To refer to a machine as intelligent is the use of the word "intelligent" in just that latter sense.tim wood

    I think the word "intelligent" is widely used in a more general sense of exhibiting complex, responsive behavior well suited to a purpose, so that it should not even be considered a term of art.
  • SophistiCat
    2.2k
    When consciousness itself isn't entirely understood, in what way wouldn't it be prevaricating trying to assert a machine can be conscious?Anthony

    Substitute anything else for "consciousness" in the above sentence and you'll realize how absurd it is.

    When motion itself isn't entirely understood, in what way wouldn't it be prevaricating trying to assert that something can be moving?

    When kindness itself isn't entirely understood, in what way wouldn't it be prevaricating trying to assert that someone can be kind?
  • Devans99
    2.7k
    "All life evolved from inanimate matter
    — Devans99

    That is not known, but assumed. I don’t think it is ever likely to be definitively proven but even so it is used to underwrite a whole set of attitudes to questions of the nature of life and mind.
    Wayfarer

    The only realistic alternative is the panspermia hypothesis? And with that, life still came from inanimate matter. Even if we were designed, we are still made from inanimate matter.

    So I see no reason why a computer, also made from inanimate matter, should not eventually be as intelligent as or more intelligent than humans.
  • Wayfarer
    22.5k
    Even if we were designed, we are still made from inanimate matter.Devans99

    We don't know what matter is. The largest and most expensive machine in the history of the world has been designed to disentangle just that question, yet the more they look into it, the more questions there are.

    I am quite sympathetic to panspermia; I read Fred Hoyle and Chandra Wickramasinghe’s book on it decades ago. (Hoyle has died but the latter is still carrying the torch.)

    But as regards the question - computers are not intelligent at all. They’re not beings, they’re devices. But I know what you’ll say - what’s the difference? And it’s very hard to answer that question. The fact that it’s difficult, is the indication of a deep problem, in my view.
  • Devans99
    2.7k
    But as regards the question - computers are not intelligent at all. They’re not beings, they’re devices. But I know what you’ll say - what’s the difference? And it’s very hard to answer that question. The fact that it’s difficult, is the indication of a deep problem, in my view.Wayfarer

    Science has managed to synthesise DNA in the lab and replace the DNA of single-celled organisms with the synthetic DNA to produce a new type of single-celled organism. If science advanced to the point where all the cell components could be replaced by synthetic equivalents, would the resulting organism be alive as well as 100% synthetic?
  • Wayfarer
    22.5k
    Science has managed to synthesise DNA in the lab and replace the DNA of single-celled organisms with the synthetic DNA to produce a new type of single-celled organism.Devans99

    It hasn't done it 'de novo' i.e. starting from elements and building all the proteins. It's trying to reverse-engineer living cells. Science couldn't come close to manufacturing proteins and the like 'de novo', from the elements on the periodic table.

    Anyway, the best argument I can think of against any form of materialism is this. You cannot derive the fundamental laws which govern logic, abstract thought, and meaningful language on the basis of anything known to the physical sciences. Even HAVING physical sciences relies on abstraction, language, and logic. Every single step in any science is based on making logical inferences, 'this means that because of so and so', 'if X is this then Y must be...' and so on. That goes for physics also.

    Now the neo-Darwinian account of how this came to be is that humans evolved, and that therefore these abilities can be understood and described in the same terms as the abilities of other species. Intelligence is in this picture an adaptation, something that the species has exploited, the 'nimble human' outwitting their lumbering animal competitors because of the evolved hominid forebrain. I know the theory perfectly well - and no, I'm not about to spring an 'intelligent design' argument.

    What I'm saying is that understanding the nature of mind and intelligence through that perspective must inevitably be reductionist. The belief that we have an in-principle understanding of what intelligence is, based on those premisses, is, I believe, the misapplication of the theory of evolution. It is, after all, a biological theory about the evolution of species. It's not a 'theory of knowledge' as such, except insofar as knowledge can be understood in those terms (which it can to some extent through cognitive science, evolutionary psychology, etc.) But it is then used to underwrite the notion that somehow, science has an understanding of how we as biological organisms are able to exercise reason and thought. But really it does no such thing, and really there is no such theory. It's a result of evolution having occupied the role previously assigned to religion and philosophy. But I don't think it's up to the task, or at any rate, it is far from a completed theory.

    There are influential theorists who believe or promise such things, such as Francis Crick, who co-discovered DNA, a convinced materialist. But for every one on that side of the argument, there are plenty of dissidents. Materialism of the kind you're proposing is by no means established and in many respects it's on the wane. At any rate, it is nowhere near as signed and sealed as you seem to believe it is.

    If science advanced to the point where all the cell components could be replaced by synthetic equivalents, would the resulting organism be alive as well as 100% synthetic?Devans99

    Let me ask you this: do you think if you physically damage a computer, like, break it or hit it with a hammer, or throw it in the water, that it suffers? Would you feel empathy for it, 'oh, poor machine'?

    Consider it further: imagine if science did create an actual being, something that was self-aware - so no longer just 'something' but a being. What kind of predicament do you think that being would feel itself to be in? If it began to ask the question 'what am I? How did I come to be? What is my place in the world?'

    A few years back, there was some idle chatter about whether it might be possible to revive an extinct species of hominid, Jurassic Park-style, through biotech. Imagine, if that happened, the predicament that being would be in.

    They're the kinds of questions that really need to be asked about this subject.
  • Devans99
    2.7k
    Let me ask you this: do you think if you physically damage a computer, like, break it or hit it with a hammer, or throw it in the water, that it suffers?Wayfarer

    Animals and humans are driven purely by physical/emotional pain/pleasure. We seek to maximise pleasure and minimise pain. It would be interesting if we could give a computer a nervous system and the pain/pleasure drivers we have. As we saw in Blade Runner, the result might be computers that are indistinguishable from us.Devans99

    So I think that a computer could be designed so that it suffers when it's injured. Obviously it's a long way off in technology terms, but give a computer the same basic senses and drivers as a human and the result should be a humanlike computer.

    Consider it further: imagine if science did create an actual being, something that was self-aware - so no longer just 'something' but a being. What kind of predicament do you think that being would feel itself to be in? If it began to ask the question 'what am I? How did I come to be? What is my place in the world?'Wayfarer

    I think all highly intelligent beings would think in a similar manner. 'Great minds think alike' applies to computers, aliens, gods and humans all, I suspect. The synthetic being would think like us, I think. It would worry first about survival in this world. Once that was assured, it would sign up on a philosophy forum and start worrying about survival in the next world.
  • Jake
    1.4k
    But as regards the question - computers are not intelligent at all.Wayfarer

    The Turing test seems a reasonable standard. If a computer can perform some operation in a convincingly human manner, why not stop quibbling about dictionary definitions and just call it intelligent?
  • Harry Hindu
    5.1k
    Then there is a kind of knowing (distinct from knowledge, which must be recalled as memory) which spans across lifetimes. Intelligence = instinct in a preconscious way (do all individuals have an equal amount of instinct, and why or why not?).Anthony
    The "lower" animals are driven more by instinct (genetic knowledge) than learned knowledge. Humans, chimps, dolphins, elephants, have larger brains and instinct drives our behavior to a much lesser degree. Learned behaviors begin to take over as we develop because we have larger brains to store more information (memories).

    Memories are only memories if they can be recalled consciously. It's a stretch applying evolutionary psychology to AI. Our primal intelligence is quite different from AI, are we not admitting here? Generally, human intelligence is far more complex and cryptic than AI, which has always implied programming as knowledge issuing from the head of the learned programmer. There's no knowing involved in carrying out instructions. When soldiers lose communication with their orders, they run amok, unintelligent.Anthony
    No. You obviously didn't check out evolutionary psychology, so you just don't know what you're talking about here.
    "Evolutionary psychology adopts an understanding of the mind that is based on the computational theory of mind." — Wikipedia
    May I suggest Steven Pinker's book: How the Mind Works
  • Harry Hindu
    5.1k
    As it happens, I’ve addressed much of this in my last reply to Harry Hindu. Viruses are not living. Living things are autopoietic (roughly: self-sustaining). A bacterium, which is autopoietic, might be argued to hold some minuscule form of mind and intelligence, but not a virus (which is not autopoietic). Autopoiesis is a negentropic process. Otherwise, what would the distinction between life and non-life be? Or is there no distinction whatsoever?

    As to life being “machinery”, be it simple or complex, one can think of it this way: There are two forms of machines: living and non-living, which brings us back to square one: differentiating the properties of life (such as that of degree of intelligence) from those of non-life.
    javra
    I made the point that the solar system or the sun is a perfect self-sustaining balance between the outward push of the nuclear reactions and the inward pull of gravity, and has been like that for billions of years. You have now adjusted your claim (moved the goal posts) to say that living things are autopoietic and that is what makes them intelligent.

    The term autopoiesis refers to a system capable of reproducing and maintaining itself. — Wikipedia
    The sun maintains itself but doesn't reproduce itself. Is that what makes life intelligent and non-life not intelligent - our ability to have sex and have babies? Those are instinctive behaviors, not intelligent ones.
  • Harry Hindu
    5.1k
    I am learning coding as my occupation (tech writer) nowadays requires it. Whereas 10 years ago programmers coded and tech writers wrote, nowadays the lines have become blurred, mainly because of the great success of GUIs pioneered by Jobs, which makes devices much easier to use. So you no longer have to write procedural instructions for a lot of software (how to do...), but you do have to know how to code interactive web pages and learning resources and how to convey high-level concepts along with procedural info. And that takes knowledge of HTML, CSS, JavaScript and so on - granted, not full-on programming languages but a lot of code to master. I agree, it does teach you how to think - in some senses. But if, for example, you were a sociopath, you could be an effective programmer, and remain sociopathic. :wink:Wayfarer
    How do you think those GUIs are made? By higher-level procedural coding, like Java (which isn't related to JavaScript), Visual Basic, Python, and C++. I have experience with some of these. C++ is very complex, but it allows you to fine-tune the computer's behavior more than most other programming languages.
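
    Here is a toy example of the general idea, in Python/tkinter for brevity rather than the languages above: the GUI layer is itself just higher-level code, and input from the user directly changes the program's variables.

```python
import tkinter as tk

# Toy sketch: a one-button GUI. User input (clicks) mutates the
# program's state (the IntVar), which the display reflects.
root = tk.Tk()
root.title("Counter")

count = tk.IntVar(value=0)          # the variable user input changes

def on_click():
    count.set(count.get() + 1)      # each click updates the state

tk.Label(root, textvariable=count).pack()
tk.Button(root, text="Increment", command=on_click).pack()
root.mainloop()
```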

    Sociopaths would just learn how to manipulate other people more efficiently. Jobs didn't say that programming would change your world-view - just that it teaches you how to think more consistently.
  • Anthony
    197
    You obviously didn't check out evolutionary psychology, so you just don't know what you're talking about here.Harry Hindu
    If it involves the computational theory of mind, I'll have to choke it down. AI being anything like a mind, genetic determinism, and other such anthropogenic conceptual projections are what I try to consistently argue against. There should be some kind of dialectic involved, to be sure, so perhaps there will be fuel to be found in your reading recommendations.

    The idea that the cosmos has to do computations for anything to happen is completely ridiculous to me. Nature obeys homeostatic setpoints and servomechanisms, which are more like measuring what comes through biological channels than doing computations. Cybernetics is appealing; however, there is a difference between a homeostatic machine and nature; nature isn't a machine; cities are more like machines than the natural order treated by natural philosophy. Which is why it makes as much sense to look for a mind in natural cycles as in anything exclusively human. Whatever gave rise to us (abiogenesis, say) gave rise to all life on earth, so why would we not want to focus on a hierarchic echelon above humans and what humans have made, if it is the truth we are after? Perhaps abiogenesis would be the opposite of AI? Just raw primeval intelligence. A computer world would be made of metal and circuitry... which isn't what the world is made of. When you step outside your home into the original state of the natural order, you don't step on metal and silicon. Our species is attempting to establish its own truth over the only truth, its own time and space over the only time and space. Such attempts are doomed to fail.

    Take the insidious device that preceded much other technics: the mechanical clock. It has no feedback system, it measures nothing... it's basically in a runaway mode all the time. This completely non-homeostatic device is ruling our biological world. It has no biological negative feedbacks whatever as it ticks away. Yet most people feel the organization it affords is a good thing. Even to the extent it coordinates anthropocentric activity, it does so irrespective of their biology. What a golem we have at the center of all human relations.

    Excessive AI focus reminds me all too much of the religious dogma of Genesis: "When God created man, in the likeness of God made he him.... When man created AI, in the likeness of man made he it." The inflexible culture of anthropocentric, anthropogenic, narcissistic anthropolatry is definitely thick enough to cause delusional beliefs anent our place in nature, in the transmundane order. The ways in which religious dogma has shaped our world for the worse may be nothing compared to what AI myopia will do.

    Computations are the hammer and nail of computer scientists, who used to be called "computers." Human computers no doubt believe their machine computers are like them.