• VagabondSpectre
    1.9k
    The elimination of unfit members is natural selection in action. The species itself would die out if suicide was a general trait. My definition of life included persistence, so I have to disagree. Humanity as a whole is something that tends to persist. Humanity is an example of life. I also don't think there is intention involved, but you're free to apply that word to what a tulip does.noAxioms

    We're both playing fast and loose with the definition of life, so let's not forget that all I'm trying to do is compare and explore given examples of "life" to see what similarities there are, rather than to qualify individual examples of life from the starting definition.

    A human can be both alive and suicidal at the same time; they're not mutually exclusive, which is the minor point I tried to make.

    No. If it can perpetuate without procreation (just be sufficiently immortal), it can be life. Perhaps creation of competitors is not in its best interest. Procreation is just one way to achieve this, and it is a far more efficient way to speed evolution, so that method tends to get selected over the more evolution-resistant method of immortality. It is harder (but certainly not impossible) to make improvements to an individual than to a species.
    Yes, life tends to die. Something that is immortal needs a mechanism to ensure survival from major accidents, which are inevitable. There can be no single points of failure.
    noAxioms

    Here you're referencing life more as I view it. It's a whole, a thing, with continued existence. Procreation is an excellent way for these things to perpetuate their existence across generations, but procreation is an act of life rather than its defining characteristic. A sterilized gerbil will never reproduce, but it most certainly can be considered alive.

    They do record data readily. How else do we know the long term history of the planet? Ask the rocks. The information is stored nowhere else it seems. Their lack of USB port to download the information just means you need to learn their language if you want them to talk to you.noAxioms

    It has to do with the way the data is organized. The way data in the human brain is organized itself facilitates the mechanical extrapolation and development of consciousness. The way data contained in DNA is organized within the nucleus of a cell is what itself provides mechanical intelligent instruction to the rest of the cell.

    Data leftover from the earth's geological history is like parts of a story scattered and mixed about. It's there to be remarked upon by discerning minds, but it doesn't do anything otherwise. Data coiled into a strand of DNA and within a cell functions as an instructive data set for a greater machine with the uncanny ability to anticipate its environment; very different from arbitrarily layered rock. There's more data in the universe than we will ever collect, but what's notable is that we collect data and assimilate it into a peculiar kind of organizational structure which we then continuously employ and interact with.

    We have not defined life. Banno says fire meets the requirement, and since 'unnatural' was found to not belong in my definition, I think fire is life, just a very trivial form. So there's the example of one not complex, and that lack of complexity is why most don't consider it life.noAxioms

    I'm not asking you to define life, I'm asking you to give me an example of anything which could plausibly be agreed to as life which also happens to be uncomplicated. I don't have a good answer as to why life needs to be complex, it just is. Maybe because simple things never do anything intelligent. I don't know, the answer is complex.

    Which is why I'm happy to lay the statement's head in the guillotine and agree to pull the rope as soon as I can find even a single example of life which defies the description of "complex".

    Fire is not life; it's a chemical reaction we call combustion. It doesn't anticipate its environment, it consumes it as fuel. It doesn't display intelligence or behave in a manner conducive to its survival. It chaotically consumes what is available to it and then is extinguished in a predictable manner.

    If you really want to hinge your definition of life on "natural vs unnatural", go ahead, but try actually defining those two terms coherently and you'll see what I mean. The risk of equivocation is ever-present with the term "unnatural".

    If you don't agree, I think the claim of a requirement for a certain level of complexity needs to be defended. Fire doesn't seem to partake in natural selection, but nobody has listed that as a requirement. "Sufficient complexity to support natural selection"? That would add the need for data, which your definition had, and mine did not, and which fire seems not to have.noAxioms

    I would be very interested in associating natural selection with data recording (the relationship between the two notably giving rise to "complexity"), just so long as I can reconcile the idea with human and AI consciousness. Toward that end, perhaps this loop is adequate to describe the way DNA and both human and AI intelligence learn:

    [image: diagram of the proposed learning loop]

    I don't like the word 'intent'. I think bacteria intends to persist no more than does fire.noAxioms

    Here are some of the intelligent behaviors of single-celled life, pulled from Wikipedia:


    • The formation of biofilms requires a joint decision by the whole colony.
    • Under nutritional stress bacterial colonies can organise themselves in such a way as to maximise nutrient availability.
    • Bacteria reorganise themselves under antibiotic stress.
    • Bacteria can swap genes (such as genes coding antibiotic resistance) between members of mixed species colonies.
    • Individual cells of myxobacteria and cellular slime moulds coordinate to produce complex structures or move as multicellular entities.
    • Populations of bacteria use quorum sensing to judge their own densities and change their behaviors accordingly. This occurs in the formation of biofilms, infectious disease processes, and the light organs of bobtail squid.
    • For any bacterium to enter a host's cell, the cell must display receptors to which bacteria can adhere and be able to enter the cell. Some strains of E. coli are able to internalize themselves into a host's cell even without the presence of specific receptors as they bring their own receptor to which they then attach and enter the cell.
    • Under harsh circumstances, some bacteria transform into endospores to resist heat and dehydration.
    • A huge array of microorganisms have the ability to overcome being recognized by the immune system as they change their surface antigens so that any defense mechanisms directed against previously present antigens are now useless with the newly expressed ones.

    The panda is sufficiently perfected for its niche that adaptability is all but gone. It cannot transition faster than its environment is changing, and will likely only stick around in captivity as do so many other sufficiently cute creatures. Possibly not, since they don't seem to thrive well in captivity. A bird of paradise has the same problem.noAxioms

    Many creatures go extinct instead of successfully reproducing long enough to change into something else (when pressures force change). Gene pools learn primarily or solely through trial and error, it seems. When something works, variations upon it are then tested (reproduction); when something doesn't work, no variations upon it get tested by default (extinction).
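
    A toy sketch of that loop (illustrative Python only; the "fitness" function and mutation scheme are invented for the example):

    import random

    def evolve(fitness, genome_len=20, pop_size=50, generations=100):
        """Trial-and-error learning: variations on what works get tested
        (reproduction); what fails is never varied again (extinction)."""
        pop = [[random.randint(0, 1) for _ in range(genome_len)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[:pop_size // 2]              # the rest go extinct
            offspring = [mutate(random.choice(survivors))
                         for _ in range(pop_size - len(survivors))]
            pop = survivors + offspring
        return max(pop, key=fitness)

    def mutate(genome, rate=0.05):
        return [1 - g if random.random() < rate else g for g in genome]

    # An arbitrary stand-in "environment" that rewards all-1 genomes:
    best = evolve(fitness=sum)
    print(sum(best), "of", len(best), "loci fixed by selection")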

    OK. Is a computer virus an organism? Are there really 'parts' to it? I guess there are, just like there are parts to DNA that serve different functions.
    The only difference between a computer virus and a biological one is that the former is known to be an intelligently designed thing. That suggests that biological primitives might be as well. Biology seems to have a better than even chance of having fallen here from the cosmos rather than having originated here. If the former, perhaps it was engineered by (as opposed to evolved from) some non-biological predecessor, but then that just defers the origin question further back, asking how those predecessors came to be. Somewhere, something had to happen just by chance, given non-deistic assumptions. Even the ID community has backed off on the life thing. The teleological argument now puts the tunings of our universe at a far lower probability than the odds of life appearing naturally.
    noAxioms

    There are decidedly more differences between a biological virus and a computer virus. By comparison the computer virus is simple and easily understood. Some of them might take a team of computer scientists to decipher (and therefore a team of them to make), but their behavior is predictable and within finite bounds. Biological viruses, on the other hand, already display microbial intelligence beyond the complexity of any computer virus that could yet exist. A sophisticated enough computer virus, however, sure; but our computers would need to be much more sophisticated to provide the necessary environment for it to thrive (it would need lots of computing power and space).

    I don't think that anything suggests intelligent design though, even the fine tuning of our universe.
  • apokrisis
    7.3k
    I'm not asking you to define life, I'm asking you to give me an example of anything which could plausibly be agreed to as life which also happens to be uncomplicated. I don't have a good answer as to why life needs to be complex, it just is. Maybe because simple things never do anything intelligent. I don't know, the answer is complex.VagabondSpectre

    A biologist would define life semiotically. That is, a line is crossed when something physical, like a molecule, can function as something informational, like a message.

    At the simplest level, that is a bit of mechanism like a switch. A recipe read off a strand of DNA gets made into a molecular message that then changes the conformation of a protein complex and leads to some chemical reaction taking place.

    Of course, we then want to think of life in terms of individual organisms - systems that are closed circuits of signals. There has to be some little sealed world of messages being exchanged that thus gives us a sense of there being a properly located state of purpose. An organism is a collection of semiotic machinery talking to itself in a way that makes for a definite inside and outside.

    So what that says is even the simplest semiotics already has to make that leap to a holistic complexity. It becomes definitional of an organism that it has its particular own purpose, and thus it individuates itself in meaningful fashion from the wider world. A network of switches is the minimal qualification. And that is why a virus seems troubling. We can't really talk about it as an "it" because it is not self-sustaining in that minimal fashion. It is a bare message that hijacks other machinery.

    Computers then fail the definition for life to the degree that they are not organismic. Do they have their own purpose for being - one individuated from their makers? Do they regulate their own physics through their messages? (Clearly not with a normal computer which is designed so the software lives in a different world to its hardware.)

    So the semiotic or organismic view defines life and mind fairly clearly by that boundary - the moment a molecule becomes a message. But for that to happen, a messaging network with some closed local purpose must already be in place. To be a sign implies there is some existent habit of interpretation. So already there is irreducible complexity.

    This can seem a troubling chicken and egg situation. On the other hand, it does allow one to treat life as mind-like and intelligent or anticipatory down to the most primitive level. When biologists talk about information processing, they really do mean something that is purposeful and meaningful to an organism itself.
  • A Christian Philosophy
    1.1k

    'Tall' can still be used as a relation, because saying "Y is tall relative to X" is the same as saying "Y is taller than X". Thus 'tall' and 'taller' have very much the same essence in this case. Also, 'tall' should always be relative to X if we want to say something that is objective and accurate.

    Now I agree that in an everyday conversation, people may say "He is tall" (with no relation). In which case, this is more an expression of the subject than a description of the object, and there are indeed fuzzy boundaries to the word. It follows that not all words have essences; but some do, as is the case for "taller", or "tall" when saying "tall relative to X".
  • noAxioms
    1.5k
    Also, 'tall' should always be relative to X if we want to say something that is objective and accurate.

    Now I agree that in an everyday conversation, people may say "He is tall" (with no relation).
    Samuel Lacrampe
    There must always be a relation to an X, and there is no objective X. It seems always contextual. In everyday conversation, "He is tall" references a context-dependent X. The relation is there, else the statement is meaningless. The X is indeed probably fuzzy, making it also partly a function of opinion, but my point is that there is always an X, and X is not objective.

    I am considered tall (taller than probably 80% of all humans, so there's one plausible X: a certain unstated height percentile over some implied reference class), yet the pin-oak in my yard is twice my height and is not tall at all. Different context, so different X, both of them fuzzy in this case. I can't think of a non-fuzzy case where X is not explicitly stated.
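
    To put that concretely, a throwaway sketch (Python; the reference heights and the 80% cutoff are invented for illustration):

    def is_tall(height, reference_heights, percentile=0.8):
        """'Tall' only means something relative to a context-dependent X:
        here X is an (arbitrary) height percentile over a reference class."""
        ranked = sorted(reference_heights)
        cutoff = ranked[int(percentile * (len(ranked) - 1))]
        return height > cutoff

    humans_cm = [150, 160, 165, 170, 172, 175, 178, 180, 185, 190]  # made up
    print(is_tall(183, humans_cm))            # True relative to these humans
    oaks_cm = [h * 10 for h in humans_cm]     # trees supply a different X
    print(is_tall(183 * 2, oaks_cm))          # twice my height, yet not tall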
  • noAxioms
    1.5k
    And that is why a virus seems troubling. We can't really talk about it as an "it" because it is not self-sustaining in that minimal fashion. It is a bare message that hijacks other machinery.apokrisis
    I am also not self-sustaining, hijacking the machinery of plants to harvest solar energy. Nothing is completely self-contained, so I don't see the issue with viruses. They have the semiotics and sufficient machinery to live off of their environment, which is other cellular life.

    Likewise the computer virus seems to be life, living off the machinery, but not containing that machinery itself any more than I contain the machinery to photosynthesize. Fire on the other hand is not life. No semiotics that I can see.

    The biologists have a pretty good definition, and it applies to non-biological forms.
  • noAxioms
    1.5k
    A human can be both alive and suicidal at the same time; they're not mutually exclusive, which is the minor point I tried to make.VagabondSpectre
    Yes, and sterile as you point out. Defective examples of life are still life.

    It has to do with the way the data is organized. The way data in the human brain is organized itself facilitates the mechanical extrapolation and development of consciousness. The way data contained in DNA is organized within the nucleus of a cell is what itself provides mechanical intelligent instruction to the rest of the cell.
    I think 'intelligence' is about as fuzzy a term as 'life', 'unnatural', 'intent', or 'consciousness', and we should avoid such terms. Apo has the right term: semiotics is the difference between the data in DNA and the data in rocks.

    Fire is not life; it's a chemical reaction we call combustion. It doesn't anticipate its environment, it consumes it as fuel. It doesn't display intelligence or behave in a manner conducive to its survival. It chaotically consumes what is available to it and then is extinguished in a predictable manner.
    You just described humans. The difference seems again to be the semiotics. Yes, I agree that fire is not life.

    Bacteria can swap genes (such as genes coding antibiotic resistance) between members of mixed species colonies.
    This one is pretty cool, bordering on the benefit we get from sex.
    Much of your list shows that it isn't remarkable at all that multicellular life forms evolved.
  • apokrisis
    7.3k
    But what molecular machinery does a virus have? It has no ribosomes or mitochondria or any of the other gear to construct an organismic economy. It doesn't even have the genetic information to code for that machinery.

    So I am not too fussed about whether to define a virus as alive. It is OK that it is on the margins of the definition in being a genetic fragment that can hijack proper organismic complexity. Problems only arise in thinking that the simplicity of a virus might make it a stepping stone precursor that marks the evolutionary path from the abiotic to the biotic. I mean you wouldn't treat cancer as a simpler lifeform, or an amputated leg.

    Then you are self-sustaining in the "closed for causality" fashion I specified. You have your own respiratory machinery for burning or oxidising electron sources. You don't hijack the respiratory machinery of plants. You take that intricate living machinery and metabolically burn it. It's usually pretty dead by the time it gets into your stomach. A virus needs a living host. You just need chemical bonds you can crack for their energy.

    A computer virus is an analogy for a real virus, but computers - of the regular Turing Machine kind - are nothing like life. As I said, they lack the qualities that define an organism. And thinking in terms of organisms does usefully sharpen up what we - or biologists - mean by life.

    Life (like mind) still has echoes of a vitalistic ontology - the presence of some generic spirit that infects the flesh to make it reactive. Talking about organisms ensures that structural criteria - like being closed for causality in terms of embodying a purpose with efficient means - are top of mind. We are paying attention to the process of how it is done rather than treating life as some vague reactive matter.
  • VagabondSpectre
    1.9k


    I wouldn't say a regular computer is alive, but a (true) artificial intelligence that exists within the environment of digital infrastructure (with inputs/outputs) I think would qualify.

    I would be interested to hear your thoughts on the abstraction inherent in the semiotic approach versus reductionism as it applies to the study of complex systems. Here's what I mean (if it makes sense):

    I do hold that the general thrust of reductionism (the whole can be described by disassembling it down to its simpler parts) should apply to even the most complex forms of life, but since the computational requirements for doing so are well beyond the limits of the human mind, we're forced into partially superficial analytical approaches. When we think of computer data as being a 1 or a 0, that is an abstraction of what it really is: the physical states of sequentially positioned physical memory cells (an abstraction that loses no descriptive power, though, because there are only two states and their only behaviors are to switch or to be read in a query). It's impractical to build a program by thinking of the physical states of memory cells directly (just as it is impractical to write code in binary directly), and so we abstract basic chunks of binary code as functions or commands in a computer language, which becomes increasingly abstracted from what it really is: a massive network of two-state switches. The way we program is a compromise on perfection, because to program in the base language directly (and therefore with optimal efficiency) would involve too many interacting parts to consciously chart and navigate; a programming language is therefore like an artistic strategy for creating a function.
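
    For instance (a toy illustration, nothing more), the same addition can be written at the switch level or at the abstract level, and we only ever think in the latter:

    def add_via_switches(a, b):
        """Addition expressed directly in two-state-switch terms (bitwise
        operations). Valid for non-negative integers; illustration only."""
        while b:
            carry = a & b     # positions where both 'switches' are on
            a = a ^ b         # sum the bits, ignoring carries
            b = carry << 1    # shift the carries one position left
        return a

    print(add_via_switches(1234, 4321))   # 5555, laboriously
    print(1234 + 4321)                    # 5555, in the abstraction we use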

    This affliction seems rampant in our genetic science: the four base states of DNA (A, G, C, T) are beyond us as a comprehensible base unit of data, and so we're relegated to identifying "genetic markers" (very long ordered strings of the four bases) which we reckon are vaguely related to specific heritable traits like height and skin color (and everything else that is heritable and therefore the result of DNA). We see the 1s and 0s of DNA, but it's all gobbledygook until we zoom out and start identifying words whose direct physical effects we then mostly use statistics to guess at.

    For now, the art of cataloguing genetic markers and seeing what physical traits correlate with their prevalence in specific organisms is the best we can do to decode the data contained in DNA. It's analogous to a strategy (of prediction), just like coding in a higher language without understanding how the pre-packaged commands actually work.
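
    A crude sketch of that cataloguing strategy (the sequences, trait values, and the "marker" are all invented for the example):

    def mean(xs):
        return sum(xs) / len(xs)

    def marker_effect(genomes, traits, marker):
        """Statistically relate the presence of a marker string to a trait,
        without understanding what the marker physically does."""
        with_m = [t for g, t in zip(genomes, traits) if marker in g]
        without_m = [t for g, t in zip(genomes, traits) if marker not in g]
        return mean(with_m) - mean(without_m)

    genomes = ["ACGTGGA", "TTGGACG", "GGACCTA", "ACACACA"]  # toy 'DNA'
    heights = [185, 182, 178, 168]                          # toy trait
    print(marker_effect(genomes, heights, "GGA"))  # a guess, not a mechanism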

    This seems to be a feature of trying to study any complex system. Take chess strategy as an example: there is an objectively best chess strategy for any given situation, but actually discovering or knowing it analytically would be beyond the computational limits of all human-made computers and human brains combined. Instead of confronting this irreducible complexity with a reductionist analytical approach, chess players reduce it into stratagems which fall short of perfection.
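
    The same trade-off in miniature, using a trivially small game in place of chess (only loosely analogous to real engines):

    def search(stones, depth):
        """Take 1-3 stones; taking the last stone wins. A full search is the
        'objectively best strategy'; a depth cap forces a stratagem - a
        fallback on a rough heuristic guess."""
        if stones == 0:
            return -1          # the opponent took the last stone: a loss
        if depth == 0:
            return 0           # heuristic shrug: 'position unclear'
        return max(-search(stones - t, depth - 1)
                   for t in (1, 2, 3) if t <= stones)

    print(search(10, depth=100))  # perfect analysis settles it: +1 (a win)
    print(search(10, depth=2))    # the stratagem can only say 0 ('unclear')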

    What I know about machine learning also seems to reflect this reality. We're able to program machines capable of learning in a rudimentary sense, but the network of individual bits of data which emerges within that environment is too complex to be consciously understood on that most basic level (toward which the reductionist approach strives). It's only by approximating the behavior and flow of data within such networks as a whole that we can even begin to put together how these data bits actually interact and co-relate in meaningful ways.

    I'm very interested to get to the bottom of how anticipatory networks are structured, but I worry that at that depth the inherent complexity makes the bigger picture impossible to see.
  • Wayfarer
    22.8k
    Life (like mind) still has echoes of a vitalistic ontology - the presence of some generic spirit that infects the flesh to make it reactive. Talking about organisms ensures that structural criteria - like being closed for causality in terms of embodying a purpose with efficient means - are top of mind. We are paying attention to the process of how it is done rather than treating life as some vague reactive matter.apokrisis

    I would suggest that a great deal of the 'talking about organisms' that has been done up until recently, was 'deflationary' in respect to the idea that life itself couldn't be accounted for in purely mechanistic or physical terms. The attempt was to forever banish the ghost (geist, gist) from the machine - but it failed, because there really is an ontological distinction between living and non-living things. And that ontological distinction becomes, in practice, a form of dualism.
  • VagabondSpectre
    1.9k
    I think 'intelligence' is about as fuzzy a term as 'life', 'unnatural', 'intent', or 'consciousness', and we should avoid such terms. Apo has the right term: semiotics is the difference between the data in DNA and the data in rocks.noAxioms

    The fact that the data contained in DNA exists in an environment where it interacts within a complex network (and hierarchy of networks) of interactions which form a coherent organism, and that this data relates to the behavior and internal function of that organism and its environment, is what makes the data different; it's what makes it relevant to consider from the perspective of semiotics in the first place...

    I'm not wholly opposed to the use of a word like "unnatural", but it's only as meaningful as we define it; this word in particular, because some of its connotations are quite vague (including some morally normative connotations). "Not typically seen" is a fair enough start, but I do recommend opting for words like uncanny, singular, peculiar, interesting, or even "cool", if only because they're less easily equivocated with possible alternate meanings. If those are indeed the meanings you intended, then they're not entirely central/helpful anyhow, but they have their time and place.
  • apokrisis
    7.3k
    I think that the biophysical discoveries of the past 15 years - the new and very unexpected detail we have about the molecular machinery of cells - really explains how life and computation are deeply different.

    To sum that up, the reductionist view you just expressed hinges on the belief that the physics or hardware of the system is a collection of stable parts. Even if we are talking about circuits that can be switched, they stay in whatever state they were last left in. You can build up a hierarchy of complexity - such as the layers of microcode and instruction sets - because the hardware operates deterministically. It is fixed, which allows the software to flex. The hardware can support any programme without being the slightest bit bothered by anything the software is doing.

    But biology is different in that life depends on physical instability. Counter-intuitively, life seeks out physical processes that are critical, or what used to be called at the edge of chaos. So if you take any protein or cellular component (apart from DNA with its unusual inertness), as a molecule it will be always on the edge of falling apart ... and then reforming. It will disassociate and get back together. The chemical milieu is adjusted so that the structural components are poised on that unstable edge.

    And the big trick is that the cell can then use its genetic information to give the unstable chemistry just enough of a nudge so the parts rebuild themselves slightly more than they fall apart. This is the semiotic bit. Life is information that sends the signal to hang together. And it is the resulting flux of energy through the system - the dissipative flux - that keeps the componentry rebuilding.
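
    A cartoon of that nudge in code (probabilities made up, purely to show the logic of rebuilding slightly outpacing falling apart):

    import random

    def milieu(steps, signal, n=1000, p_fall=0.5, nudge=0.2):
        """Components constantly fall apart and reform; an informational
        signal gives rebuilding a small edge over disintegration."""
        intact = n // 2
        for _ in range(steps):
            p_rebuild = p_fall + (nudge if signal else 0.0)
            fallen = sum(random.random() < p_fall for _ in range(intact))
            rebuilt = sum(random.random() < p_rebuild
                          for _ in range(n - intact))
            intact = max(0, min(n, intact - fallen + rebuilt))
        return intact

    random.seed(1)
    print(milieu(200, signal=False))  # hovers near half: no net direction
    print(milieu(200, signal=True))   # sits well above: it 'hangs together'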

    So computers have stable hardware that the software can forget about and just crunch away. If you are equating the program with intelligent action, it is all happening in an entirely different world. That is why it needs biological creatures - us - to write the programmes and understand what they might be saying about the world. To the programmes, the world is immaterial. They never have to give a moment's thought to stopping the system of switches falling apart because they are not being fed by a flux of entropy.

    Life is then information in control of radical physical instability. That is what it thrives on - physics that needs to be pointed in a direction by a sign, the molecules that function as messages. It has to be that way as cellular components that were stable would not respond to the tiny nudges that signals can deliver.

    This leads into the other counter-intuitive aspect of life and mind - the desire for a general reduction in actual information in a living system.

    Again, with computation, more data, more detail, seems like a good thing. As you say, to model a physical process, the level of detail we need seems overwhelming. We feel handicapped because to get it right, we have to represent every atom, every event, every possibility. In principle, universal computation could do that, given infinite resources. So that is a comfort. But in practice, we worry that our representations are pretty sparse. So we can make machines that are somewhat alive, or somewhat intelligent. However to complete the job, we would have to keep adding who knows how many bits.

    The point is that computation creates the expectation that more is better. However when it comes to cellular control over falling apart componentry, semiotics means that the need is to reduce and simplify. The organism wants to be organised by the simplest system of signals possible. So information needs to be erased. Learning is all about forgetting - reducing what needs to be known to get things done to the simplest habits or automatic routines.

    This then connects to the third way biology is not like computation - and that is the way life and mind are forward modelling systems. Anticipatory in their processes. So a computer is input to output. Data arrives, gets crunched, and produces an output. But brains guess their input so as to be able to ignore what happens when it happens. That way anything surprising or novel is what will automatically pop out. In the same way, the genes are a memory that anticipates the world the organism will find itself in. Of course the genes only get it 99% right. Selection then acts to erase those individuals with faulty information. The variety is reduced so the gene pool gets better at anticipation.
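
    If one wanted a cartoon of that in code (a bare sketch of predict-then-be-surprised, not a model of any actual brain):

    def anticipate(stream, lr=0.3, threshold=2.0):
        """Guess the input before it arrives; only surprising deviations pop
        out. Learning is literally forgetting detail: one running guess
        stands in for the whole history."""
        prediction = 0.0
        for x in stream:
            error = x - prediction
            if abs(error) > threshold:
                print(f"novelty: {x} (expected ~{prediction:.2f})")
            prediction += lr * error    # nudge the internal model

    anticipate([0.0, 0.1, -0.1, 0.2, 5.0, 0.1, 0.0])  # only the 5.0 pops out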

    So life is unlike the reductionist notion of machinery in seeking out unstable componentry (as that gives a system of signals something to control). And at the "software" or informational level, the goal is to create the simplest possible control routines. Information needs to be erased so that signal can be distinguished from noise. It is just the same as when we draw maps. The simpler the better. Just a few lines and critical landmarks to stand for the complexity of the world.
  • A Christian Philosophy
    1.1k

    Sorry for the late response on this comment. It sounds like you have the same position as some of us in this post: that the simple life of a thing is nothing but the proper functioning of the thing's parts. It follows that a factory, a car engine, or even a fire has simple life. That is also my position for simple life, like simple cells.

    I think they may be able to do it, but I don't think they will be able to explain their results objectively, using only a material/objective level of description.Cavacava
    Why is that? If we are able to produce life from material (matter and energy) only, then life is made of material only. Nothing can be created out of nothing. Note I am not including here a human being, which may not only have a life, but also a soul.
  • Cavacava
    2.4k


    Hi, and thank you for your reply. Most of the information I've read suggests that the main components of life will probably have to come together all at the same time in an emergent formation. I think this means that there may be no set formula for achieving life, and probably the zillions of micro chemical/physical events will have to be just so for it to emerge. Scientists may be able to make it, but that is different from being able to explain how it ends up emerging. I speculate it might not be possible to explain.
  • VagabondSpectre
    1.9k
    I think that the biophysical discoveries of the past 15 years - the new and very unexpected detail we have about the molecular machinery of cells - really explains how life and computation are deeply different.

    To sum that up, the reductionist view you just expressed hinges on the belief that the physics or hardware of the system is a collection of stable parts. Even if we are talking about circuits that can be switched, they stay in whatever state they were last left in. You can build up a hierarchy of complexity - such as the layers of microcode and instruction sets - because the hardware operates deterministically. It is fixed, which allows the software to flex. The hardware can support any programme without being the slightest bit bothered by anything the software is doing.

    But biology is different in that life depends on physical instability. Counter-intuitively, life seeks out physical processes that are critical, or what used to be called at the edge of chaos. So if you take any protein or cellular component (apart from DNA with its unusual inertness), as a molecule it will be always on the edge of falling apart ... and then reforming. It will disassociate and get back together. The chemical milieu is adjusted so that the structural components are poised on that unstable edge.
    apokrisis

    The kind of computation to which I refer isn't just basic computation; "deep learning" is an example of the type of computation I would compare to life, because the organizational structure of its data points (a structure which emerges as the machine learns on its own) is complex enough to appear to operate non-deterministically.

    What exactly does it mean for a system to behave non-deterministically? The idea that the same event playing out twice could have more than one possible outcome seems to be the gist, and I do understand that a capacity for variance is necessary in the processes of life, but I have a hard time accepting what it means for the reductionist in me to assent to that description. I do understand the non-linearity of development in complex and chaotic systems. Events may still be pre-determined, but they may not be predicted in advance, because each sequential material state in the system contains irreducible complexity, so it must be played out or simulated to actually see what happens (like solving an overly large equation piece by piece because it cannot be simplified).
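
    The logistic map is the stock textbook illustration of this: fully deterministic, yet nothing short of playing it out tells you where it ends up:

    def step(x, r=4.0):
        return r * x * (1 - x)        # one deterministic update rule

    x1, x2 = 0.200000, 0.200001       # all-but-identical initial states
    for _ in range(40):
        x1, x2 = step(x1), step(x2)
    print(abs(x1 - x2))  # the tiny difference has typically blown up to
                         # order 1: pre-determined, yet unpredictable
                         # without simulating every step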

    Because so many parts of the system are poised to change, it has extraordinary sensitivity, and so its range of possible outcomes from its initial states is too vast to consciously reckon. The chaos and instability of simulated neural networks seem to achieve this.


    So computers have stable hardware that the software can forget about and just crunch away. If you are equating the program with intelligent action, it is all happening in an entirely different world. That is why it needs biological creatures - us - to write the programmes and understand what they might be saying about the world. To the programmes, the world is immaterial. They never have to give a moment's thought to stopping the system of switches falling apart because they are not being fed by a flux of entropy.

    Life is then information in control of radical physical instability. That is what it thrives on - physics that needs to be pointed in a direction by a sign, the molecules that function as messages. It has to be that way as cellular components that were stable would not respond to the tiny nudges that signals can deliver.
    apokrisis

    In a way, advanced machine learning such as "deep learning" simulates information in control of itself to a high degree. It's possible that we will indeed come up with the design for a learning machine which can become smart enough to take physical control over its own existence (if we give it sufficient apparatus to do so). Analogies for physical instabilities, controlled by signals instead of chemical nudges, exist readily in the form of the emergent interconnected complexity in the physical network of memory cells (complex/dynamic memory cells themselves being composed of groups of two-state switches), and in the complex ramifications that different states have upon various forms of output.

    Again, with computation, more data, more detail, seems like a good thing. As you say, to model a physical process, the level of detail we need seems overwhelming. We feel handicapped because to get it right, we have to represent every atom, every event, every possibility. In principle, universal computation could do that, given infinite resources. So that is a comfort. But in practice, we worry that our representations are pretty sparse. So we can make machines that are somewhat alive, or somewhat intelligent. However to complete the job, we would have to keep adding who knows how many bits.

    The point is that computation creates the expectation that more is better. However when it comes to cellular control over falling apart componentry, semiotics means that the need is to reduce and simplify. The organism wants to be organised by the simplest system of signals possible. So information needs to be erased. Learning is all about forgetting - reducing what needs to be known to get things done to the simplest habits or automatic routines.
    apokrisis

    With calculation and simulation, precision and accuracy (more data) are definitely important, but when it comes to the ergonomics of computer code, it's generally understood that less is more. We try to refine data structures and algorithms to where they achieve better results in shorter run-times. The struggle to write the fastest/most-robust/most-secure program is a battle with complexity, because there are so many different ways to achieve the same end result (hence the plethora of coding strategies). A smaller and simpler piece of code which performs the same function as a larger, complicated one is seen as more powerful and more desirable in every way.
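
    A trivial example of that "less is more" ethos (toy code; both versions compute the same thing):

    def total_of_evens_verbose(numbers):
        """Many moving parts for a simple job."""
        evens = []
        for n in numbers:
            if n % 2 == 0:
                evens.append(n)
        total = 0
        for e in evens:
            total += e
        return total

    def total_of_evens(numbers):
        """Same function, fewer parts: easier to verify, harder to break."""
        return sum(n for n in numbers if n % 2 == 0)

    assert total_of_evens_verbose([1, 2, 3, 4]) == total_of_evens([1, 2, 3, 4]) == 6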

    When it comes to advanced machine learning, the way finite numbers of memory cells are linked together (and their content) causes the anticipatory strength of the overall AI to grow, because the organization of the data itself grows in complexity, rather than a traditional bifurcating decision tree merely growing in size. In this way learning machines do actually simplify and intelligently organize data rather than just collect more of it.

    This then connects to the third way biology is not like computation - and that is the way life and mind are forward modelling systems. Anticipatory in their processes. So a computer is input to output. Data arrives, gets crunched, and produces an output. But brains guess their input so as to be able to ignore what happens when it happens. That way anything surprising or novel is what will automatically pop out. In the same way, the genes are a memory that anticipates the world the organism will find itself in. Of course the genes only get it 99% right. Selection then acts to erase those individuals with faulty information. The variety is reduced so the gene pool gets better at anticipation.apokrisis

    So far we've only been able to design learning machines which can learn to perform very specific tasks, but the results of their learning make them better at those specific tasks than humans ever could be. They lack certain functions and apparatus that would be required for them to perceive the world around them as an on-going input and react to it as life would (try to survive and thrive mostly) but I think we will eventually get there.

    If we built command and control pathways into machines for a sufficiently sophisticated learning AI (one capable of broader learning), giving it the ability to perceive, manipulate, and learn from the external environment directly, then hypothetically it could learn to become concerned with actually maintaining and expanding its own existence in every detail.

    So life is unlike the reductionist notion of machinery in seeking out unstable componentry (as that gives a system of signals something to control). And at the "software" or informational level, the goal is to create the simplest possible control routines. Information needs to be erased so that signal can be distinguished from noise. It is just the same as when we draw maps. The simpler the better. Just a few lines and critical landmarks to stand for the complexity of the world.apokrisis

    A hydraulic piston in a way represents instability; a slight addition of hydraulic fluid expresses itself as a drastic change in the overall mechanism. Since there are only two states, this instability is easy for us to understand and make use of. We don't typically build machines out of highly dynamic parts (ones with many internal states and many effects/behaviors) because we're unable to design coherent functions around such complicating variables.

    Machines which we build using mostly two-state parts with well defined effects are extraordinarily simple compared to those which seem to emerge on their own (using dynamic parts such as inter-connected memory cells with many states, or strings of pairs of molecules which exhibit many different behaviors depending on their order). Even while I recognize the limits on comprehending such machines using a reductionist approach, I cannot help but assume these limitations are primarily owing to the limited strength of the human mind.
  • Banno
    25.3k
    It might not amount to much, but it seems everything we want to label "life" does employ recorded data in some form as a necessary part of its ability to self-organize, anticipate, and successfully navigate its environment.VagabondSpectre

    Snowflakes and other crystals build on their "recorded" shape. It's not a bad definition, but I don't think it gets there.

    And that's the point I want to make; that when someone provides us with a definition we go through a process of verifying it; but what is it that we are verifying it against? We presume to be able to say if the definition is right or wrong; against what are we comparing it? Not against some other earlier definition, but against our common usage.

    And if we already accept that this common usage is the test of our definition, why bother with the definition at all?
  • Banno
    25.3k
    More than just a sliding scale; the scale needs to be extendable as well, and not just in one dimension.

    Edit: Indeed, you seem to accept this in your next post.
  • Banno
    25.3k
    An individual's height varies by about a centimetre each day. One is taller in the morning than in the evening, and the compression of the spine changes with the activities one performs.

    So a height can only be given within a certain boundary; as with any measure it is subject to error. So A may be 1770.1 mm to within 5 mm, while B is 1770.0 mm to within 5 mm.

    Now who is taller? Does that question even make sense?
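
    Put mechanically (a toy sketch; the 5 mm error figure is from above):

    def taller(a_mm, b_mm, err_mm=5.0):
        """Compare two measured heights while honouring measurement error."""
        if a_mm - err_mm > b_mm + err_mm:
            return "A"
        if b_mm - err_mm > a_mm + err_mm:
            return "B"
        return None   # the intervals overlap: no determinate answer

    print(taller(1770.1, 1770.0))  # None - the question outruns the data
    print(taller(1810.0, 1770.0))  # 'A' - here the data does decide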
  • apokrisis
    7.3k
    The kind of computation to which I refer isn't just basic computation; "deep learning" is an example of the type of computation I would compare to life, because the organizational structure of its data points (a structure which emerges as the machine learns on its own) is complex enough to appear to operate non-deterministically.VagabondSpectre

    So my argument is that essential to a semiotic definition of life is that it is information which seeks out material instability. It needs chemical structure poised on a knife edge, as that is what then allows the information to act as the determining influence. That is the trick. Information can be the immaterial part of an organism that gives the hardware just enough of a material nudge to tip it in the desired directions.

    So yes, neural computer architectures try to simulate that. They apply some universal learning algorithm to a data set. With reinforcement, they can erase response variety and arrive at the shortest path to achieve some goal - like win a computer game. There is something life-like there.
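
    As a cartoon of that variety-erasing dynamic (a toy bandit learner, nothing like DeepMind's actual architecture):

    import random

    def reinforce(payoffs, episodes=2000, explore=0.1):
        """Epsilon-greedy reinforcement: early response variety is gradually
        erased as estimates converge, and play collapses onto one action."""
        values = [0.0] * len(payoffs)
        counts = [0] * len(payoffs)
        for _ in range(episodes):
            if random.random() < explore:
                a = random.randrange(len(payoffs))   # residual variety
            else:
                a = max(range(len(payoffs)), key=values.__getitem__)
            reward = payoffs[a] + random.gauss(0.0, 0.1)  # noisy world
            counts[a] += 1
            values[a] += (reward - values[a]) / counts[a]
        return counts

    random.seed(0)
    print(reinforce([0.1, 0.5, 0.9]))  # nearly all trials end on action 2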

    But note that you then think that to become more life-like would involve a scaling up - adding more information processing to head in the direction of becoming actually conscious or intelligent.

    I instead would be looking to scale down.

    Your DeepMind is still a simulation running on stable hardware and thus merely software insulated from the real world of entropic material processes. Sure, we can imagine the simulation being coupled to the world by some system of actuators or mechanical linkages. The program could output a signal - like "fire the missile". That could flick a switch that triggers the action. But it is essential that the hardware doing this job is utterly deterministic and not at all unstable. Who wants nukes hooked up to wobbly switches?

    So while DeepMind might build a simulation of a learning system that feeds off the elimination of variety - and thus has to deal with its own artificial instability, the catastrophic forgetting problem - it still depends on deterministic devices outside its control to interface with the world. A different bunch of engineers is responsible for fabricating the reliable actuators that can take an output and turn it into the utterly reliable trip of the switch. I mean it makes no difference to the DeepMind computation whether anything actually happens after it has output its signal. A physical malfunction of the switch is not its responsibility, as some bunch of humans built that part of the total system. DeepMind hasn't got the wits to fix hardware-level faults.

    But for life/mind, the organism is sensitive to its grounding materiality all the way down to the quasi-classical nanoscale. At the level of synapses and dendrites, it is organic. The equilibrium balance between structural breaking down vs structural re-constructing is a dynamic being influenced by the global state of the system. If I pay attention to a dancing dot on a screen, molecular-level stuff is getting tipped in one direction or another. The switch itself is alive and constantly having to be remade, and thus constantly also in a state of anticipatory learning. The shape some membrane or cytoskeletal organisation was in a moment ago is either going to continue to be pretty much still right or competitively selected for a change.

    So my argument is that you are looking in the wrong direction for seeking a convergence of the artificial with the real. Yes, more computational resources would be necessary to start to match the informational complexity of brains. But that isn't what convergence looks like. Instead, the technology has to be pushed in the other direction - down to the level where any reliance on outside help for hardware stability has been squeezed out of the picture and replaced by an organismic self-reliance in directing the transient material flows on which life - as dissipative structure - depends.

    Life and mind must be able to live in the world as information regulating material instability for some remembered purpose. It has to be able to stand on its own two feet entirely to qualify as life (as I said about a virus).

    But that is not to say that DeepMind and neural network architectures aren't a significant advance as technology. Simulated minds could be very useful as devices we insert into tasks we want to automate. And perhaps you could argue that future AI will be a new form of life - one that starts at some higher level of semiosis where the entropic and material conditions are quite different in being engineered to be stable, rather than being foundationally unstable.

    So yes, there may be "life" beyond life if humans create the right hardware conditions by their arbitrary choice. But here I am concerned to make clear exactly what is involved in such a step.

    I do understand the non-linearity of development in complex and chaotic systems. Events may still be pre-determined, but they may not be predicted in advance, because each sequential material state in the system contains irreducible complexity, so it must be played out or simulated to actually see what happens (like solving an overly large equation piece by piece because it cannot be simplified).VagabondSpectre

    It still needs to be remembered that mathematical chaos is a model. So we shouldn't base metaphysical conclusions on a model without taking account of how the model radically simplifies the world - by removing, for instance, its instabilities or indeterminacies.

    So a reductionist takes a model that can construct "chaos" deterministically at face value. It does appear to capture much about how the world works ... so long as the view is grainy or placed at a sufficient distance in terms of dynamical scale. If you average, you can pretend that spontaneous fluctuations have been turned into some steady-state blur of action. So while analytic techniques fail (the real world is still a mess of chance or indeterminism), numeric techniques just take the assumed average and get on with the computation.

    So chaos modelling is about eliminating actual complexity - of the semiotic kind - and replacing it with mere complexity. The system in question is granted known boundary conditions, and some set of "typical" initial conditions is assumed. With the simulated world thus sealed at both ends, it becomes safe for calculation. All you need is enough hardware to run the simulation in the desired level of detail.

    Machines which we build using mostly two-state parts with well defined effects are extraordinarily simple compared to those which seem to emerge on their own (using dynamic parts such as inter-connected memory cells with many states, or strings of pairs of molecules which exhibit many different behaviors depending on their order). Even while I recognize the limits on comprehending such machines using a reductionist approach, I cannot help but assume these limitations are primarily owing to the limited strength of the human mind.VagabondSpectre

    This is in fact the big surprise from modern biophysics - at the ground level, life is far more a bunch of machinery than we ever expected. Fifty years ago, cells seemed like bags of chemical soup into which genes threw enzymes to make reactions go in desired directions. Now it is being discovered that there are troops of transport molecules that drag stuff about by walking them along cytoskeletal threads. Membranes are full of mechanical pumps. ATP - the universal energy source - is charged up by being cranked through a rotating mill.

    So in that sense, life is mechanism all the way down. It is far less some soup of chemistry than we expected. Every chemical reaction is informationally regulated.

    But the flip side of that is that this then means life is seeking out material instability at its foundational scale - as only the unstable could be thus regulated by informational mechanism.

    If you are at all interested, Peter Hoffman's Life's Ratchet is a brilliant read on the subject. Nick Lane has done a number of good books too.

    So there are two things here. You are talking about the modelling of informational-level complexity - the kind of intricate patterns that can be woven by some network of switches regulated by some set of rules. And there is a ton of fun mathematics that derives from that, from cellular automata and Ising models, to all the self-organising synchrony and neural network stuff. However that all depends on switches that are already behaving like switches - ie: they are deterministic and they don't add to the total complexity by "having a mind of their own".

    But I am talking about life and mind as a semiotic process where the hardware isn't deterministic. In fact, it mustn't be deterministic if that determinism is what the information processing side of the equation is hoping to supply.

    And where are our pretty software models to simulate that kind of world? Or rather, where are our actual "machines" that implement that semiotic notion as some actual device? In technological terms, we can do a fantastic amount of things at the software simulation level. But can we do anything life-like or mind-like at the self-assembling hardware actuality level?

    Hell no. It's only been about 10 years that biology has even begun to grasp that this is such an issue.
  • apokrisis
    7.3k
    And if we already accept that this common usage is the test of our definition, why bother with the definition at all?Banno

    Because obviously we call for a definition because we want to narrow that common usage in some useful fashion. We want to eliminate some sense of vagueness that is in play by taking a more formally constrained view. And that has to start by a reduction of information. We have to put our finger on what is most essential. Then we have some positive proposition that we can seek to verify (or at least fail to falsify via acts of measurement).

    If we accept common usage, then yes, no problem. The usage already works well enough. But common usage is always in question - at least for a scientist or philosopher who believes knowledge is not some static state of affairs but the limit of a shared community of inquiry.
  • Banno
    25.3k
    Sure; if it makes a difference.
  • Wayfarer
    22.8k
    Machines which we build using mostly two-state parts with well defined effects are extraordinarily simple compared to those which seem to emerge on their ownVagabondSpectre

    Isn't one of the glaringly obvious points, however, that all machines (and computers) are built artefacts? Every single one; there is no exception. So even where the mechanist and computational allegories for living systems stack up, there's absolutely no explanation for how such machines and computers could be spontaneously formed; whereas that seems an obvious characteristic of living systems, which not only spontaneously form, but also spontaneously develop to higher levels of complexity.
  • apokrisis
    7.3k
    Is a virus alive then?
  • Metaphysician Undercover
    13.2k
    And if we already accept that this common usage is the test of our definition, why bother with the definition at all?Banno

    There is no such thing as "common usage", you are making an unjustified generalization. I use a word the way I want to use it, you use it the way you want to use it, and each time one of us uses it, it is used in a different way from the last time.

    The "test" of a definition cannot be a referral to "common usage" because I will refer to usage which supports one definition, and you will refer to usage which supports another definition. What good will that do us? I will insist on using the word one way while you will insist on using it another. To be useful in any sort of logical proceeding, or argumentation, a definition must be based in agreement.

    "Common usage" is a just fiction. It implies that there are some sort of agreements, or conventions, which create this commonality, but none exist. Therefore the claim that there is such a thing as "common usage" is a misleading deception. There are no agreements behind "common usage", and any claim of convention is fictitious. So there is nothing to conform that usage and therefore no formal commonality. Reference to "common usage" is reference to a non-existent entity and utterly useless.

    If you want to say something useful, then bring up some examples of usage. That is real "common usage". But each example will be different from the last, so to produce any sort of generalization will require a synthesis of essential features from each of those instances of usage.
  • Wayfarer
    22.8k
    Is a virus alive then?apokrisis

    I think the appropriate question is, could viruses replicate sans life? As far as I know the answer to that is 'no'. They might exist, like the Andromeda Strain, floating around in interstellar gas clouds, but until they encounter a living, breathing host, they can't actually do anything. So whether they're alive can remain a moot point - they're dependent on living organisms.
  • apokrisis
    7.3k
    But that wasn't the point. The point was that you would need a definition that could decide such a question. Banno is arguing that standard usage of language is good enough.

    He said...

    We simply do not need to be able to present a definition of life in order to do biology.Banno

    But any biologist would tell him that is ridiculous. :)
  • Wayfarer
    22.8k
    Oh, I definitely agree with you there.
  • noAxioms
    1.5k
    Is a virus alive then? — apokrisis
    I differentiated the terms. I would have said the virus is life, but it is not alive since it has no functioning parts most of the time.
    So a computer virus would be life, but not necessarily alive. A full self-contained machine-organism responsible for all aspects of maintenance and persistence I guess would be alive. Suppose we dropped such entities on a planet without biology, and they lost their original task and just evolved from there. They'd eventually evolve to wonder about their own origins and would consider it obvious that at some point a most basic form was a spontaneous accident.
    I think the appropriate question is, could viruses replicate sans life? As far as I know the answer to that is 'no'.Wayfarer
    Humans also cannot replicate sans life. We have much more of the machinery of replication built into us, and are 'alive' in the sense that we function in some entropic way. But I am just as dependent on the external machinery (or at least the byproducts of it) as the virus. Apo did point out a clear distinction between dependence on the machinery itself and dependence merely on its byproducts.
  • A Christian Philosophy
    1.1k

    I see. So scientists may agree on the direct cause of the creation of life, that is, the right ingredients at the right settings, but may differ on the explanation or cause of that cause. And the reason is the high improbability of all the conditions being just right. Is that correct?

    From this information, I see only two logical possibilities for the original cause:
    1. random event from nature, despite the improbability
    2. not-random event, that is, intelligent design
  • A Christian Philosophy
    1.1k

    The answer in this case is unclear, but the challenge lies in the data, not in the essence of the word 'tall'. Here is the proof: Remove the fuzziness from the data, say A is 10 m tall to within 5 mm, and B is 2 m tall to within 5 mm. Who is taller? Clearly A.

    Your case gives an unclear answer, and my case gives a clear answer. The only difference between the two cases is in the data, not in the essence of 'tall', because the same essence is referred to in both cases. Therefore the cause of the unclearness is in the data, not in the essence of 'tall'.
  • Cavacava
    2.4k


    Yes, that's my position, and I doubt life is a non-random event. I think life must be a potentiality of matter. To be alive, something must have a form that separates it from its environment, a way to reproduce itself, and an active metabolism... Spores, viruses, and whatever else lacks an active metabolism can only be said to be potentially alive, until they are revived; my opinion.