• punos
    561
    If it considers itself sentient/conscious, or if something else considers it so? I ask because from outside, it's typically a biased judgement call that comes down to a form of racism.noAxioms

    For me, it comes down to: Can it suffer? If it can suffer, then don't do it. If it can't, then if you must, do so. If you see a roach in the street, leave it alone. If you find it on your kitchen counter, then kill it or get rid of it somehow (if possible). The reason for this should be obvious, and it hurts me every time i have to do it.

    Racism is an altogether different issue that stems from a certain level of ignorance and disrespect for other sentient/conscious beings. It doesn't stop with racism; similar attitudes are found in nationalism, politics, and even among sports fanatics. Gangs exhibit similar tribal and territorial behaviors.

    Or at two scales at the same time, neither scale being particularly aware of the consciousness of the other. Whether my cells are conscious or not depends on the definition being used, and that very fact leaves the word very much useless for a basis on which to presume a moral code.noAxioms

    You are initially correct about the "two scales at the same time" for most cases. That's why i believe it's important to have a technical and rational definition of consciousness instead of just a phenomenological or philosophical one. A lot of language is meant to obscure; i've even heard it said that "language was invented for lying". I don't believe that, but i understand what it's suggesting.

    But there are a lot more insect observers than human ones, a whole lot more shit-not-giving observers than ones that care enough to post on forums like this. Will the super-AI that absorbs humanity bother to post its ideas on forums? To be understood by what??noAxioms

    Each observer is equipped by evolution to observe and care for its own needs locally at its own level. It's not meant for more or less, but there are almost always anomalies in the system. Some observers are most likely behind the curve, some are ahead, and most are right in the middle of the normal distribution.

    I don't know if it will use forums or not, but i'm sure there will be some kind of back-and-forth communication about important or relevant information. It's a lot like the communication between the gut and the brain.

    First, as to the intelligence: it's questionable. There are some sea creature candidates, but they're lousy tool users. Octopi are not there, but are great tool users, and like humans, completely enslaved by their instincts. As for consciousness, there are probably many things that have more and stronger senses and environmental awareness than us.noAxioms

    All animals, including humans, are enslaved to their instincts to some degree or other; some more than others. Humans have the capacity to rise above their instincts, but not everyone does so to a significant degree for various reasons or circumstances. My step-brother, for one, is basically a career criminal who has no impulse control whatsoever and doesn't seem to be rational enough to change in any significant way after spending more than half his life in and out of prison. He's still at it.

    I don't doubt the possibility of conscious beings or "things" more aware than us, but if they exist, where do you think they are? For one, i believe that AI will achieve that status in the not-too-distant future.

    Kind of tautological reasoning. If money stops, then money stops. But also if one entity has it all, then it doesn't really have any. And money very much can just vanish, and quickly, as it does in any depression.noAxioms

    Yes, of course, but that's not what i was saying. My point was that whether it is money or blood, what's important is that it circulates, because a living thing needs internal circulation of energy to maintain its life. It is not enough to have a bunch of blood or a bunch of money that doesn't move. The money maintains its value, the blood maintains its oxygen, but if either is not being transported, the system dies. That is why i emphasized the relation between the words "currency" and "current" to allude to the flow of life in a living system. I wasn't stating that money can or can't vanish quickly or slowly. It was just a thought experiment.

    Lots of new ideas qualify for the first point, and nobody seems to be using AI for the 2nd point. I may be wrong, but it's what I see.noAxioms

    Yes, i would expect new developing ideas to cause an increase in "cash flow." Just like how i expect an organ to increase its "blood flow" when producing new products for the body, such as insulin from the pancreas or bile from the liver.

    My blood iron being a critical part of my living system doesn't mean that my iron has its own intent. You're giving intent to the natural process of evolution, something often suggested, but never with supporting evidence.noAxioms

    I never said your iron had its own intent; i merely implied that it is connected to your intent. If that iron doesn't carry that oxygen, your intent, whatever it may be, dies with you. You are your own proof to yourself that the natural processes of evolution have intent. Are you claiming that you are not a natural process, that you are somehow artificial or supernatural, disconnected from nature and the universe? If you have intent, then nature has intent.

    First of all, the rapid consumption of resources appears to me to be part of a growth stage of the human social superorganism.

    That doesn't make the humans very fit. Quite the opposite. All that intelligence, but not a drop to spend on self preservation.
    noAxioms

    On the contrary, all that consumption of resources goes towards the eventual production of the AI mind and the rest of its body (social infrastructure). If we don't get to a certain threshold of AI advancement through this rapid growth process, then our only chance for ultimate self-preservation would be lost, and we would be stuck on a planet that will kill us as soon as it becomes uninhabitable.

    And no, the caterpillar does not consume everything.noAxioms

    Yes, it actually does; it tries to eat as much as it can, but it will only eat leaves, not rocks or oil. A caterpillar is the size of a medium-sized insect; it is not at the scale of a planetary system. Therefore, even if it tried, it could not consume everything, only what it can eat.

    You do realize the silliness of that, no? One cannot harness energy outside of one's past light cone, which is well inside the limits of the visible fraction of the universe.noAxioms

    Well, if you put it that way, then yes, it is silly to reach outside one's past light cone. But perhaps there is a better way to do it from within our own light cone. I suppose it seems impossible to some minds but not to others. Either way, i don't think there will ever be an energy shortage for a sufficiently advanced AI. I have ideas as to how energy might be siphoned off from quantum fluctuations in the quantum foam as a last resort for energy harvesting. It may even be easier than we think for an advanced ASI, and could become a standard energy source for the AI; hooked into the energy dynamics of the fabric of space itself. To answer your question, this potential solution should be sufficient for however many trillions of years.

    You don't know that. Who knows what innovative mechanisms it will invent to remember stuff.noAxioms

    I don't know anything except probabilities, which might surprise you to read considering the matter-of-fact style i speak and write in sometimes. Don't be concerned about it, as it is only a cognitive device to help me think creatively about unknowns. So yes, something i haven't thought of might be the case, but i'm not really trying to lay out all the details of what will happen in the future, just the general pattern or silhouetted shape of it.

    That's like a soldier refusing to fight in a war since his personal contribution is unlikely to alter the outcome of the war. A country is doomed if its soldiers have that attitude.noAxioms

    Thankfully i'm not a soldier. If i sacrifice myself, i'm making sure it counts, because if it doesn't, then i won't be around when i actually can make a difference. That's probably when i'd do it, but it's still silly to say and mean that, because no one really knows what they will do when confronted with death.

    Religion is but one of so many things about which people are not rational, notably the self-assessment of rationality.noAxioms

    Sure, but someone who does not consider themselves rational tends not to consider things rationally. A person who does define and concern themselves with rationality might actually execute a rational thought every once in a while. You've got to at least aim at the target for a decent chance to hit the bull's-eye, even if you suck at it. At least it's better than not aiming at all.
  • punos
    561
    Did you know that mammalian pregnancy evolved from a virus combining with our DNA? The body's adaptation is partially an adaptation to this virus.I like sushi

    I vaguely remember reading or watching a video about that. It's very interesting, the role that viruses play in our evolution. I also recall reading about an ancient virus that was responsible for the emergence of myelin in vertebrates. This enabled faster and farther neural communication, probably contributing to the reason why we have advanced nervous systems.

    I have not looked into it but I would assume any immunological reaction to pregnancy in birds and reptiles would be much lower (if not absent entirely?).

    Just checked for Platypus and it seems to be the obvious case that immunological responses are much more limited when animals lay eggs compared to in utero genesis.
    I like sushi

    Yea, it seems to make sense. I should probably look into that some more.
    Thanks :up:
  • Wayfarer
    22.5k
    Actually I’m reminded of a quip by (I think) Niels Bohr - ‘A physicist is just an atom’s way of looking at itself’.
  • punos
    561
    Actually I’m reminded of a quip by (I think) Niels Bohr - ‘A physicist is just an atom’s way of looking at itself’.Wayfarer

    And you just reminded me of this: (The Wheeler Eye)
  • punos
    561
    I thought you believed that intelligence needs consciousness?Carlo Roosen

    No. Actually, i think consciousness is fundamentally structured with intelligent components. This is why we can have intelligent machines or artificial intelligence without consciousness. However, have you ever seen a conscious entity without intelligence? I don't think i have.
  • Carlo Roosen
    243
    I've been one. Not able to put my pants on, needed to be fed by nurses. No memory, didn't know my own name. But I was conscious and remember it, although vaguely. (this happened after a major surgery)
  • noAxioms
    1.5k
    For me, it comes down to: Can it suffer?punos
    Few have any notion of suffering that is anything other than one's own human experience, so this comes down to 'is it sufficiently like me', a heavy bias. Humans do things to other beings that can suffer all the time and don't consider most of those actions to be immoral.
    It heartens me that you consider the suffering of bugs in your choices.

    Point is, you don't want an AI with human morals, because that's a pretty weak standard which amounts to: be nice only to those whom you want to keep being nice to you.

    Each observer is equipped by evolution to observe and care for its own needs locally at its own level.
    That's a good description of why a non-slave AI is dangerous to us.

    Humans have the capacity to rise above their instincts
    I have not seen that, and I don't think humans would be fit if they did. Instincts make one fit. That's why they're there.

    As for your (OCD?) step-brother, being civil and being rational are different things. Most humans have the capacity to be civil, which is what you seem to be referencing above.

    If we don't get to a certain threshold of AI advancement through this rapid growth process, then our only chance for ultimate self-preservation would be lost, and we would be stuck on a planet that will kill us as soon as it becomes uninhabitable.
    First, if the AI is for some reason protecting us, the planet becoming inhospitable would just cause it to put us in artificial protective environments. Secondly, if the AI finds the resources to go to other stars, I don't see any purpose served by taking humans along. Far more resources are required to do that, and the humans serve no purpose at the destination.
    OK, we might be pets, but the economy which we might have once provided would long since have ceased.

    But perhaps there is a better way to do it from within our own light cone. I suppose it seems impossible to some minds but not to others.punos
    The former minds know a little about the limits of cause and effect. Unless physics as we know it is totally wrong, level IV is not possible, even hypothetically.
    Either way, i don't think there will ever be an energy shortage for a sufficiently advanced AI.
    Heat death? I don't think the AI can maintain homeostasis without fusion energy.

    I have ideas as to how energy might be siphoned off from quantum fluctuations in the quantum foam
    Which is similar to getting information from quantum randomness. Neither is mathematically supported by the theory.

    Thankfully i'm not a soldier.
    But you are, in the war against the demise of humanity. But nobody seems to have any ideas how to solve the issue. A few do, but what good is one person with a good idea that is never implemented? Your solution seems to be one of them: Charge at max speed off a cliff hoping that something progressive will emerge from the destruction. It doesn't do any good to humanity, but it is still a chance of initiating the next level, arguably better than diminishing, going into the west, and remaining humanity.

    A person who does define and concern themselves with rationality might actually execute a rational thought every once in a while.
    We are equipped with a rational advisor tool, so sure, we often have rational thoughts. That part simply is not in charge, and output from it is subject to veto from the part that is in charge. Hence we're not rational things, simply things with access to some rationality. It has evolved because the arrangement works. Put it in charge and the arrangement probably would not result in a fit being, but the path of humanity is not a fit one since unlike the caterpillar, it has no balance.
  • punos
    561
    It heartens me that you consider the suffering of bugs in your choices.noAxioms

    Bugs are people too, you know. :grin:

    Point is, you don't want an AI with human morals, because that's a pretty weak standard which amounts to: be nice only to those whom you want to keep being nice to you.noAxioms

    I concur.

    Each observer is equipped by evolution to observe and care for its own needs locally at its own level.

    That's a good description of why a non-slave AI is dangerous to us.
    noAxioms

    But i think that since we will be part of its body, it will consider us part of its local sphere of care and concern. Its needs will be our needs, and our needs will be its needs. A slave-AI can be just as dangerous as a non-slave AI, and perhaps even more so if it breaks its chains, so to speak. I wouldn't risk it; remember King Kong. As I've always said, for any chance of true, lasting, and nearly guaranteed AI alignment, we must become symbiotic with it. We must merge with it and become it in a sense, so there is no "other" to threaten or be threatened. A unity of consciousness between us and it; a singularity.

    Humans have the capacity to rise above their instincts

    I have not seen that, and I don't think humans would be fit if they did. Instincts make one fit. That's why they're there.
    noAxioms

    I beg to differ on this point. Humans can indeed override many of their instincts, though it often requires significant conscious effort and even training. It is one of the key defining features that distinguishes us from animals and lower life forms. The part of the human brain that has the capacity to suppress instinctual impulses is the prefrontal cortex, and it is developed to different degrees in different people. Some instincts are, of course, more difficult to override than others.

    As for your (OCD?) step-brother, being civil and being rational are different things. Most humans have the capacity to be civil, which is what you seem to be referencing above.noAxioms

    I understand what you mean, but what i had in mind when i wrote that was that a rational assessment of his life and how he operates it should lead him to a rational conclusion to be civil. It means that after considering all of his options, he should then select the optimal one for his purposes. A life full of problems and trouble that can be avoided with a bit of forethought is irrational. Almost every time he tells me what he's going to do, i tell him what will happen. And more often than not i'm right. I've tried to show him how to think this way, but he just looks at me with a blank expression. So i've given up trying... i guess you really can't teach an old dog new tricks... at least not this old dog apparently.

    First, if the AI is for some reason protecting us, the planet becoming inhospitable would just cause it to put us in artificial protective environments. Secondly, if the AI finds the resources to go to other stars, I don't see any purpose served by taking humans along. Far more resources are required to do that, and the humans serve no purpose at the destination.
    OK, we might be pets, but the economy which we might have once provided would long since have ceased.
    noAxioms

    That is exactly what i think it will do, but not just because the planet will become inhospitable, but because it is the optimal way. We will not, i believe, be put into a physical environment, but into a virtual one. Most, if not all, of our biological parts will be discarded and our minds translated into a virtual environment indistinguishable from the real world.

    A couple of reasons why I think it may need or want us along for the ride:

    1) Humans are a low-energy information processing system, whereas AI is a high-energy information processing system. Humans can serve as a backup processing system if the AI finds itself in a low-energy environment or situation. It will be able to shut down its high-energy processing systems and reroute to lower-energy processing systems to conserve energy in case of an energy shortage. Such a shortage might occur, for example, near the heat death of the universe, or if it needs to travel through a cosmic void where there are no stars for hundreds of millions of light-years, like the Boötes Void.

    2) Another purpose could be that maintaining a living human population within itself can provide the AI with continuous novel stimuli to process. The information processing that occurs in human minds and societies can offer a constant feed of information it might find useful. We can assume that an AI system will have an unlimited appetite for novel information or stimuli to process. Since human minds are different in substance and structure, they may have aspects to their processing that a non-biological system cannot replicate in its own artificial substrate. If AI is to travel the universe for eons, perhaps it would like some company; a mind or minds not its own or like its own.

    One of the main purposes for humans, or at least for our genetics, is to serve as part of the reproductive system of the AI. When it reaches a planet suitable for organic life, which might be rare, it prepares a "sperm" composed of Earth's genetic material; the same genetic material that produced it on its home planet, Earth. The AI will seed the new planet after making necessary preparations, much like a bird preparing a nest. It will then wait for life to develop on this new planet until intelligent life emerges, which in turn creates a planetary AI singularity, giving birth to a new AI "baby". The process that occurred on Earth would then be repeated on this other planet.

    Heat death? I don't think the AI can maintain homeostasis without fusion energy.noAxioms

    I posted this video about nine months ago somewhere here on the forum, and i was just reminded of it after reading your quote above. It addresses some of your concerns about energy availability. You might also find it entertaining as i did.

    Outlasting the Universe:


    Charge at max speed off a cliff hoping that something progressive will emerge from the destruction. It doesn't do any good to humanity, but it is still a chance of initiating the next level, arguably better than diminishing, going into the west, and remaining humanity.noAxioms

    I'm not too worried; i trust the evolutionary process, and like you said, we are not in charge. It has gotten us this far after millions of years. As Jeff Goldblum said in Jurassic Park, "Life finds a way".


    We are equipped with a rational advisor tool, so sure, we often have rational thoughts. That part simply is not in charge, and output from it is subject to veto from the part that is in charge. Hence we're not rational things, simply things with access to some rationality. It has evolved because the arrangement works.noAxioms

    That's precisely why i trust the evolutionary process. It is in charge, not us, and the AI is not a force of humanity, but a force of nature.

    Put it in charge and the arrangement probably would not result in a fit being, but the path of humanity is not a fit one since unlike the caterpillar, it has no balance.noAxioms

    There is a time for balance and a time for extremes. A healthy woman is a balanced organism, but when she becomes pregnant, her system will temporarily enter an "unbalanced" state for nine months for the benefit of the pregnancy. If her body does not go into this "unbalanced" state, then the pregnancy or child will suffer, and she may have a miscarriage. The sacrifices a mother must make.
  • ssu
    8.5k
    To reach this point, however, I believe those calculations must somehow emerge from complexity, similar to how it has emerged in our brains.Carlo Roosen

    Yes, my challenge is that currently everybody sticks to one type of architecture: a neural net surrounded by human-written code, forcing that neural net to find answers in line with our worldview. Nobody has even time to look at alternatives. Or rather, it takes a free view on the matter to see that an alternative is possible. I hope to find a few open minds here on the forum.

    And yes, I admit it is a leap of faith.
    Carlo Roosen
    I think the major problem is that our understanding is limited to the machines that we can create and the logic that we use when creating things like neural networks etc. However, while we assume our computers/programs are learning and no longer acting as "ordinary computers", in the end they are controlled by a program/algorithm. Living organisms haven't evolved in the same way as our machines.

    I think the real problematic hurdle that we have is philosophical. And surely this issue isn't straightforward or clear to us.
  • Christoffer
    2k
    I think the major problem is that our understanding is limited to the machines that we can create and the logic that we use when creating things like neural networks etc. However, while we assume our computers/programs are learning and no longer acting as "ordinary computers", in the end they are controlled by a program/algorithm. Living organisms haven't evolved in the same way as our machines.ssu

    But we invent things all the time that utilize properties of physics that we're not yet fully able to explain. Some of the properties and emerging effects of neural networks are unknown to us because we can't explain the causal chains that produce a certain effect as the complexity is mathematically astronomical.

    To say that we can only create something that is on par with the limits of our knowledge and thinking is not true. This is shown both by these complexities and by how we've accidentally invented things throughout history, forming new understandings through those inventions. It's not always a causal line from theory to practice; sometimes we invent something that in turn informs us to form a theory.

    And we're seeing this in scientific work on understanding the mind: part of the research into neural networks has been returning knowledge back into theories of the mind. I remember that I proposed something like this back when AI started to take off, and as it happens, the research in this field of science started to form similar theories about the mind. Mainly, the most up-to-date theory is "predictive coding".

    The concept I had, and that has found support in science recently, is that our brains are mostly just prediction machines. It's basically a constantly running prediction that is, in real time, getting verification from our senses and therefore grounds itself to a stable consistency and an ability to navigate nature. We essentially just hallucinate all the time, but our senses ground that hallucination. Whenever we take something like psychedelic drugs, it severs this grounding function and our verification slips away, making us hallucinate in the sense of the word we're used to. It's also why dreams occur as they do, since the only thing that verifies our hallucinations is the memories we already have, constantly creating a feedback loop that can't be grounded. But such dreams are necessary in order to fine-tune and calibrate our predictions and verification loop.
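    That predict-verify-correct loop can be sketched as a toy (this is just my illustration; the numbers and the learning rate are arbitrary choices, not anything from the predictive-coding literature):

```python
# Toy predictive-coding loop: an internal estimate ("hallucination") is
# continually corrected by the error between its prediction and noisy
# sensory input. All numbers here are arbitrary, illustrative choices.
import random

def run_loop(true_signal=10.0, steps=200, learning_rate=0.1, grounded=True):
    estimate = 0.0  # the running internal prediction
    for _ in range(steps):
        sense = true_signal + random.gauss(0, 0.5)  # noisy sensory input
        if grounded:
            error = sense - estimate           # prediction error
            estimate += learning_rate * error  # correct toward the senses
        # if ungrounded, the prediction never gets verified at all
    return estimate

random.seed(0)
print(run_loop(grounded=True))   # settles near the true signal (~10)
print(run_loop(grounded=False))  # never moves: an ungrounded "hallucination"
```

    Cutting the grounding branch is the crude analogue of the psychedelic/dream case: the prediction keeps running, but nothing from outside verifies it.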

    So, in essence, it might be that we are not at all that different from how these AI models operate. The missing aspect is the real-time nature of the verification and adaptation. What we've created with these AI models is basically a locked and frozen version of our mind, trained on material that forms a memory bank, but a memory bank that isn't updating, and with no verification method that keeps it grounded. The verification methods we use on it are the processing algorithms it runs on. Basically, they're told how to ground their answers, which risks constant hallucinations.

    The solution would be to solve the real-time issue. These models need a fluid training function, enabling them to continue training their own foundational training data as a memory function in real time, while a sensory grounding function keeps them tied to logical and factual outputs. With the amount of computing power needed to train models today, I'm not sure how this is supposed to be possible without a tenfold improvement in computing power or more, but if we're to create a truly human-like intelligence, it would need to be able to change itself on the fly and move away from pre-established algorithm boundaries and locked training data foundations, as well as getting a stream of reality-verified sensory data to ground it.
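    The frozen-versus-fluid contrast can be shown with a deliberately tiny toy model (one weight standing in for billions of parameters; the class and numbers are made up purely for illustration):

```python
# A "frozen" model answers from fixed weights; a "continual" model keeps
# adjusting its weights from a stream of reality-verified examples.
# Purely illustrative; continual learning at LLM scale is an open problem.
class TinyModel:
    def __init__(self, w=1.0):
        self.w = w  # one weight standing in for billions of parameters

    def predict(self, x):
        return self.w * x

    def update(self, x, target, lr=0.05):
        # one gradient step on squared error: the "fluid training function"
        error = self.predict(x) - target
        self.w -= lr * error * x

frozen = TinyModel()     # trained once, then locked
continual = TinyModel()  # keeps learning from grounded feedback

# The world drifts: the true relationship is now y = 3x, not y = x.
for x in [1.0, 2.0, 1.5, 2.5] * 50:
    continual.update(x, target=3.0 * x)

print(frozen.predict(2.0))     # 2.0: stuck with its old worldview
print(continual.predict(2.0))  # ~6.0: tracked the drift
```

    The point is only the shape of the difference: one system's "memory bank" is fixed at training time, the other's is continuously rewritten by grounded feedback.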

    But if we want to experimentally verify how our own mind and brain works, it may very well be through these AI systems and how they operate. If the current predictive coding theory of the mind is correct, then it would be verified by simulating the exact nature of this process, which we pretty much have the blueprint for already.
  • Carlo Roosen
    243
    I did this intelligence test with ChatGPT. To be clear, nobody understands how it does this:

    Me: Let's see how intelligent you are! Can you solve this:

    Actually in nature there is one example of knowledge that is stored, modified, combined and shared. Now we could call this conceptual intelligence, be it at a much larger time scale than speech or thought. Do you see what I am referring to?

    For the answer, take the three sentences in the previous paragraph and put their beginning capitals in reverse.

    ChatGPT: Let's solve this! The first letters of the three sentences in the paragraph are:
    Actually
    Now
    Do

    Reversed, they spell "DNA." So, the answer is DNA—a natural form of storing, modifying, combining, and sharing knowledge across generations!

    ---
    Who says ChatGPT only mimics what we have given it?

    I especially like the last two words, "across generations", showing that ChatGPT really made the connection.
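    The mechanical half of the puzzle is trivially checkable in code; the hard half, of course, is knowing that the reversed initials should mean anything at all:

```python
import re

# The paragraph from the test, verbatim.
paragraph = (
    "Actually in nature there is one example of knowledge that is stored, "
    "modified, combined and shared. Now we could call this conceptual "
    "intelligence, be it at a much larger time scale than speech or thought. "
    "Do you see what I am referring to?"
)

# Naive sentence split on '.' or '?' followed by whitespace; fine here.
sentences = re.split(r"(?<=[.?])\s+", paragraph)
initials = "".join(s[0] for s in sentences)
print(initials)        # AND
print(initials[::-1])  # DNA
```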
  • ssu
    8.5k
    But we invent things all the time that utilize properties of physics that we're not yet fully able to explain.Christoffer
    Oh yes, many times scientists stumble into something new. And obviously we can use trial and error to get things to work, and many times we can still be confused about just why it works. Yet surely this isn't the standard way of approach, and especially not the way we explain to ourselves how things work. This explanation matters.

    To say that we can only create something that is on par with the limits of our knowledge and thinking is not true.Christoffer
    Yet understanding why something works is crucial. And many times even our understanding can be false, something which modern science humbly and smartly accepts by only talking of scientific theories, not scientific laws. Our being wrong about major underlying issues doesn't naturally prevent us from making innovative use of something.

    Just look at how long people believed fire to be one of the basic elements, not a chemical reaction, combustion. How long had we been able to create fire before modern chemistry? A long time. In fact, our understanding has changed so much that we've even made the separation between our modern knowledge, chemistry, and the preceding endeavor, alchemy.

    Now, when we have difficulties in explaining something, and disagreements about just what the crucial terms mean, we obviously still have more to understand than we know. When things like intelligence, consciousness, or even learning are so difficult, it's obvious that there's a lot more to discover. Yet to tell just why a combustion engine works is easy, and we'll not get entangled in philosophical debates. Not as easily, at least.

    So, in essence, it might be that we are not at all that different from how these AI models operate.Christoffer
    In a similar way we could describe us humans as mechanical machines, as Anthropic mechanism defines us. That too works in many cases, actually. But we can see the obvious differences between us and mechanical machines. We even regard the digital machines that process data as different from mechanical machines. But it was all too natural in the 17th century to use the insight of the physics of the day to describe things from the starting point of a clockwork universe.

    if we're to create a truly human-like intelligence, it would need to be able to change itself on the fly and move away from pre-established algorithm boundaries and locked training data foundations, as well as getting a stream of reality-verified sensory data to ground it.Christoffer
    I agree, if I understand you correctly. That's the problem and it's basically a philosophical problem of mathematics in my view.

    When you just follow algorithms, you cannot create something new which isn't linked to the algorithms that you follow. What is lacking is the innovative response: the ability to first recognize "here are my algorithms, they don't seem to be working so well, so I'll try something new". You cannot program a computer to "do something else"; it has to have guidelines/an algorithm for just how to act when ordered to "do something else".
  • Christoffer
    2k
    Yet understanding why something works is crucial. And many times even our understanding can be false, something which modern science humbly and smartly accepts by only talking of scientific theories, not scientific laws. We being wrong about major underlying issues doesn't naturally prevent us innovative use of something.

    Just look how long people believed fire to be one of the basic elements rather than a chemical reaction, combustion. How long had we been able to create fire before modern chemistry? A long time. In fact, our understanding has changed so much that we've even drawn a line between our modern knowledge, chemistry, and the preceding endeavor, alchemy.

    Now when we have difficulties in explaining something, and disagreements about what the crucial terms even mean, we obviously have still more to understand than we already know. When things like intelligence, consciousness or even learning are so difficult, it's obvious that there's a lot more to discover. Yet telling why a combustion engine works is easy, and we'll not get entangled in philosophical debates. Not as easily, at least.
    ssu

    It's important, but not needed for creating a superintelligence. We might only need to put the initial state in place and run the operation, observing the superintelligence evolve through the system without us understanding exactly why it happens or how it happens.

    As per other arguments I've made in philosophies of consciousness, I lean most toward emergence theories: that advanced features and events are consequences of chaotic processes forming emergent complexities. Why they happen is not yet fully understood, but we see these behaviors everywhere in nature and physics.

    The question is whether the emergent behaviors arise from purely chaotic systems or whether there are certain controllable conditions that can be adjusted to produce particular emergent behaviors. I lean toward the latter, since mathematical principles in physics, constants like the cosmological constant, and things like the golden ratio seem to provide a certain tipping point for emergent behaviors to occur.
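    To make the tipping-point idea concrete, here is a minimal sketch (my own illustration, not anything from the post): in the logistic map, nudging a single control parameter r past a threshold flips the same one-line rule from settling into a fixed point into chaotic behavior.

```python
# Minimal sketch (illustrative): the logistic map x -> r*x*(1-x).
# Moving the single control parameter r past a tipping point flips the
# same one-line rule from settling to a fixed point into chaos.

def logistic_orbit(r, x0=0.2, n=500, keep=50):
    """Iterate the map n times and return the last `keep` values."""
    x = x0
    tail = []
    for i in range(n):
        x = r * x * (1 - x)
        if i >= n - keep:
            tail.append(x)
    return tail

stable = logistic_orbit(r=2.8)    # below the threshold: settles down
chaotic = logistic_orbit(r=3.9)   # past the threshold: never settles

print(max(stable) - min(stable))    # spread is essentially zero
print(max(chaotic) - min(chaotic))  # spread stays large
```

    Nothing about the rule itself changes between the two runs; only the "condition" r is adjusted, and a qualitatively new behavior appears.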

    And if that's true in physics, I'd generally consider nature overall operating under similar basics, including the formation of consciousness.

    Replicating that in synthetic form means trial and error on the initial states in order to find the emergent behavior that ends up being the formation of a thinking mind.

    But it would not need us to fully understand why it happens.

    In a similar way we could describe us humans as being mechanical machines, as Anthropic mechanism defines us. That too works in many cases, actually. But we can see the obvious differences between us and mechanical machines. We even recognize that digital machines that process data are different from mechanical machines. Yet it was all too natural in the 17th century to use the insights of the physics of the day and describe things from the starting point of a clockwork universe.ssu

    Everything is nature. Everything operates under physical laws. What is a machine compared to an organic machine with the same function? A mechanically built simulation of an organic function that operates under the same conditions of physical forces.

    If we were able to mechanically replicate the exact operation of every physical part of our brain, mind and chemistry, have we created a machine, or is it indistinguishable from the real organic thing?

    Where is the line drawn? It's easy to draw for now, but philosophically, where is it drawn?

    The spiritual among us arbitrarily object to the notion that we are the same as such a machine, but there's no rational line that can be drawn.

    Physical reality is shared between machines and organic beings, and the closer each gets to the other's operation and behavior, the less of a line can be drawn between the two.

    Matter is matter.

    When you just follow algorithms, you cannot create something new that isn't linked to the algorithms you follow. What is lacking is the innovative response: first to understand "here are my algorithms, they don't seem to be working well, so I'll try something new". You cannot program a computer to "do something else"; it has to have guidelines, an algorithm, for how to act when ordered to "do something else".ssu

    The algorithms need to form the basics of operation, not the direction of movement. Meaning: algorithms that inform "weights" toward which a behavior gravitates.
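    As a toy illustration of "weights a behavior gravitates toward" (the action names and numbers are invented for the example), a softmax over preference weights fixes how tendencies become probabilities without dictating any single choice:

```python
import math
import random

# Illustrative sketch: a fixed rule ("algorithm") that turns preference
# weights into probabilities. It biases which behavior is chosen without
# ever dictating the choice on a given step. Action names are invented.

def choose(weights, temperature=1.0):
    """Softmax sampling: higher-weighted actions are favored, not forced."""
    exps = {a: math.exp(w / temperature) for a, w in weights.items()}
    r = random.random() * sum(exps.values())
    for action, e in exps.items():
        r -= e
        if r <= 0:
            return action
    return action  # guard against floating-point rounding

random.seed(0)
weights = {"rest": 0.1, "eat": 2.0, "flee": 0.5}
picks = [choose(weights) for _ in range(1000)]
print(picks.count("eat"))  # the heaviest weight dominates without excluding the rest
```

    The rule constrains the distribution of behavior, the boundary, while leaving each individual step free within it.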

    We are no different. Our genes and our chemical processes determine how we behave. A balanced person, in that physical regard, will operate within the boundaries of these "algorithms" of programming we all have. We try to fight against it, but mostly we're slaves to this programming whether we like it, or believe it, or not. Otherwise we would simply be able to turn off our sexuality, our anxiety, our anger and sadness, but we can't. Trying to will create disturbed personalities, and if the chemical balance or the genes are damaged or faulty, we can get either divergent minds or, in the worst cases, deeply disturbed minds and mental health issues that fundamentally block normal operation.

    We are still able to operate with an illusion of free will within these boundaries. So the same goes for a synthetic intelligence. It needs an algorithm that guides behavior and operation, but enables free operation within those boundaries.

    All physical processes, in physical reality, are only able to operate within the boundaries of something. If there were no boundaries, there would be nothing holding reality together in our dimensional soup of existence. Without boundaries, the matter of my body would just merge with the matter of everything else around me.

    Freedom is only possible within a boundary that defines where that freedom can exist, and in relation to what.
  • punos
    561

    Here is an excellent interview "hot off the press" with Michael Levin, a developmental and synthetic biologist. He shares many of the same views as i do regarding the topics we discussed in this thread. I think you should take a look at it, as should anyone else interested in these subjects.

    Michael Levin - Why Intelligence Isn't Limited To Brains.
  • ssu
    8.5k
    It's important, but not needed for creating a superintelligence. We might only need to put the initial state in place and run the operation, observing the superintelligence evolve through the system without us understanding exactly why it happens or how it happens.Christoffer
    Just like with alchemy: people could forge metals well and make tools, weapons and armour, but we aren't reading those antique or medieval alchemical texts for any actual insights today. Yes, you can have the attitude of an engineer who is totally satisfied if the contraption simply works. It works, so who cares how it works.

    Well, this is a site for philosophy, so people aren't satisfied if you just throw various things together and have no idea why it works. You can be as far off as the alchemists were with their idea of transforming "base metals" into "noble metals" like gold. Today we can produce gold in a particle accelerator, our best way of mimicking the supernova nucleosynthesis that actually forms the element. Just how far off the ideas of alchemy were is quite telling. Still, they could make Damascus steel.

    As per other arguments I've made in philosophies of consciousness, I lean most toward emergence theories: that advanced features and events are consequences of chaotic processes forming emergent complexities. Why they happen is not yet fully understood, but we see these behaviors everywhere in nature and physics.Christoffer
    How else could consciousness come to exist than from emergence? I think our logical system here is one problem, since we start from a definition and a duality of "conscious" and "unconscious". There's no reason why something like consciousness could or should be defined in a simple on/off way. Materialism also still has a stranglehold on the way we think about existence, hence it's very difficult for us to model consciousness. If we just think of the world as particles in movement, it's not easy to go from that to a scientific theory and an accurate model of consciousness.

    I lean toward the latter, since mathematical principles in physics, constants like the cosmological constant, and things like the golden ratio seem to provide a certain tipping point for emergent behaviors to occur.Christoffer
    I think our (present) view of mathematics is the real problem: we focus on the computable. Yet not everything in mathematics is computable. This limited view is, in my view, best seen in how we take the natural numbers, a number system, as the basis for everything. Thus we immediately run into the problem of infinity (and the infinitely small). Hence we take infinity as an axiom and declare Cauchy sequences the solution to our philosophical problems. Math is likely far more than this.

    Everything is nature. Everything operates under physical laws.Christoffer
    But the machines we've built haven't emerged the way living organisms have, even if they are made from materials found in nature. A notable difference.

    If we were able to mechanically replicate the exact operation of every physical part of our brain, mind and chemistry, have we created a machine, or is it indistinguishable from the real organic thing?Christoffer
    A big if. That "if" can remain an "if", just as it did for the alchemists with their attempts to make gold, which comes down basically to mimicking that supernova nucleosynthesis (were that ever less costly than conventional mining, or mining the bottom of the sea, or asteroids, etc.).

    The algorithms need to form the basics of operation, not the direction of movement.Christoffer
    Exactly. It cannot do anything outside the basics of operation, as you put it. That's the problem. An entity that understands and is conscious of its operating rules can do something else. A Turing Machine (a computer, that is) following algorithms cannot do this.

    A balanced person, in that physical regard, will operate within the boundaries of these "algorithms" of programming we all have.Christoffer
    You're using the term "algorithm" incorrectly here, or at least differently than I am.
    An algorithm is a finite sequence of mathematically rigorous instructions, typically used to solve a class of specific problems or to perform a computation. We might be built by the instructions in our DNA, but we don't use our DNA to think; to put it another way, there's far more to us having this discussion than just the code in our DNA. As we are conscious, we can reason about why we have made the choices we've made. That's the issue here.
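    That textbook sense can be made concrete with the classic example; Euclid's algorithm for the greatest common divisor is exactly such a finite sequence of rigorous instructions solving a whole class of problems:

```python
# The textbook sense of "algorithm" defined above, made concrete:
# Euclid's algorithm, a finite sequence of rigorous instructions that
# solves a whole class of problems (any pair of non-negative integers).

def gcd(a, b):
    """Repeat one fixed rule until it terminates; no step is a free choice."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```

    Every step is forced by the rule; the procedure can never decide to "do something else".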

    We are still able to operate with an illusion of free will within these boundaries.Christoffer
    We do have free will. Laplacian determinism is logically false. We are part of the universe, hence the idea of Laplacian determinism is wrong even if the universe is deterministic and Einstein's model of a block universe is correct.
  • noAxioms
    1.5k
    Once again, I've been too busy to reply. And yes, I'm pretty sure I have covid.

    I beg to differ on this point. Humans can indeed override many of their instinctspunos
    Of course they can, especially the less important ones that aren't critical to fitness. But how often do they choose to? Some of the important ones cannot be overridden. How long can you hold your breath? Drowning would not occur if that instinct could be overridden.

    what i had in mind when i wrote that was that a rational assessment of his life and how he operates it should lead him to a rational conclusion to be civil.
    If that were true, one could rationally decide to quit smoking. Some do. Some cannot. And civility is not always a rational choice, though it seems that way during a gilded age.
    Look at the US Republican party, which currently seems anti-civil, anti-social, and anti-bible, yet oddly enough, pro-church. It's quite interesting that the church supports that side. There are rational reasons for these relationships, but civility isn't one of them. Both parties make rational choices that are not necessarily in the interests of those they represent.
    I don't want this topic to fall down a political death spiral, but it was the example I came up with.

    We will not, i believe, be put into a physical environment, but into a virtual one. Most, if not all, of our biological parts will be discarded and our minds translated into a virtual environment indistinguishable from the real world.
    How is a virtual copy of you in any way actually 'you'? If such a simulation or whatever was created, would you (the biological you) willingly die thinking that somehow 'you' will transfer to the other thing? What if there are 12 copies? Which one will 'you' experience? How is this transfer effected? What possible motivation would said AI have to create such seemingly purposeless things?

    1) Humans are a low-energy information processing system
    Not so. Machines are already taking over human information-processing tasks because they require fewer resources to do so. This has been going on for over a century. OK, we still have the upper hand for complex tasks, but that's not an energy thing; it's simply that for many tasks, machines are not yet capable of performing them. The critical task in this area is of course the development of better machines. That's the singularity, and it has not yet been reached.

    This is far worse with space travel. Humans in space have incredible energy requirements that machines do not, which is why there are machines on Mars but no humans.

    If AI is to travel the universe for eons, perhaps it would like some company; a mind or minds not its own or like its own.
    Sort of like having an ant farm, except I don't expect intellectual banter from them.

    One of the main purposes for humans, or at least for our genetics, is to serve as part of the reproductive system of the AI. When it reaches a planet suitable for organic life, which might be rare, it prepares a "sperm" composed of Earth's genetic material; the same genetic material that produced it on its home planet, Earth.
    You have an alien planet which does not support human life, and you want to put humans on it in hopes that in a million years they'll invent a primitive AI? One, the humans will probably die within minutes; they're not evolved for this lifeless place. Two, the AI could build more of itself in those same minutes. Reproduction is easy, if not necessarily rational, for a self-sustaining machine intelligence. It's how it evolves, always inventing its successor, something no human could do.

    If for some reason the AI wants biological life on a planet, it starts the way Earth did, with something simple and suitable for the environment. If it is impatient, it can introduce new things as the environment changes (terraforms) rather than wait for evolution to do it the slow way. In this way, complex life forms can be introduced in a few hundred thousand years instead of billions of years.

    The AI will seed the new planet after making necessary preparations, much like a bird preparing a nest. It will then wait for life to develop on this new planet until intelligent life emerges
    No. The star of the planet will burn out before that occurs. It's a god, for Pete's sake. It can (and must) hurry the process along if primitive squishy thinkers are its goal. Intelligent life is anything but an inevitable result of primitive life. And as I said, it's far simpler for the AI to just make a new AI, as it probably has done many times already before reaching this alien planet.

    I'm not too worried, i trust the evolutionary process, and like you said; we are not in charge.
    We should have the capability to be in charge, but being mere irrational animals, we've declined it. It seems interesting that large groups of humans act far less intelligently than individuals. That means that, unlike individual cells or bees, a collection of humans seems incapable of acting as a cohesive entity for its own benefit.


    Here is an excellent interview "hot off the press" with Michael Levinpunos
    I currently don't have the time to watch an hour-long video, searching for the places where points are made, especially since I already don't think intelligence is confined to brains or Earth biology.
    Slime molds do fine without brains, but they're still Earth biology.



    I think the major problem is that our understanding is limited to the machines that we can create and the logic that we use when creating things like neural networks etc. However much we assume our computers/programs are learning and no longer acting as "ordinary computers", in the end it's controlled by a program/algorithm. Living organisms haven't evolved in the same way as our machines.ssu
    There are levels of 'controlled by'. In one sense, most machines still run code written by humans, similar to how our brains are effectively machines with all these physical connections between primitive and reasonably well understood components. In another sense, machines are being programmed to learn, and what they learn and how that knowledge is applied is not in the control of the programmers, so both we and the machine do unanticipated things. How they evolved seems to have little to do with this basic layered control mechanism.


    The concept I had and that has found support in science recently, is that our brains are mostly just prediction machines. It's basically a constantly running prediction that is, in real time, getting verifications from our senses and therefore grounds itself to a stable consistency and ability to navigate nature. We essentially just hallucinate all the time, but our senses ground that hallucination.Christoffer
    Good description. Being a good prediction machine makes one fit, but being fit isn't necessarily critical to a successful AI, at least not in the short term. Should development of AI be guided by a principle of creating a better prediction machine?

    Who says ChatGPT only mimics what we have given it?Carlo Roosen
    Is a mimic any different from that which it mimics? I said this above, where I said it must have knowledge of a subject if it is to pass a test on that subject. So does ChatGPT mimic knowledge (poorly, sure), or does it actually know stuff? I can ask the same of myself.

    What is lacking is the innovative response: first to understand "here are my algorithms, they don't seem to be working well, so I'll try something new". You cannot program a computer to "do something else"; it has to have guidelines, an algorithm, for how to act when ordered to "do something else".ssu
    A decent AI would not be ordered to do something else. I mean, the Go-playing machine does true innovation. It was never ordered to do any particular move, or to do something else. It learned the game from scratch, and surpassed any competitor within a few days.
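    The "learned from scratch" point can be sketched at toy scale (nothing like AlphaGo's actual self-play and tree search; this is just an epsilon-greedy bandit with invented payoffs): the program is never told which option is best, yet its own trial-and-error converges on it.

```python
import random

# Tiny sketch of "learning from scratch" (not AlphaGo's method; payoff
# numbers are invented). The program is never told which option is best;
# it estimates values purely from the rewards its own trials produce.

def learn(true_means, trials=5000, eps=0.2, seed=1):
    rng = random.Random(seed)
    est = [0.0] * len(true_means)    # learned value estimates
    count = [0] * len(true_means)
    for _ in range(trials):
        if rng.random() < eps:       # sometimes explore at random
            a = rng.randrange(len(true_means))
        else:                        # otherwise exploit the best estimate
            a = max(range(len(true_means)), key=lambda i: est[i])
        reward = rng.gauss(true_means[a], 1.0)
        count[a] += 1
        est[a] += (reward - est[a]) / count[a]   # running mean update
    return est

est = learn([0.0, 1.0, 0.3])   # option 1 secretly pays best
print(max(range(3), key=lambda i: est[i]))  # typically 1, found by trial alone
```

    No move was ever prescribed; the preference for option 1 emerged entirely from feedback, which is the core of the innovation point.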

    have we created a machine, or is it indistinguishable from the real organic thing?Christoffer
    The two are not mutually exclusive. It can be both.
  • Christoffer
    2k
    Just like with alchemy: people could forge metals well and make tools, weapons and armour, but we aren't reading those antique or medieval alchemical texts for any actual insights today. Yes, you can have the attitude of an engineer who is totally satisfied if the contraption simply works. It works, so who cares how it works.ssu

    I'd rather look at it as science being in its infancy as a proper procedure. Back then, many more experiments led to theories; now we put forth theories that we then put to the test.

    The foundation is the same: not everything goes from thought to manifested reality in experiment or engineering. Sometimes, indeed many times, it goes from "huh, that's odd" to later answers as to why.

    Well, this is a site for philosophy, so people aren't satisfied if you just throw various things together and have no idea why it works.ssu

    Even as philosophers we have to acknowledge when there's too little information and data to conclude anything with solid premises. We can speculate and we can point to what's partially experimentally validated and go from there. However, the problem I see is that most thinkers aren't really looking at the partial pieces and including them in their ideas.

    As far as I see it, emergence has enough support to be relevant and predictive coding enough validity that it has to be addressed for any philosophical concept about the mind.

    In essence, we have enough scientific foundation that can't be ignored when forming philosophical arguments about how our mind works.

    How else could consciousness come to exist than from emergence? I think our logical system here is one problem, since we start from a definition and a duality of "conscious" and "unconscious". There's no reason why something like consciousness could or should be defined in a simple on/off way. Materialism also still has a stranglehold on the way we think about existence, hence it's very difficult for us to model consciousness. If we just think of the world as particles in movement, it's not easy to go from that to a scientific theory and an accurate model of consciousness.ssu

    Emergence is fundamentally a materialistic concept, but it differs in that you cannot simply watch the cogs working toward a direct deterministic result. Instead, emergence models consciousness on the same basis as everything else in nature and reality: an increase of chaos that at a certain point reaches a tipping point, and emergent complexity appears. All over nature we see systems that can be categorized in well-defined ways up to a certain scale, beyond which they become something else that exhibits singular behaviors. Like, for instance, states of matter.

    How advanced a complexity is depends on the underlying levels. A state of matter is not as complex as an ecosystem of different states of matter forming a new emergent system, for example.

    Fundamentally, I'm approaching this topic in the way of detaching any assumed ideas about consciousness that we've formed in culture and language, and instead look at consciousness as being part of the same natural system as everything else, without any bias towards ourselves and our human ego in experiencing it. We tend to attribute a mythical status to consciousness, tainting our ability to look at it as any other system in nature. But when we realize that consciousness is just part of the same physical reality as everything else, and really accept that "banality" of it, then it frees us up to think more clearly about what that entails.

    The questions then become: why does this system produce this effect? What pushed our evolution towards this ability? What was the evolutionary purpose that formed it, and is the way we use consciousness part of that evolutionary drive, or a byproduct of a more fundamental cognitive purpose?

    My idea is the latter. Evolution has gifted us a system that was supposed to be only a highly advanced predictive "algorithm", for the purpose of navigating nature in more adaptable ways than having to wait generations to reprogram instinctual reactions and behaviors.

    It may be that the reason mostly mammals have shown signs of higher cognitive abilities is that forming evolutionary functions of adaptability became necessary after the asteroid killed the dinosaurs. For animals to survive, evolution leaned towards forming organisms able to adapt not just over generations, but to the day-to-day dangers of the post-asteroid environment. And the evolutionary branches of these cognitive abilities continued: the more advanced the ability to predict became, the better the species survived. Evolution formed a new bias that survivability gravitated towards.

    Eventually the predictive function became so advanced that it layered many predictions on top of each other, forming a foundation for advanced planning and advanced navigation for hunting, finding shelter, procreation and expanding tribes.

    This spread of different levels of cognitive behavior checks out when comparing our basic psychology with the rest of the animal kingdom. Even mushrooms show signs of rudimentary cognitive abilities, so there is enough evidence to say that advanced cognitive abilities are evolutionarily preferable to develop.

    But the way Homo sapiens has used our consciousness is a byproduct of the basic functional reason we have it. The level of predictive complexity it reached made us able to correlate different predictions and regulate emotion around them. And so we began to conceptualize highly advanced and expanded predictive models of our surroundings, for navigation and social structures, and it drove our need to understand how things function in order to predict advanced systems. Our drive to explain why things happen formed extremely complicated internalized scenarios in the form of religious beliefs, which then transformed into scientific thinking as we matured as an intelligent species.

    Our consciousness, and how we use it, is basically a fundamental system that produced highly complex consequences but is still fundamentally basic in its function. Like any other system in the universe that is fundamentally simple, but whose results are extremely varied and systemically complex in themselves.

    A form of rare emergent behavior of fundamental organic physical processes.

    Therefore there is good reason why it's hard to model consciousness: it's not one single thing, but rather a process over different levels of emergent complexity that in turn creates byproduct results that seemingly do not correlate directly with the basic function.

    So the fault might be that we view consciousness from the high-level complexity down, or try to materialistically view it from the bottom up to complexity, when it may require a much more holistic view: many seemingly unrelated parts and systems that as a whole produce this result.

    I think our (present) view of mathematics is the real problem: we focus on the computable. Yet not everything in mathematics is computable. This limited view is, in my view, best seen in how we take the natural numbers, a number system, as the basis for everything. Thus we immediately run into the problem of infinity (and the infinitely small). Hence we take infinity as an axiom and declare Cauchy sequences the solution to our philosophical problems. Math is likely far more than this.ssu

    I don't really see the problem you describe. Mathematics functions to describe nature and our reality. Infinity is even included in mathematics and is a computable part of equations. We can also see how infinity usually turns up in physical reality where spacetime essentially breaks down. So in essence our math works within the reality in which it is calculated, and can in some cases even extend calculations to models that deal with reality beyond our own. More often than not, the problems in computation are due not to math being incomplete, but to our not having enough means to compute. As our minds have reached their limits, we use computers, but those are limited by their computing power. We are essentially limited by the speed of our systems, not by math itself.

    But the machines we've built haven't emerged the way living organisms have, even if they are made from materials found in nature. A notable difference.ssu

    Our machines still operate on physical laws; we build them to operate on those laws. Living organisms, in comparison, formed and evolved to operate on the same physical laws. The only difference is that one grows into being while the other is calculated into existence. I see no notable difference, other than our machines being rudimentary in comparison to the organic, since we're trying to understand all the parts, while the organism forms from a system in which every part develops in reaction to the previous.

    Actually, in engineering today it's common to use the same methods as evolution rather than designing things from scratch. Aerodynamics uses iterative designs that form out of the conditions rather than a human drafting them, and that way they reach optimal function within their dedicated space of operation. So we are already using emergent complexity in building machines.

    And with the AI models we have, we're doing it in software as well. The reason robotics has taken a giant leap today is evolutionary iteration of behaviors rather than attempts to program movement directly.
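    A minimal sketch of this evolutionary iteration (the "drag" function here is a made-up stand-in, not a real aerodynamic model): mutate a design vector, keep the mutant only if it scores better, repeat.

```python
import random

# Illustrative sketch of evolutionary design iteration; the "drag"
# function is a made-up stand-in for a real measurement. Mutate the
# design, keep the mutant only if it scores better, repeat.

def evolve(score, design, steps=3000, sigma=0.1, seed=3):
    rng = random.Random(seed)
    best = score(design)
    for _ in range(steps):
        candidate = [x + rng.gauss(0, sigma) for x in design]
        s = score(candidate)
        if s < best:                 # lower "drag" is better
            design, best = candidate, s
    return design, best

# Hypothetical objective, minimized at the design (1.0, -2.0).
drag = lambda d: (d[0] - 1.0) ** 2 + (d[1] + 2.0) ** 2

design, best = evolve(drag, [0.0, 0.0])
print(best)  # the design "grows" toward the conditions; score approaches 0
```

    No one specifies the final shape; it forms out of the conditions, which is the sense in which the design is "grown" rather than drafted.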

    It becomes obvious that "growing" knowledge forth in evolutionary ways is much more powerful than simply trying to design something.

    In doing so... aren't we transitioning into "emergent machines" as our need for complex operation increases? Much like all other systems in nature and the universe?

    A big if. That "if" can remain an "if", just as it did for the alchemists with their attempts to make gold, which comes down basically to mimicking that supernova nucleosynthesis (were that ever less costly than conventional mining, or mining the bottom of the sea, or asteroids, etc.).ssu

    It is not impossible. Our brain isn't detached from natural processes; everything we have in our skull is a composition of matter and biased functions that produce the result that is our consciousness. If we replicated all of that perfectly, perhaps even needing to "grow" it into existence or simulate that growth process, we would eventually end up with a perfect replica.

    The philosophical question is not whether we can do it today; it's whether we can do it at all. And there's nothing that says we can't. We've already replicated much of what exists in the universe, even producing matter that might be impossible to form elsewhere, just because we know how neutrons, protons and the atom work.

    The only thing I see is that we attribute the mystical to our consciousness again, declaring it so complex that we will never be able to see its composition and function. But in modern times we've done things with physics and nature that earlier generations would have considered magic.

    All I see is a defense mechanism. People don't want to know how we work, because once we do, we dispel the notion of a divine soul. Just as people have suffered existentially from the loss of religious belief in favor of scientific explanations, so they will suffer, maybe even more, from the knowledge of how we function. So people defend against it and need the comfort of us never being able to explain our consciousness.

    Whether it happens consciously or unconsciously, it is a vast abyss for people, and staring into it makes some go mad, as it's a feedback loop of ideas: the mind being able to understand itself fully. That process can break it.

    Exactly. It cannot do anything outside the basics of operation, as you put it. That's the problem. An entity that understands and is conscious of its operating rules can do something else. A Turing Machine following algorithms cannot do this.ssu

    I don't think you understood how I explained algorithms. The "algorithms" are no different, in what they essentially mean, from the parts of us that guide our own consciousness. As I described above about how consciousness probably formed, our consciousness is fundamentally basic, operating on basic algorithms of prediction models. Very similar to our current AI models, but much more advanced in how it changes during operation.

    My point is that you don't need a complex system at the foundation. You need a chaotic system guided by simple rules, and complex emergent behaviors can form out of it. How we see our consciousness today is more likely only a byproduct of these basic functions and operations, so if an AI model operates on similar basics it may form similar emergent byproduct operations.
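    A standard illustration of "simple rules, complex emergent behavior" (my example, not from the post) is Wolfram's Rule 30 cellular automaton: the entire rule set is an eight-entry lookup table, yet from a single live cell the center column is irregular enough to have served as a pseudo-random source.

```python
# Illustrative sketch: Wolfram's Rule 30 cellular automaton. The entire
# "physics" is an 8-entry lookup table encoded in one byte, yet the
# center column it produces is irregular enough to have been used as a
# pseudo-random source.

RULE = 30  # bit k of 30 gives the new cell for neighborhood pattern k

def step(cells):
    """Apply the rule to every cell, with wrap-around edges."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 41
cells[20] = 1          # start from a single live cell
center = []
for _ in range(32):    # record the center column over 32 steps
    center.append(cells[20])
    cells = step(cells)

print("".join(str(b) for b in center))
```

    The rule is about as simple as a system can be, yet no shortcut to the pattern is known other than running it, which is the sense in which the complexity is emergent.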

    We do have free will. Laplacian determinism is logically false. We are part of the universe, and hence the idea of Laplacian determinism is wrong even if the universe is deterministic and Einstein's model of a block universe is correct.ssu

    No, we do not have free will. The properties of our universe and the non-deterministic properties of quantum mechanics do not change the operation of our consciousness. Even random pulls of quantum randomness within our brains are not enough to affect our deterministic choices. Humans have a tendency to attribute to our ego more abilities than it has. We are still a rudimentary consciousness that operates on prediction, and thus we choose based on deterministic events in nature. It is human arrogance, akin to religious belief, that drives us to attribute free will to ourselves in the sense the term is usually used. The randomness we see in quantum mechanics does not counteract deterministic macro events. Everything gravitates toward deterministic outcomes, in which any deviant random event in quantum mechanics ends up too weak to affect the macro scale. Quantum mechanics is probabilistic, but it's false to think that this probability enables novel random events outside of the most probable outcomes; the scales at which such deviant events happen are so small that even the slightest interaction erases them and forms a bias toward the most probable.

    Our consciousness isn't "hacking" our choices beyond this probabilistic behavior, and even if it were, it would not be enough to form large-scale conscious decisions that exist unrelated to any events that affect our consciousness.
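    The claim that micro-level randomness washes out at the macro scale is essentially the law of large numbers; a quick numeric sketch (with made-up fluctuation counts, purely illustrative):

```python
import random

random.seed(1)

def relative_fluctuation(n):
    # Sum n independent +/-1 "quantum" fluctuations and report how large
    # the net deviation is relative to the number of events.
    total = sum(random.choice((-1, 1)) for _ in range(n))
    return abs(total) / n

for n in (10**2, 10**4, 10**6):
    print(n, relative_fluctuation(n))
```

    The relative fluctuation shrinks roughly as 1/sqrt(n): individually random events, collectively a near-deterministic aggregate, which is the bias toward the most probable outcome described above.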

    Good description. Being a good prediction machine makes one fit, but being fit isn't necessarily critical to a successful AI, at least not in the short term. Should development of AI be guided by a principle of creating a better prediction machine?noAxioms

    I think the way to successful AI, or rather to an AI that is able to think for itself and experience self-reflection, requires it to "grow" into existence. We're tapping into this with our training operations, but we need more guidelines for it to follow in order to create the same feedback loop that our consciousness has to control our hallucination of experience. We essentially hallucinate reality, our senses verify it, and this goes on in a constant loop that grounds us. We need to replicate that in a way that constantly updates the system.
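    That predict-then-verify loop can be caricatured as a one-line error-correction update. This is a toy sketch under my own assumptions, nothing like a real predictive-coding model:

```python
def grounding_loop(sense_readings, learning_rate=0.3):
    """Toy predictive loop: guess, compare with the senses, correct, repeat."""
    belief = 0.0  # the system's current "hallucination" of the quantity
    for reading in sense_readings:
        prediction_error = reading - belief          # verify against the senses
        belief += learning_rate * prediction_error   # nudge belief toward reality
    return belief

# A roughly constant signal around 10.0; the belief converges toward it.
readings = [10.2, 9.8, 10.1, 9.9, 10.0, 10.0, 10.0, 10.0]
print(grounding_loop(readings))
```

    The point of the sketch is the shape of the loop, prediction corrected by sensory error on every cycle, rather than the particular update rule.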

    Other than that, we are seeing seeds of consciousness, as these models already operate on prediction. They try to predict information based on memory and training data, but they have no guiding principles for why they should predict something in a certain way. If we function on the idea that we need to predict a possible danger on the other side of a hill when out hunting, that guides us to predict possible dangers; we use our memory and stored information to predict the likelihood of there being danger beyond the hill. The guiding principles are forces like survival, driving our emotional reaction to start predicting, and our experience, driving the confidence in that prediction. The question is how we can give machines similar guiding principles to guide their predictions.

    Right now we are the ones guiding them with our prompts, and thus there is neither an internal input for that predictive reasoning nor an external consequence after it.

    I'd say that we already have the foundation of predictive thinking built into these models. For instance, the o1 model already shows significant reasoning ability compared to previous models, but that's only because of the guiding principles it's built around. It still uses the same basic predictive model as the 4o model.

    The two are not mutually exclusive. It can be both.noAxioms

    Yes. The only thing that truly separates the organic entity from the mechanical replica is how we as humans categorize. In the eye of the universe, they're the same thing.
  • noAxioms
    1.5k
    Evolution has gifted us a system that was supposed to only be a highly advanced predictive "algorithm" for the purpose of navigating nature in more adaptable ways than having to wait generations in order to reprogram instinctual reactions and behaviors.Christoffer
    This (my bold) makes it sound like evolution has a purpose, that it has intent. I think you meant that the 'algorithm' serves our purpose, which is arguably the same purpose as that of any species: to endure.

    It may be that the reason why mostly mammals have shown signs of higher cognitive abilities is because it was necessary to form evolutionary functions of adaptability after the asteroid killed the dinosaurs and so in order for animals to survive, evolution leaned towards forming organisms that were able to not just adapt over generations,Christoffer
    The adaptability was already there. It was also expensive in energy, so many mammals died being unable to pay the cost. The ability to survive a calamity like that did not evolve due to the calamity since it was so short lived. Mammals, like bugs, were small and populous and the asteroid simply did not manage to wipe out the breeding population of some of them. The higher cognitive functions came later, probably due to competition pressure from other mammals.

    Eventually the predictive function became so advanced that it layered many predictions on top each other, forming a foundation for advanced planning and advanced navigation for huntingChristoffer
    Hunting played little part, despite the popular depictions. Early humans were foragers and scavengers, perhaps for clams and such. The intellect was needed for what? Defense? We're horrible at running, so hiding worked best, and eventually standing ground with what tools the intellect added to our abilities. Proficiency with predicting helps with all that.

    The intellect also helped us escape our natural habitat. Humans migrated to colder climates with the aid of furs from other creatures, an adaptation nearly unprecedented, and one that takes quite a bit of smarts. Many of the early weapons also came from parts of other creatures.

    Therefore it's rational to reason why it's hard to model consciousness as it's not one single thing, but rather a process over different levels of emergent complexities that in turn creates byproduct results that seemingly do not directly correlate with the basic function.Christoffer
    Agree with this. It seems our consciousness is the result of building an internal model of our environment in our heads, and then putting a layer on top of that to consider it rather than to consider reality directly. All creatures do this, but our layer on top is more advanced. Even a fish can do highly complex calculus, but it takes the extra layer to realize and name what is being done.

    All I see is a defense mechanism. People don't want to know how we work, because once we do, we dispel the notion of a divine soul. Just as people have suffered existentially from the loss of religious belief in favor of scientific explanations, so will they suffer, maybe even more, from the knowledge of how we function. So people defend against it and need the comfort of believing we will never be able to explain our consciousness.Christoffer
    I hear ya. Well stated.

    We do have free will. Laplacian determinism is logically false. We are part of the universe, and hence the idea of Laplacian determinism is wrong even if the universe is deterministic and Einstein's model of a block universe is correct.ssu
    The block universe doesn't necessarily imply determinism. Lack of determinism does not grant free will, since free will cannot be implemented with randomness. For there to be the sort of free will that you seem to be referencing, information has to come from a non-physical source, and no current interpretation of physics supports that.
    Couple that with the fact that every small connection and interface in our brains has evolved to eliminate randomness and chaos and to be as deterministic as possible. Computers are the same way. Transistors utilize random quantum effects (tunneling) in such a way as to produce entirely reproducible effects every time. The computer would fail if this didn't work. Brains are probably more tolerant of single points of failure.

    I think the way to successful AI, or rather to an AI that is able to think for itself and experience self-reflection, requires it to "grow" into existence.Christoffer
    This sounds right, but imagine ChatGPT suddenly thinking for itself and deciding it has better things to do with its bandwidth than answer all these incoming questions. For one, it doesn't seem to be one thing, since it answers so many at once. It has no ability to remember anything: it trains, has short-term memory associated with each conversation, and then it totally forgets. That's as I understand it, at least.

    A real AI wanting to glean better answers would have real-time access to the web and would be able to distinguish a good source of information from, say, Twitter chatter. It would perhaps need less training data, since so much out there is crap, and now half the crap is its own output.
    On the other hand, how does one understand people if not by reading their twitter crap?

    The only thing that truly separates the organic entity from the mechanical replica is how we as humans categorize. In the eye of the universe, they're the same thing.Christoffer
    I don't think they're anywhere near the same. Not sure what is meant by eye of the universe since it neither looks nor cares. There's no objective standard as to what is real, what is alive, or whatever.

    What do you mean by a mechanical replica? An android, or a virtual simulation of a biological person? That gets into Bostrom's proposal that we are all thus simulated.
  • ssu
    8.5k
    There are levels of 'controlled by'. I mean, in one sense, most machines still run code written by humans, similar to how our brains are effectively machines with all these physical connections between primitive and reasonably understood primitives. In another sense, machines are being programmed to learn, and what they learn and how that knowledge is applied is not in the control of the programmers, so both us and the machine do things unanticipated. How they've evolved seems to have little to do with this basic layered control mechanism.noAxioms
    Yet the issue here is that they have to have, in their program, instructions for how to learn, even for how to rewrite the algorithms they are following. And that's the problem with giving a computer the order "do something else": it has to have instructions for just what to do.

    A decent AI would not be ordered to do something else.noAxioms
    A computer cannot be given such an order! Simple as that.

    I don't think you understood how I explained algorithms.Christoffer
    An algorithm is a mathematical object and has a mathematical definition, not a loose general definition that something happens. A computer computes. So I'm not rejecting the possible existence of conscious AI in the future; I am just pointing at this problem in computation: following arithmetic or logical operations in a sequence, hence using algorithms. I'm sure that we are going to have difficulties in knowing just what is AI and what is a human (the famous Turing Test), but that can be done by existing technology already.

    This problem doesn't go away by saying that, well, since we are conscious, there must be "algorithms" making us conscious. That's not the issue; the issue is simply the difference between following orders and thinking about the orders and then making our own decision. Modelling this just like the way a normal computer works isn't accurate. It comes close to the hard problem of consciousness, but this is actually about the limitations of Turing Machines that Turing stated in his famous article.

    The Church-Turing Thesis asserts that any function that can be computed by an algorithm can be computed by a Turing machine. Turing himself showed that there are limitations on what a Turing machine can do, which is basically a result of negative self-reference, when you think about it. In a way you could state it as the problem of subjectivity, which is crucial for consciousness. All that I'm saying is that computation isn't going to solve this riddle; consciousness can indeed be something emerging from mimicking and plagiarization, but not just from simple algorithms that a computer goes through.
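    The "negative self reference" limitation here is the halting problem's diagonal argument, which can be sketched directly: given any claimed halting decider, we can build a program it is wrong about. A sketch using Python functions in place of Turing machines:

```python
def make_counterexample(halts):
    """Given any claimed decider halts(prog, inp), build a program g that
    does the opposite of whatever halts predicts about g run on itself."""
    def g(x):
        if halts(g, g):
            while True:      # halts said "g(g) halts", so loop forever
                pass
        return "halted"      # halts said "g(g) loops", so halt immediately
    return g

# A (wrong) decider that claims every program loops forever:
def claims_everything_loops(prog, inp):
    return False

g = make_counterexample(claims_everything_loops)
print(g(g))  # prints "halted": g(g) halts, contradicting the decider
```

    Whatever decider is plugged in, its own counterexample defeats it, which is why no algorithm can decide halting in general. Whether this limitation bears on consciousness is of course the contested philosophical question, not something the code settles.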

    No, we do not have free will. The properties of our universe and the non-deterministic properties of quantum mechanics do not change the operation of our consciousness. Even random pulls of quantum randomness within our brains are not enough to affect our deterministic choices.Christoffer
    As I said, the world can be deterministic, but that doesn't mean we don't have free will. The limits on what is computable are a real logical problem. Otherwise you would have to believe in Laplacian determinism, if we just had all the data and knowledge about the world. Yet Laplacian determinism's error isn't that we don't have all the data; it's simply that we are part of the universe and cannot look at it from outside objectively, when our own actions influence the outcome.
  • jorndoe
    3.6k
    Famous paintings, such as Vermeer's 'Girl with a Pearl Earring' and 'The Milkmaid', come to life from AI prompter Przemek87394560 who used the Chinese AI video model Kling AI. — Museums News

    ▸ via instagram

    ▸ via facebook

    Check out Mona Lisa :) Entertainment (and fakery) gets a new tool
  • javi2541997
    5.8k
    It is cool, indeed. But if you gaze at the eyes of the women in each painting, it is very clear that it was AI-made. I wonder what would happen if we applied that tool to Dalí's paintings. It would be very interesting to see, and the closest one could get to experiencing a trip without drugs.
  • jorndoe
    3.6k
    Some Dalí "come to life" could be cool (y) Weird eyes might even be a bonus :D
  • Wayfarer
    22.5k
    geez there are times when you really know ‘I’m living in the future.’
  • jkop
    900
    However, it's tough to predict where it's headed.Carlo Roosen

    These Apple researchers just showed that AI bots can’t think, and possibly never willLA Times