• TogetherTurtle
    353
    My line of thinking is that humans are self-aware because they can distinguish themselves from everything else. They are aware that they are themselves. Animals more or less exist within their ecosystem. Of course, at the subatomic level, we are nothing but particles bound together but never touching. We are all just pieces of the primordial soup the universe is made out of, and will return to it someday. Animals and plants are one step above that in terms of scale. They are aware to some extent that they are just one thing, but are still very connected to the world. They don't seem to fully grasp that they are alive. When their lives are in danger, they fear primarily out of instinct, not because of losing their ability to experience the world. Often when a person is on their deathbed or bleeding out, you hear them talking about what they are leaving behind, what they never got to do, fearing whether there is anything after death. A dog or cat fears death because it is biologically hardwired to fear things that can kill it, the same as humans, but when you see a man on his deathbed, you come to realize that there is much more than just a mechanism for self-preservation at play. While humans are biologically similar to animals in their scale and reliance on biology, it is the complexity of the human brain that brings the next level: it is why man is aware that he is a separate entity from the universe, or at least that his mind is. While you need to consume resources from the universe to retain that self-awareness in the form of biological life, you can recognize yourself as a separate thing entirely.

    Or I'm wrong. There are only two possibilities, right? What do you think?
  • tom
    1.5k
    Or I'm wrong. There are only two possibilities, right? What do you think?
    TogetherTurtle

    I think that most people, when confronted with the idea that animals are not sentient, do not possess qualia, don't even know they exist, etc., find that notion repulsive and experience various degrees of emotional outrage.

    However, I gave an outline of various hints and arguments that this is indeed the case. There is a computational and epistemological argument that they cannot know anything beyond what they are programmed to know, and they are not programmed to be self-aware or other-aware, because they, lacking appropriate hardware, cannot be.

    Another argument comes from the impressive work of the psychologist R. W. Byrne. Animals learn by behaviour parsing, not by understanding.
    http://pages.ucsd.edu/~johnson/COGS260/Byrne2003.pdf

    For some reason we find the notion that animals don't suffer horrifying, when it is in fact a blessing.
  • Heiko
    519
    AI is exciting only when one cannot foresee what it will do.
  • TogetherTurtle
    353
    However, I gave an outline of various hints and arguments that this is indeed the case. There is a computational and epistemological argument that they cannot know anything beyond what they are programmed to know, and they are not programmed to be self-aware or other-aware, because they, lacking appropriate hardware, cannot be.
    tom

    In that we are in agreement. Self-awareness is simply the result of superior hardware and software.

    For some reason we find the notion that animals don't suffer horrifying, when it is in fact a blessing.
    tom

    I think that it would be impossible for animals to not have emotions. They are a product of evolution, and are useful in the wild. If they didn't have them, it would be better for us, but I don't really buy that my cat is faking it when he's glad to see me.

    While I don't think that it is right to treat animals poorly on purpose, some killing is inevitable. Meat and its consumption are deeply ingrained in the culture of almost every people on earth. We are omnivores after all. Animals feel emotion, but in the human world we overlook feelings for the greater good, so why wouldn't we apply that to animals as well? Death is simply the end of life, destined to happen from birth. Animals are our friends, and we should treat them well, but in the end, that's just how things are on our planet. Food chains and all. There is no reason to fear the facts of life.

    As the more intelligent beings, I would like to believe it is our responsibility to see to it that the life we are so closely related to, and so dependent upon, is treated well for as long as we can afford to let it live. Someday we will know enough to gift them the blessings nature has given us naturally, and we will be able to create identical copies of their meat easily, just from the elements that make it up. Today, however, is not that day.
  • TheMadFool
    13.8k
    I think this is problematical, as I think that 'complete self awareness' of that kind is a logical impossibility.
    Wayfarer

    I fail to see a contradiction in the idea of complete self-awareness. Think of hunger, thirst, pain, the senses, etc. These sensations are a form of awareness of the chemical and physical states of the body or the environment.

    Why do you think total self-awareness is an impossibility?
  • TheMadFool
    13.8k
    We are not like computers, at all.
    Bitter Crank

    We're NOT computers, I agree. But are we machines, just of a higher order? That's what I want to know.
  • TheMadFool
    13.8k
    Why, you experience the illusion of course.
    TogetherTurtle

    I mean there must be an x for which consciousness, or whatever else, is an illusion. Is this x real, or also an illusion?

    Are you saying there is no such thing as consciousness?
  • TheMadFool
    13.8k
    :up:

    So you think social existence contributes towards intelligence. I think so too, but what about the ''fact'' that geniuses are usually depicted in culture as socially inept? Is this just one of those myths spawned by movies and literature, or is there some truth to it?

    I suppose geniuses who are social misfits aren't completely normal.
  • TheMadFool
    13.8k
    Being x. It's always rather odd to me people want to focus on computer models (computer as model) as representing intelligence or awareness instead of, say, the integrated processes (mind) of an old-growth forest. My style of consciousness and components of mind communicate in a way unimpeachably closer to the minute feedback systems you find in the "cognitive network" of an ecologically complex superorganism (forests). Living compost on a forest floor is far more impressive and complex in its self-awareness than a computer could ever be (interspecies communication requires species; is a computer a species? nope). Yet this is only a small, local slice of what's going on "information-processing"-wise in an organic superorganism, like any robust sylvan environment. Mycelial mats connect plants and trees and search for feedbacks that then determine what they will do in bringing balance to players of the network locally and nonlocally. Mycelial mats can connect thousands of acres of forest in this way. This is very much like a neural network of nature.

    Honestly, taking computers to be intelligent, or most absurdly, at all self-aware, and not nature, tends to gore my ox...so I'm apt to wax too emotional here, but perhaps I'll be back with some cool examples as to why computers cannot be self-aware compared to the far more self-aware "being x" that can be found in nature (of which I'm a part way more than a computer). That is to say, my self-awareness is far more an extension of the order and processes going on in the superorganism of a forest than anything in silicon. We can understand (a priori); computers don't understand anything. We are aware of our limitations, computers are not. Because we are aware of our limitations thanks to nature's gift of metacognition (note I'm not saying a computer's gift of metacognition), we can ask questions about how we are limited, such as the boundaries the subconscious puts on conscious awareness. You can even ask sci-fi questions about computer sentience thanks to nature's vouchsafing of self-awareness. Somehow, self-awareness is a part of having a mind that is informed nonlocally by interminably incomplete information. A machine only has to handle incompleteness according to its implied programming or manufacturing: algorithms and egos are veeery much alike, and both are chokingly narrow-minded, unreasoning. Seeing as the human brain-mind isn't invented or programmed and doesn't do calculations, and that it is likely nonlocally entangled with the universe in a way that remains forever incomplete (unless perhaps in deep sleep or dead), we think about thought and have thought about thinking, emote about emotions and have emotions about emoting: nothing is more sublime than wondering about wonder, however. I wonder if computers could ever wonder? What about the utterly unreasonable idea that a computer could have emotions reliant on programming...laughable. Reminds me of someone, having missed the punchline, laughing at a joke just because he knows it's supposed to be funny.
    Anthony

    Perhaps we've already achieved the greatest thing possible - duplicating rationality - with computers. What remains of our mind, its irrationality, self-awareness, and creativity, aren't as important as we think they are.
  • TheMadFool
    13.8k
    What does it mean to be "completely self aware" as opposed to just self aware?
    tom

    Complete self-awareness would be knowing the position, function and state of every atom within our bodies, and knowing the contents of our subconscious.

    In a way we're not actually free unless we know these things.
  • gurugeorge
    514
    Yeah I think it's two things overlapping. Sociality sets the stage for the development of intelligence, but perhaps, with the neural mechanisms that make for intelligence, beyond a certain point other factors take over and push super-high intelligence out of balance with other factors.

    Like, suppose intelligence evolved to require the co-operation of A, B, C, D, E genes, with the total contributing to intelligence level, and the set being roughly in balance in most people, but then suppose in some people the E factor is much more heavily weighted than the other factors. That would produce a super-high intelligence. But what if the E factor happens to clash with other aspects of the total personality, making the person inhibited or socially inept?

    Another possibility: human beings and animals generally are like Heath Robinson contraptions, stuck together with duct tape, sticks and glue, that "pass muster" in the circumstances they evolved in for the bulk of their evolution, but don't necessarily function so well outside those conditions. For example, sociality in our ancestral environment would have meant knowing, say, about 20 people quite well, and half a dozen really well. What happens when a creature designed for that type of environment is rammed cheek by jowl with millions of strangers in a modern conurbation? Maybe they withdraw into themselves, or whatever.

    Lots of possibilities here; of course one would have to know the science and investigate to figure out what's really going on.
  • BC
    13.5k
    We're NOT computers, I agree. But are we machines, just of a higher order? That's what I want to know.
    TheMadFool

    We are not machines, either. We are organisms, and more, beings. We are born, not manufactured. Our biological design incorporates a billion years of evolution. Life exists without any designing agent: no owners, no designers, no factories, etc. Life is internally directed; machines are made, and have no properties of beings or organisms.

    Machines are our human creations; we like our machines, and identify with the cleverness of their design and operation. Our relationship to the things we make was the subject of myth for the ancient Greeks: Pygmalion, a king of Cyprus who carved and then fell in love with a statue of a woman, which Aphrodite brought to life as Galatea. (Pygmalion is also the name of a play by George Bernard Shaw, which became the musical My Fair Lady--the same theme.) We pour our thoughts into our computers, and they deliver interesting viewing material to us -- none of it comprehended or created by our machine computers.

    That there are "biological mechanisms" like DNA replication, respiration, oxidation, etc. doesn't in any way make us "machines", because "biological mechanisms" is itself a metaphor drawn from machine mechanisms. We're victims of our language here. Because we call the body a machine (levers, pulleys, engines, etc.), it's an easy leap to granting body status to things like office copiers and computers, ships, cars, etc.

    So... No, we are not machines, not computers, not manufactured, not hardware, not software.
  • TogetherTurtle
    353
    Are you saying there is no such thing as consciousness?
    TheMadFool

    In a way, yes. It isn't a tangible thing. Consciousness is more of a culmination of our senses, in a way that makes sense to us and that we can question. Consciousness is just the brain translating for the mind, so to speak. I guess the question really is: how do you know you are conscious? You can think internally; you can see, hear, smell, feel, taste. I would argue a computer can do all of those things through various peripherals, and therefore a computer of sufficient hardware and software capabilities could be conscious. If all else fails, you could build a human brain out of synthetic materials, and I would argue that would be conscious.

    So I guess you are the x. All of your brain cells and your eyes and ears and mouth: they collect information, and that is the illusion. If we had more senses, there would be more of an illusion. All of this information is brought together in the brain, which decides what chemicals to shoot through your body, and what results is consciousness.
  • Arne
    815
    That's not a particularly convincing argument.
    tom

    Touché.

    That is funny.
  • Wayfarer
    22.3k
    I think this is problematical, as I think that 'complete self awareness' of that kind is a logical impossibility.
    — Wayfarer

    I fail to see a contradiction in the idea of complete self-awareness. Think of hunger, thirst, pain, the senses, etc. These sensations are a form of awareness of the chemical and physical states of the body or the environment.

    Why do you think total self-awareness is an impossibility?
    TheMadFool

    This is a difficult point and I'm not claiming that I am correct in what follows. But one of the principles that I have learned from Vedanta is expressed aphoristically as 'the eye cannot see itself, but only another. The hand cannot grasp itself, but only another'. So I take from that that what we are aware of appears to us as an object, or the 'other'. It seems to me to be inherent in the nature of awareness itself.

    Now obviously I can be aware of my internal states, like hunger or lust or depression, and so on. But even in all of those cases, the psyche is the recipient of sensations, like the feeling of hunger, or is thinking about its circumstances, and so on. But the psyche cannot turn its gaze on itself, as it is the subject of experience, not the object of perception. And that subject-object relationship seems fundamental to the nature of awareness.

    There's a wikipedia entry on Kant's Transcendental Apperception which I think comes very close to expressing this same idea:

    Transcendental apperception is the uniting and building of coherent consciousness out of different elementary inner experiences (differing in both time and topic, but all belonging to self-consciousness). For example, the experience of the passing of time relies on this transcendental unity of apperception, according to Kant.

    There are six steps to transcendental apperception:

    1. All experience is the succession of a variety of contents (per Hume).
    2. To be experienced at all, the successive data must be combined or held together in a unity for consciousness.
    3. Unity of experience therefore implies a unity of self.
    4. The unity of self is as much an object of experience as anything is.
    5. Therefore, experience both of the self and its objects rests on acts of synthesis that, because they are the conditions of any experience, are not themselves experienced.
    6. These prior syntheses are made possible by the categories. Categories allow us to synthesize the self and the objects.

    Now, number 5 is crucial here*: we're actually not aware of the 'act of synthesis' which underlies, and indeed comprises, conscious experience; that is what 'the eye not seeing itself' means. Which stands to reason, as I think these correspond to the role of the unconscious and subconscious. That is the process of 'world-making' in which the mind is continually engaged; it is in this sense that reality is 'constructed' by the subliminal activities of consciousness into what appears as a coherent whole. This kind of understanding is characteristic of the philosophy of Kant and Schopenhauer.

    But it also has some similarities with Vedanta and Buddhism, which are also aware of the sense in which 'mind creates world'. But to say that in the context of secular Western culture is invariably to be misunderstood (at least in my experience), as the formative myth of secular culture is the so-called 'mind-independent' nature of the world. What this has lost sight of, precisely, is the role of the mind in the construction of reality. In fact the very idea is taboo (as explained in Alan Watts' 'The Book: On the Taboo Against Knowing Who You Are').

    So - as to whether any intelligence can be 'completely self-aware' - in light of this analysis, it seems unlikely. And in fact I read somewhere not that long ago that it is understood in Eastern Orthodox theology that even God does not know Himself, that He is a complete mystery to Himself (although I suspect I won't be able to find the reference).

    --------
    * I'm not at all sure I agree with 4 but it's not important for this analysis.
  • TheMadFool
    13.8k
    from Vedanta is expressed aphoristically as 'the eye cannot see itself, but only another. The hand cannot grasp itself, but only another'.
    Wayfarer

    Brilliant point. The way I understand this is that each level of what I call existence is separated from the others, and awareness, as in knowledge of, may not be able to cross the boundaries between these levels. For instance, the individual cells in our bodies don't know what ''love'' means. To know what ''love'' means requires different experiences and environments than the cell is exposed to, not to mention machinery for comprehension that the cell lacks.

    That said, I do see a way in which cells may become aware of ''love'' by way of hormones, adrenaline, etc. And the process works in reverse too - cells in a low-glucose environment signal hunger. Of course, ''total self-awareness'' is still a far cry from that.
  • TheMadFool
    13.8k
    In a way, yes. It isn't a tangible thing.
    TogetherTurtle

    I can't make sense of telling myself that I'm an illusion. Are you a Buddhist bringing up anatta here?
  • TheMadFool
    13.8k
    So... No, we are not machines, not computers, not manufactured, not hardware, not software.
    Bitter Crank

    Well, I think we're more like machines than we think. Biology = chemistry + physics.
  • TheMadFool
    13.8k
    Yeah I think it's two things overlapping. Sociality sets the stage for the development of intelligence [...]
    gurugeorge

    :up: thanks
  • TogetherTurtle
    353
    Nope. I don't really know much about Buddhism in general. Maybe two paths have reached the same end? All I know is that we don't see the world exactly as it is. Everything comes together to create a facade. How we examine the world is not the only way to do so, nor is it the most effective. There are many more possible senses than the five we have, and the ones we have are very easy to trick as it is.
  • Eugenio Ullauri
    5
    Short Answer: Software

    Long Answer: Software, and the materials it is made with.
  • TheMadFool
    13.8k
    All I know is that we don't see the world exactly as it is.
    TogetherTurtle

    Is this evidential or just a gut feeling?
  • TheMadFool
    13.8k
    All I know is that we don't see the world exactly as it is.
    TogetherTurtle

    Agreed. I too believe our senses can be deceived or that the picture of the world we create out of them isn't the actual state of affairs. It's like taking a photograph with a camera. We have an image in our hands but it isn't the actual object the image is of.

    Everything comes together to create a facade.
    TogetherTurtle

    As far as I'm concerned there's a limit to illusion. EVERYTHING can't be an illusion, especially our sense of self. In the basic definition of an illusion we need:
    1. an observer A
    2. a real object x
    3. the image (illusion) of the object x, x1

    I can accept 3 but what is undeniable is the existence of the observer A who experiences the illusion x1 of the real object x.

    Are you saying the observer A itself is an illusion? In what sense?

    In the Buddhist context, the self is an illusion because it lacks any permanent existence. The self, according to Buddhism, is a composite "material" and when decomposed into its parts ceases to exist.
  • TogetherTurtle
    353
    Is this evidential or just a gut feeling?
    TheMadFool

    It is evidential to some extent. I apologize if I didn't make it clear before, but I don't believe that nothing exists. My thinking is more that how we view existing objects is arbitrary.

    As far as I'm concerned there's a limit to illusion. EVERYTHING can't be an illusion, especially our sense of self.
    TheMadFool

    I agree with this. When I said everything, I meant, more precisely, every way we experience the world. Your sense of hearing, for instance, can be tricked by focused, weak sound waves. That is what you are experiencing when you put on headphones. While no one else can hear your music or audiobook or other media, you hear it as if the performer were in the room with you. This, of course, is not the case, and other senses verify that. Therefore, it is very possible some things in the natural world go unnoticed because we can't sense them. What we sense is very selective, labeled arbitrarily, and subject to trickery.

    I may in time take an interest in the Buddhist view on this subject. For a religion, they have a strangely materialistic view of the concept of a soul.
  • aporiap
    223
    Yes. That can be correctly classified as some level of self-awareness. This leads me to believe that most of what we do - walking, talking, thinking - can be replicated in machines (much like worms or insects). The most difficult part is, I guess, imparting sentience to a machine. How does the brain do that? Of course, that's assuming it's better to have consciousness than not. This is still controversial in my opinion. Self-awareness isn't a necessity for life and I'm not sure if the converse is true or not.
    Hmm, I would think self-awareness comes part and parcel with some level of sentience. I think a robot that can sense certain stimuli - e.g. light, color, and their spatial distribution in a scene - and can use that information to inform goal-directed behavior must have some form of sentience. They must hold some representation of the information in order to manipulate it and use it for goal-based computations, and they must have some representation of their own goals. All of that (i.e. having a working memory of any sort) presupposes sentience.

  • Heiko
    519
    They must hold some representation of the information in order to manipulate it and use it for goal-based computations, and they must have some representation of their own goals.
    aporiap
    The AIs whose construction is inspired by the human brain are merely a bunch of matrices chained together, resulting in a map from an input to an output: m(X) = Y. These get trained (in supervised learning, at least) by supplying a set of desired (X, Y) tuples and using some mathematical algorithm to tweak the matrices towards producing the right Y values for the Xes. Once the training sets are handled sufficiently well, chances are good it will produce plausible outputs for new Xes.
    The point here is: those things just "work" - not meaning that they work well, but that the whole idea of the concept is not to implement specific rules but just to train a "black box" that solves the problem.
    Mathematically, such AIs separate the input space with planes, encircling regions for which certain results are to be produced.
    These things do not exactly have a representation of their goals - they are that representation.
    One cannot exactly forecast how such an AI develops without stopping alteration of the matrices at some point: the computation that would be needed to do this is basically said development of the AI itself.
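
    To make that concrete, here is a minimal sketch of such a chain of matrices trained on desired (X, Y) tuples. This is Python with numpy; the data, layer sizes and learning rate are purely illustrative and not taken from any particular system:

    import numpy as np

    # Toy training set: four desired (X, Y) tuples (here: XOR).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    Y = np.array([[0], [1], [1], [0]], dtype=float)

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(2, 4))  # first matrix in the chain
    W2 = rng.normal(size=(4, 1))  # second matrix in the chain

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def m(X):
        # The whole "AI": matrices chained together, mapping input to output.
        return sigmoid(sigmoid(X @ W1) @ W2)

    for step in range(5000):
        H = sigmoid(X @ W1)            # forward pass
        P = sigmoid(H @ W2)
        E = P - Y                      # error against the desired Y values
        dP = E * P * (1 - P)           # the mathematical algorithm (backprop)
        dH = (dP @ W2.T) * H * (1 - H)
        W2 -= 0.5 * H.T @ dP           # tweak the matrices towards
        W1 -= 0.5 * X.T @ dH           # producing the right Y values

    print(np.round(m(X), 2))  # should now approximate Y (training can occasionally stall)

    Note that nothing in the program represents its goal explicitly; after training, the "goal" exists only as whatever values the matrices have settled into.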
  • aporiap
    223
    The AIs whose construction is inspired by the human brain are merely a bunch of matrices chained together, resulting in a map from an input to an output: m(X) = Y. These get trained (in supervised learning, at least) by supplying a set of desired (X, Y) tuples and using some mathematical algorithm to tweak the matrices towards producing the right Y values for the Xes. Once the training sets are handled sufficiently well, chances are good it will produce plausible outputs for new Xes.
    Heiko
    Isn't this true for only a subset of AIs? I'm unsure if this is how, for example, a self-navigating, walking Honda robot works, or the C. elegans worm model, etc. And even in these cases, there is still a self-monitoring mechanism at play -- the optimizing algorithm. While 'blind' and not conventionally assumed to involve 'self-awareness', I'm saying this counts -- it's a system which monitors itself in order to modify or inform its own output. Fundamentally, the brain is the same, just scaled up, in the sense that there are multiple self-monitoring, self-modifying blind mechanisms working in parallel.
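
    A minimal, hypothetical sketch of what I mean (Python; the one-parameter "system", the single data point and the step size are all invented for illustration):

    # A system whose entire state is one number, w. The optimizing
    # algorithm "blindly" measures the system's own error and uses
    # that measurement to modify the system itself.
    x, y = 3.0, 6.0   # one data point; implicitly, the goal is w * x == y
    w = 0.0           # the system's state

    for step in range(100):
        error = w * x - y      # monitor: compare own output with the goal
        w -= 0.1 * error * x   # modify: adjust own state to shrink the error

    print(w)  # approaches 2.0, with no explicit representation of "2.0" anywhere

    Nothing in that loop knows it has a goal, yet it measures and corrects its own state - that is the sense of "self-monitoring" I'm claiming.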

    These things do not exactly have a representation of their goals - they are that representation.
    Heiko
    They have algorithms which monitor their goals and their behavior directed toward their goals, no? So then they cannot merely be the representation of their goals.
  • Heiko
    519
    Isn't this true for only a subset of AIs? I'm unsure if this is how, for example, a self-navigating, walking Honda robot works, or the C. elegans worm model, etc.
    aporiap
    Sure, there are other methods. But the ones that are derived from the functioning of the human brain, which generally means interconnected neurons passing on signals, are usually expressed that way.

    They have algorithms which monitor their goals and their behavior directed toward their goals, no?
    aporiap
    The whole program is written to fulfill a certain purpose. How should it monitor that?
  • aporiap
    223
    Sure, there are other methods. But the ones that are derived from the functioning of the human brain, which generally means interconnected neurons passing on signals, are usually expressed that way.
    Heiko
    I still think neural networks can be described as self-monitoring programs - they modify their output in a goal-directed way in response to input. There must be learning rules operating in which the network takes into account its present state and determines how that state compares to the more optimal state it is trying to achieve. I think that comparison-and-learning process is an example of self-monitoring and modification.

    The whole program is written to fulfill a certain purpose. How should it monitor that?
    Heiko

    I was wrong to say it monitors its own goals; rather, it monitors its own state with respect to its own goals. Still, there is such a thing as multi-task learning, and forms of AI that can do multi-task learning can hold representations of goals.
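
    As a rough sketch of that last point (Python with numpy; the two "tasks" and all the numbers are invented), a multi-task network keeps a separate readout per goal on top of a shared representation:

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(64, 3))                # shared input
    Y_a = X @ np.array([[1.0], [0.0], [0.0]])   # invented goal/task A
    Y_b = X @ np.array([[0.0], [1.0], [-1.0]])  # invented goal/task B

    W_shared = 0.1 * rng.normal(size=(3, 8))    # shared representation
    W_a = 0.1 * rng.normal(size=(8, 1))         # head holding "goal A"
    W_b = 0.1 * rng.normal(size=(8, 1))         # head holding "goal B"

    lr = 0.01
    for step in range(5000):
        H = X @ W_shared      # the network's present state for this input
        E_a = H @ W_a - Y_a   # error with respect to goal A
        E_b = H @ W_b - Y_b   # error with respect to goal B
        # The shared weights are pushed by both goals at once...
        W_shared -= lr * X.T @ (E_a @ W_a.T + E_b @ W_b.T) / len(X)
        # ...while each head is updated against its own goal.
        W_a -= lr * H.T @ E_a / len(X)
        W_b -= lr * H.T @ E_b / len(X)

    print(np.mean(E_a ** 2), np.mean(E_b ** 2))  # both task errors should have shrunk

    Whether those weight matrices amount to "representations of goals" in any richer sense is, I suppose, the open question.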