Or I'm wrong. There are only two possibilities, right? What do you think? — TogetherTurtle
However, I gave an outline of various hints and arguments that this is indeed the case. There is a computational and epistemological argument that they cannot know anything beyond what they are programmed to know, and they are not programmed to be self-aware or other-aware, because they, lacking appropriate hardware, cannot be. — tom
For some reason we find the notion that animals don't suffer horrifying, when it is in fact a blessing. — tom
I think this is problematical, as I think that 'complete self awareness' of that kind is a logical impossibility. — Wayfarer
We are not like computers, at all. — Bitter Crank
Why, you experience the illusion of course. — TogetherTurtle
Being x. It's always rather odd to me that people want to focus on computer models (computer as model) as representing intelligence or awareness instead of, say, the integrated processes (mind) of an old-growth forest. My style of consciousness and components of mind communicate in a way unimpeachably closer to the minute feedback systems you find in the "cognitive network" of an ecologically complex superorganism (forests). Living compost on a forest floor is far more impressive and complex in its self-awareness than a computer could ever be (interspecies communication requires species; is a computer a species? Nope). Yet this is only a small, local slice of what's going on "information-processing"-wise in an organic superorganism, like any robust sylvan environment. Mycelial mats connect plants and trees and search for feedbacks that then determine what they will do in bringing balance to players of the network, locally and nonlocally. Mycelial mats can connect thousands of acres of forest in this way. This is very much like a neural network of nature.
Honestly, taking computers to be intelligent, or most absurdly, at all self-aware, and not nature, tends to gore my ox... so I'm apt to wax too emotional here, but perhaps I'll be back with some cool examples as to why computers cannot be self-aware compared to the far more self-aware "being x" that can be found in nature (of which I'm a part way more than a computer is). That is to say, my self-awareness is far more an extension of the order and processes going on in the superorganism of a forest than anything in silicon. We can understand (a priori) that computers don't understand anything. We are aware of our limitations; computers are not. Because we are aware of our limitations thanks to nature's gift of metacognition (note I'm not saying a computer's gift of metacognition), we can ask questions about how we are limited, such as the boundaries the subconscious puts on conscious awareness. You can even ask sci-fi questions about computer sentience thanks to nature's vouchsafing of self-awareness. Somehow, self-awareness is part of having a mind that is informed nonlocally by interminably incomplete information. A machine only has to handle incompleteness according to its implied programming or manufacturing: algorithms and egos are very much alike, and both are chokingly narrow-minded, unreasoning. Seeing as the human brain-mind isn't invented or programmed and doesn't do calculations, and as it is likely nonlocally entangled with the universe in a way that remains forever incomplete (unless perhaps in deep sleep, or dead), we think about thought and have thoughts about thinking, emote about emotions and have emotions about emoting: nothing is more sublime than wondering about wonder, however. I wonder if computers could ever wonder? What about the utterly unreasonable idea that a computer could have emotions reliant on programming... laughable. Reminds me of someone, having missed the punchline, laughing at a joke just because he knows it's supposed to be funny. — Anthony
What does it mean to be "completely self aware" as opposed to just self aware? — tom
We're NOT computers, I agree. But are we machines, just of a higher order? That's what I want to know. — TheMadFool
Are you saying there is no such thing as consciousness? — TheMadFool
I think this is problematical, as I think that 'complete self awareness' of that kind is a logical impossibility. — Wayfarer
I fail to see a contradiction in the idea of complete self-awareness. Think of hunger, thirst, pain, the senses, etc. These sensations are a form of awareness of the chemical and physical states of the body or the environment.
Why do you think total self-awareness is an impossibility? — TheMadFool
Transcendental apperception is the uniting and building of coherent consciousness out of different elementary inner experiences (differing in both time and topic, but all belonging to self-consciousness). For example, the experience of the passing of time relies on this transcendental unity of apperception, according to Kant.
There are six steps to transcendental apperception:
1. All experience is the succession of a variety of contents (per Hume).
2. To be experienced at all, the successive data must be combined or held together in a unity for consciousness.
3. Unity of experience therefore implies a unity of self.
4. The unity of self is as much an object of experience as anything is.
5. Therefore, experience both of the self and its objects rests on acts of synthesis that, because they are the conditions of any experience, are not themselves experienced.
6. These prior syntheses are made possible by the categories. Categories allow us to synthesize the self and the objects.
The same point from Vedanta is expressed aphoristically as 'the eye cannot see itself, but only another; the hand cannot grasp itself, but only another'. — Wayfarer
In a way, yes. It isn't a tangible thing. — TogetherTurtle
So... No, we are not machines, not computers, not manufactured, not hardware, not software. — Bitter Crank
Yeah, I think it's two things overlapping. Sociality sets the stage for the development of intelligence, but perhaps, given the neural mechanisms that make for intelligence, beyond a certain point other factors take over and produce super-high intelligence that is out of balance with the rest of the personality.
Like, suppose intelligence evolved to require the co-operation of A, B, C, D, E genes, with the total contributing to intelligence level, and the set being roughly in balance with most people, but then suppose in some people the E factor is much more heavily weighted than the other factors. That would produce a super-high intelligence. But what if the E factor happens to clash with other aspects of the total personality, making the person inhibited or socially inept?
Another possibility: human beings and animals generally are like these Heath Robinson contraptions, stuck together with duct tape, sticks and glue, that "pass muster" in the circumstances they evolved in for the bulk of their evolution, but don't necessarily function so well outside those conditions. For example, sociality in our ancestral environment would have meant knowing, say, about 20 people quite well, and half a dozen really well. What happens when a creature designed for that type of environment is rammed cheek by jowl with millions of strangers in a modern conurbation? Maybe they withdraw into themselves, or whatever.
Lots of possibilities here, of course one would have to know the science and investigate to figure out what's really going on. — gurugeorge
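To make gurugeorge's A-E sketch concrete, here is a toy weighted-sum model in Python. The genes, weights, and scores are invented purely for illustration, not anything from the genetics literature.

```python
# Toy model: intelligence as a weighted sum of five genetic factors.
# All names and numbers here are invented for illustration only.
weights = {"A": 1.0, "B": 1.0, "C": 1.0, "D": 1.0, "E": 1.0}

balanced = {"A": 5, "B": 5, "C": 5, "D": 5, "E": 5}
skewed = {"A": 5, "B": 5, "C": 5, "D": 5, "E": 20}  # E heavily weighted

def intelligence(genes):
    return sum(weights[g] * genes[g] for g in genes)

print(intelligence(balanced))  # 25
print(intelligence(skewed))    # 40: a higher total, but the E factor now
                               # dwarfs the rest, the hypothesized clash
```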
All I know is that we don't see the world exactly as it is. — TogetherTurtle
Everything comes together to create a facade — TogetherTurtle
Is this evidential or just a gut feeling? — TheMadFool
It is evidential to some extent. I apologize if I didn't make it clear before, but I don't believe nothing exists. I'm more along the lines of thinking that how we view existing objects is arbitrary.
As far as I'm concerned there's a limit to illusion. EVERYTHING can't be an illusion, especially our sense of self. — TheMadFool
Hmm, I would think self-awareness comes part and parcel with some level of sentience. I think a robot that can sense certain stimuli - e.g. light, color, and their spatial distribution in a scene - and can use that information to inform goal-directed behavior must have some form of sentience. They must hold some representation of the information in order to manipulate it and use it for goal-based computations, and they must have some representation of their own goals. All of that (i.e. having a working memory of any sort) presupposes sentience. — aporiap
Yes. That can be correctly classified as some level of self-awareness. This leads me to believe that most of what we do - walking, talking, thinking - can be replicated in machines (much like worms or insects). The most difficult part is, I guess, imparting sentience to a machine. How does the brain do that? Of course, that's assuming it's better to have consciousness than not. This is still controversial in my opinion. Self-awareness isn't a necessity for life, and I'm not sure if the converse is true or not.
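A minimal sketch may help fix ideas here. The toy "robot" below (the LightSeeker class is hypothetical, invented just for this illustration) senses a stimulus, holds a representation of it in working memory alongside a representation of its own goal, and uses both to choose an action. Whether such a loop amounts to any form of sentience is exactly the question under discussion.

```python
# Hypothetical toy agent: holds a representation of a stimulus and of its
# own goal, and acts in a goal-directed way on that basis.

class LightSeeker:
    def __init__(self, goal_brightness=0.8):
        self.goal = goal_brightness  # representation of its own goal
        self.memory = None           # working memory of the last stimulus

    def sense(self, brightness):
        self.memory = brightness     # store a representation of the scene

    def act(self):
        # Goal-directed behavior: compare the remembered stimulus
        # against the goal and choose an action accordingly.
        if self.memory is None:
            return "wait"
        return "stay" if self.memory >= self.goal else "move_toward_light"

robot = LightSeeker()
robot.sense(0.3)
print(robot.act())  # -> "move_toward_light"
```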
They must hold some representation of the information in order to manipulate it and use it for goal-based computations, and they must have some representation of their own goals. — aporiap
The AIs whose construction is inspired by the human brain are merely a bunch of matrices chained together, resulting in a map from an input to an output: m(X) = Y. These get trained (in supervised learning, at least) by supplying a set of desired (X, Y) tuples and using some mathematical algorithm to tweak the matrices towards producing the right Y values for the Xes. Once the training sets are handled sufficiently well, chances are good it will produce plausible outputs for new Xes.
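For readers who want the "bunch of matrices chained together" spelled out, here is a minimal sketch in Python with NumPy. It assumes the simplest possible case, a purely linear chain of two matrices trained by plain gradient descent on squared error; real networks interpose nonlinearities, but the supply-(X, Y)-pairs-and-tweak loop has the same shape.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two chained matrices: m(X) = X @ W1 @ W2, a map from inputs to outputs.
W1 = rng.normal(size=(3, 4)) * 0.5
W2 = rng.normal(size=(4, 1)) * 0.5

# A training set of desired (X, Y) pairs; here Y is just the sum of X's columns.
X = rng.normal(size=(100, 3))
Y = X.sum(axis=1, keepdims=True)

lr = 0.05
for step in range(1000):
    H = X @ W1
    Y_hat = H @ W2                           # current outputs for the training Xes
    err = Y_hat - Y                          # gap from the desired Y values
    grad_W2 = H.T @ err / len(X)             # how to tweak each matrix so the
    grad_W1 = X.T @ (err @ W2.T) / len(X)    # squared error shrinks
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

X_new = rng.normal(size=(2, 3))              # new Xes the map has never seen
print(X_new @ W1 @ W2)                       # ≈ X_new.sum(axis=1)
```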
The AIs whose construction is inspired by the human brain are merely a bunch of matrices chained together, resulting in a map from an input to an output: m(X) = Y. These get trained (in supervised learning, at least) by supplying a set of desired (X, Y) tuples and using some mathematical algorithm to tweak the matrices towards producing the right Y values for the Xes. Once the training sets are handled sufficiently well, chances are good it will produce plausible outputs for new Xes.
Isn't this true for only a subset of AIs? I'm unsure if this is how, for example, a self-navigating, walking Honda robot works, or the c. elegans worm model, etc. And even in these cases, there is still a self-monitoring mechanism at play -- the optimizing algorithm. While 'blind' and not conventionally assumed to involve 'self-awareness', I'm saying this counts -- it's a system which monitors itself in order to modify or inform its own output. Fundamentally, the brain is the same, just scaled up, in the sense that there are multiple self-monitoring, self-modifying blind mechanisms working in parallel. — aporiap
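By way of contrast, here is a hedged sketch of the other family aporiap gestures at: a hand-designed feedback controller, in the loose spirit of simple balance or reflex loops (not actual Honda or OpenWorm code). No matrices were trained on (X, Y) pairs here, yet the loop still monitors its own output and modifies its next action in response.

```python
# A fixed, hand-written control rule: no training, no learned matrices.
def balance_controller(tilt, prev_correction):
    """Return a motor correction that opposes the sensed tilt."""
    return -0.5 * tilt + 0.1 * prev_correction  # designed, not learned

tilt, correction = 1.0, 0.0
for step in range(5):
    correction = balance_controller(tilt, correction)  # monitor own state
    tilt += correction                    # acting changes what is sensed next
    print(f"step {step}: tilt = {tilt:+.3f}")  # tilt shrinks toward 0
```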
These things do not exactly have a representation of their goals - they are that representation.
They have algorithms which monitor their goals and their behavior directed toward their goals, no? So then they cannot merely be the representation of their goals. — aporiap
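A toy example shows what each side might mean. During training the goal exists explicitly, as a loss function the algorithm monitors; in the deployed model nothing but the weights remains, so the goal survives only implicitly, baked into the numbers. (An illustrative linear-regression sketch; the data and loss here are assumptions, not anyone's actual system.)

```python
import numpy as np

# During training, the "goal" exists explicitly as a loss function...
def loss(w, X, y):
    return np.mean((X @ w - y) ** 2)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
y = X @ np.array([2.0, -1.0])   # the mapping we want the model to learn

w = np.zeros(2)
for _ in range(300):
    grad = 2 * X.T @ (X @ w - y) / len(X)  # monitors the goal via the loss
    w -= 0.1 * grad

print(loss(w, X, y))    # ≈ 0: the training goal has been met
# ...but the deployed model is just these numbers. At this point there is
# no goal object and no loss function anywhere: the goal survives only
# implicitly, in the weights.
print(np.round(w, 3))   # ≈ [ 2. -1.]
print(X[:1] @ w)        # inference makes no reference to any goal
```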
Isn't this true for only a subset of AIs? I'm unsure if this is how, for example, a self-navigating, walking Honda robot works, or the c. elegans worm model, etc. — aporiap
Sure, there are other methods. But the ones that are derived from the functioning of the human brain, which generally means interconnected neurons passing on signals, are usually expressed that way.
They have algorithms which monitor their goals and their behavior directed toward their goals, no? — aporiap
The whole program is written to fulfill a certain purpose. How should it monitor that?
Sure, there are other methods. But the ones that are derived from the functioning of the human brain, which generally means interconnected neurons passing on signals, are usually expressed that way.
The whole program is written to fulfill a certain purpose. How should it monitor that?
I still think neural networks can be described as self-monitoring programs - they modify their output in a goal-directed way in response to input. There must be learning rules operating in which the network takes into account its present state and determines how this state compares to a more optimal state that it's trying to achieve. I think that comparison and learning process is an example of self-monitoring and modification.
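To pin down what such a learning rule looks like, here is a minimal sketch with the "monitor" and "modify" steps marked. It is a toy linear model under plain gradient descent, chosen only for brevity; whether the two commented lines deserve the label "self-monitoring" is the philosophical question, but mechanically this is all there is.

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(size=3) * 0.1          # the network's present state
X = rng.normal(size=(20, 3))
y = X @ np.array([1.0, 2.0, 3.0])     # the more optimal state to approach

for step in range(1000):
    out = X @ w
    error = out - y                   # monitor: compare present output to target
    w -= 0.05 * X.T @ error / len(X)  # modify: adjust itself to close the gap

print(np.round(w, 2))                 # ≈ [1. 2. 3.]
```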