Absolutely not! I, for one, don't come to this forum for conversation. I come to explain and argue, and in the process, I clarify what I think, to myself! Reading is one thing, writing is another, and the latter is where the real work lies. — Constance
First, do you accept that when you observe something, you actually are observing it, and this is not reducible in the way the model indicates? — Constance
But are you reading what I wrote? — Constance
the assumption of a physicalist conception of the world as foundational provides NO epistemic extension by which other worlds could be anything more than the one localized body of events. Knowledge connections: this is what needs to be shown. — Constance
How does the physicalist model of a brain manage to "get to anything out there"? And, as I said, even the concept of physicalism itself now has to be understood with exactly this delimitation. How is it that when I am gazing at my uncle, the entire affair is not reducible to brain events? — Constance
How is it that existence itself is not just a unique brain event; that when you ponder existence and other sweeping, all-inclusive terms, you are not just making a statement that is entirely conceived within a brain matrix, such that everything you can imagine is just this and nothing else? You would have to have an independent theory of experience, then, apart from what science can observe and think about. — Constance
How would it be that this three and a half pound greyish "thing" produces ideas that are "about" something else? — Constance
This is what is required, and of course, you can say that science is an experimental/theoretical work in progress that will one day unlock the secrets of epistemic relationships, but this will have to include a dramatic reconception of what it means for a thing to be physical; some new "law of epistemic connectivity" will have to be introduced, but note that physicalism does not have this at all! — Constance
You may not want to restrict yourself to this model, and I say very good. — Constance
how can you extract P from the knowledge conditions that make knowing P possible? — Constance
Once again, does the fact that they cannot be confirmed preclude their existence? If so, how? In what scope? — enqramot
This complaint rests solely on the epistemic deficits of physicalism. — Constance
The argument doesn't care about what the future holds. Either you can tell me what the essential epistemic connectivity is about or you cannot. Again, if you want to include something that physicalism COULD have then you have to make sense of this "could". Otherwise it is merely empty speculation. — Constance
In order for it to be a heavily filtered version of him, it has to be first shown that it is possible to affirm anything at all of him. How would physicalism make this affirmation, GIVEN all that has been said above? (Pls don't just ignore all of this and continue to say how outrageous it all sounds. The argument itself has to be dealt with.) — Constance
The supposition that my uncle is there in a world at all is in question, as well as even the supposition that the posited physical brain can be there at all, given the reduction of all affairs to brain events. — Constance
This is a way to present a reductio ad absurdum on the premise that the physical brain is the sole foundation for experience. — Constance
After all, if there is a strict correspondence between brain and mental events such that only the brain can produce these, then the physicality itself of the brain is no longer tenable, since the knowledge of it, too, is now just a brain event. — Constance
but physical brain events cannot be shown to carry epistemic connectivity — Constance
that is, there is nothing in a physical description of relations between objects that can account for epistemic relations — Constance
this means nothing can be confirmed outside of the brain, and therefore exterior events cannot be outside at all — Constance
the axiom that affirms physicality itself is made untenable. — Constance
"Things" did you say? What things? How does a physical brain affirm things, when the logic itself is reduced to brain events, meaning is a brain event, I mean, intuitions, dogs, cats, religion, and the entire human dramatic unfolding are brain events only, according to this model. Why? Because physicality is not epistemic, meaning: looking closely at physical relations, there is nothing that places what is out there, in here. Does the circuitry in my computer, through its camera, "know" the world it "sees"? — Constance
Look at the opacity test: is a brain opaque or transparent to the world? And even if it were some sort of mirrored organ, it would remain 100 billion neurons of dense matrical events, and dense matrical events are not my uncle. — Constance
But I want to point out that it is not that internal, external, real and the rest are obviated by the subsuming internal events of the brain that process all things equally; I mean fine, but it goes further: for even brain processes are not "really" brain processes, because it took a brain process to produce this very notion of brain processes. Nothing at all survives the physicalist model, even the physicalist model. — Constance
Does a dented car fender "know" the offending guard rail? Dented fenders are not brains, of course, but how is it that a brain's complexity qua complexity makes for an epistemic connection? I mean, something "out there" still has to make it "in here". — Constance
I am reminded of Zizek, who defends Hegel and, borrowing from someone else, likens our inability to grasp where Geist is going in future rational possibilities to a program in which there are trees and clouds, but there is nothing in the program that allows for any detail beyond the distant visage. There simply does not exist, in this world, any interior to the trees or to the sun that illuminates the clouds and the like. Such things are therefore "impossible" in this world. — Constance
What stands before me, this visage of my uncle, is just a brain event, and every thought in my head that asks questions like, what brought him here, how did he get here? and so on, is not anything but a program. There are no events. Events are just the way we interpret affairs before us; but there is no "before us" or near or far, or anything at all. Even the thought experiment questioning my uncle's existence is just patterns of complexity in the brain. Nothing at all, and this means everything conceivable, survives this model. — Constance
Contemporary science? But what do they have to do with philosophy? You call Neil De Grasse Tyson, and talk like this, and he will simply give you a condescending sneer. — Constance
But how do you escape the final description of all you say above being brain events only, even, and this is important, the reference to something being a brain event? Talk about my "real" uncle: is such a thing even possible to imagine in good analytically grounded conscience? — Constance
The barn door: The complexity of your computer is not at issue, for first you have to explain the fundamentals: at best, at the level of basic questions (the most basic, which is what philosophy is about) the best you are ever going to get is a causal connection between your computer and the wind howling through its receptors. It could be AI of the highest order, and you would still be routed to this question: how do causal explanations suffice for knowledge claims? Barn doors and howling winds have exactly this same causality "between" them. — Constance
It is not a matter of "what if he is reducible." You are facing here an ineluctable situation, for, as you and I will agree, there are brains and uncles and there is a world. I simply ask, how does the latter get into the former? — Constance
HAHA so ironic... but our posts are based on Lockean empiricism! Not Ancient Greek philosophy! — javi2541997
But let's say you had met my uncle. The assumption in place is that it was my uncle, and that this was not something reducible to interior events inside a three and a half pound mass. My uncle is not IN your brain. He is exterior to this object. Why is it that this object can extend beyond itself and do something like affirm something that is not part of a brain at all? It really is a simple question. I mean, we all know what uncles are, and what brains are.
Why are brains and uncles different regarding this epistemic connection? — Constance
My uncle is not IN your brain. He is exterior to this object. — Constance
It may seem absurd in the case of a barn door, but it isn't so absurd in the case of a computer. In a way, it does "know" certain things and acts upon them. That doesn't make a computer conscious, of course. Sensor-based input can be built into computer systems. This works very much like the unconscious part of our brain; for example, goose bumps appear as an automatic reaction to a drop in temperature. A fully automatic reaction: something in you "knows" how to react.

How absurd is it to say a barn door "knows" what the wind is that howls through its hinges? — Constance
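The sensor-driven "automatic reaction" described here, a system that responds to input without any awareness of it, can be sketched in a few lines. This is a minimal illustration only; the function name and threshold are hypothetical, not anything from the discussion:

```python
# A hypothetical sketch of a purely mechanical, sensor-driven reaction:
# input goes in, a fixed rule fires, a response comes out. Nothing in
# this mapping "knows" what cold is, any more than a barn door knows
# the wind; it is causal connection without epistemic connection.

def reflex(temperature_c: float) -> str:
    """Fixed stimulus-response rule, analogous to goose bumps."""
    if temperature_c < 10.0:          # arbitrary illustrative threshold
        return "goose bumps"          # automatic reaction to cold
    return "no reaction"

print(reflex(4.0))   # -> goose bumps
print(reflex(22.0))  # -> no reaction
```

The point the sketch makes concrete is that the whole exchange is closed: the rule reacts reliably, yet there is nowhere in it for the world to be "known" rather than merely responded to.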
I don't see how this assumption could be proven or disproven. What if he IS reducible? How can you be sure? Have you seen the movie Cast Away with Tom Hanks? His only companion on a desert island was a volleyball that (whom?) he called Wilson. He's reduced to tears when Wilson the volleyball floats away during a storm. A modern, much improved version of Wilson would be LaMDA the chatbot, a machine that fooled a supposed senior software engineer doubling as a priest into believing that it's sentient (maybe he wasn't really fooled, maybe he did it for money, I don't know; he lost his job anyway). So, as you can see, creative interpretation can go a long way. Maybe you add something to your uncle, something that isn't there.

But let's say you had met my uncle. The assumption in place is that it was my uncle, and that this was not something reducible to interior events inside a three and a half pound mass. — Constance
Not necessarily speculation. Without scientific proof we can also "trust" that uncle Sidney is more than just a neural network. We can go with common sense (cultural assumptions) that everyone has an "I"-ness, a selfness, beyond simply being some mechanical binary machine calculating ones and twos (philosophical zombies), because we know we are, and assume similar things (other people) have similar qualities and behave in similar ways. — Benj96
Just a question, and I am sure there is a ready answer; and then, I will be on my way, satisfied that the world is the world. Would someone please tell me why, when I greet my uncle Sidney, I am not "greeting" exclusively (!) systems of neuronal activity?
Troubled sleep over this. — Constance
Immortality and/or eternal youth to me seems like my personal hell. — Benj96
philosophers are not content with mechanical "operational principles" — Gnomon
Philosophy has no "settled questions" — Gnomon
Nevertheless, pragmatic scientists are still working on a Consciousness Meter to update the crude EEGs and somewhat more sophisticated MRIs. They are even using Artificial Intelligence to search for signs of Consciousness in Natural Intelligences that appear to be anaesthetized (unconscious). — Gnomon
The world's religions have very deep roots, going back to very ancient times. Any contemporary religious operation may seem (and actually be) corrupt, but I think it is safe to say that religions didn't begin as a scam.
Humans need some kind of explanation for the world they live in. They need some way to give meaning to their existence, replete with joys and sorrows. If rationality is plentiful, we use rationality. If poetry, myth-making, storytelling, and ritual are plentiful, that's what we use. — Bitter Crank
↪enqramot
You presented a one-dimensional position with no attempt to offer up any other explanations for the existence of ‘religions’.
Your entire argument starts on the assumption that people started religions for purely selfish/nefarious means. That could be true of course, but I see no attempt made to consider any other possibility. Hence, you are arguing/asking from a position of clear bias. — I like sushi
Poor and simplistic analysis based on biases you carry. — I like sushi
The interesting question is as to why they take Trump at his word? What motivates their taking Trump at his word? — Janus
Pffft. Educated villains are all the rage now. — baker
Oh, does it? Or are people just thinking hard how they can capitalise on the backwardness of others? — baker
Consciousness and Sentience are sometimes used interchangeably. But "sentience" literally refers to sensing the environment. And AI can already do that. — Gnomon
Therefore, the philosophical question here is "does a robot (AI) know that it knows"? Is it self-aware? To answer that question requires, not an Operational (scientific) definition, but an Essential (philosophical) explanation. — Gnomon
When an octopus acts as-if it recognizes its image in a mirror, is that just an operational function of sentience, or an essential function of self-awareness? We could debate such rhetorical questions forever. So, I can only say that, like most philosophical enigmas, it's a matter of degree, rather than Yes or No. Some intelligences are more conscious than others. — Gnomon
Your question hinges on your philosophical or technical definition of "Consciousness". Literally, the "-ness" suffix implies that the reference is to a general State or felt Quality (of sentience), not to a specific Thing or definite Quanta (e.g. neurons). In Nature, animated behavior (e.g. seek food, or avoid being food) is presumed to be a sign of minimal sentience, and self-awareness.
AI programs today are able to crudely mimic sophisticated human behaviors, and the common expectation is that the animation & expressions of man-made robots will eventually be indistinguishable from their nature-made makers -- on an "operational level". When that happens, the issue of enslaving sentient (knowing & feeling) beings could require the emancipation of artificial creatures, since modern ethical philosophy has decided that, in a Utopia, all "persons" are morally equal -- on an essential level.
Defining a proper ethical hierarchy is not a new moral conundrum though. For thousands of years, military captives were defined as "slaves", due to their limited freedom in the dominant culture. Since many captives of the ruling power happened to have darker skin, that distinguishing mark came to be definitive. At the same time, females in a male-dominated society, due to their lack of military prowess, were defined as second-class citizens. At this point in time, the social status of AI is ambiguous; some people treat their "comfort robots" almost as-if they are "real" pets or persons. But, dystopian movies typically portray dispassionate artificial beings as the dominant life-form (?) on the planet.
But, how can we distinguish a "real" Person from a person-like Mechanism? That "essential" difference is what Chalmers labeled the "Hard Problem" : to explain "why and how we have qualia or phenomenal experiences". The essence-of-sentience is also what Nagel was groping for in his query "what does it feel like?". Between humans, we take Homo sapiens feelings for granted, based on the assumption of similar genetic heritage, hence equivalent emotions. But the genesis of AI is a novel & unnatural lineage in evolution. So, although robots are technically the offspring of human minds, are they actually kin, or uncanny?
Knowing and Feeling are the operational functions of Consciousness. But Science doesn't do Essences. "If you can't measure it, it ain't real". Yet, a Cartesian solipsist could reply, "If I can't feel it, it ain't real". Therefore, I would answer the OP : that the essential difference between AI behavior and human Consciousness is the Qualia (the immeasurable feeling) of Knowing. Until Cyberneticists can reduce the Feeling-of-Knowing to a string of 1s & 0s, Consciousness will remain essential, yet ethereal. So, if a robot says it's conscious, we may just have to take its expression for evidence. :smile:
Google AI has come to life :
AI ethicists warned Google not to impersonate humans. Now one of Google’s own thinks there’s a ghost in the machine.
https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/
Google's AI is impressive, but it's not sentient. Here's why :
https://www.msnbc.com/opinion/msnbc-opinion/google-s-ai-impressive-it-s-not-sentient-here-s-n1296406 — Gnomon
The trouble is, how do you prove the subject has experiences? I think it likely we will never be able to do a test to tell us what consciousness is. — Down The Rabbit Hole
It is hard to believe, but this theory must be judged in comparison to the other theories of consciousness.
What theories of consciousness are more plausible? — Down The Rabbit Hole
I think the problem here already lies in the premise, that consciousness is a kind of AI. — Manuel
AI is an engineering problem. No need to have a theory of consciousness. — Jackson
AI is here and getting more complex. Again, I don't see the importance of self awareness. — Jackson
Much to debate here, and worthwhile. My short answer is that I think people make consciousness into a fetish. The question is about intelligence and processing information and making new things. — Jackson
Computers don't need to be conscious. I don't see why people make a big deal out of consciousness. — Jackson
I am inclined to think that consciousness is a natural result of complexity. If that's the case, an exact emulation may have to be conscious too. — Down The Rabbit Hole
That's about you, perhaps. It's not the approach most folk would take. — Banno
We're natural beings. Paths of least resistance constitute the regularities – processes – of nature. Making shit up (versus figuring shit out) is a path of least cognitive resistance. (Human) brains are first and foremost survival engines, not "truth" engines; we natural beings are born magical / wishful / group / non thinkers, and for (too) many people no amount of numeracy, literacy or lived experience can exorcize this fetish for fantasy (worship) in denial of facts aka "religion". — 180 Proof
Your post isn't based upon empirical fact, but it is a speculative etiological myth, written to provide you a better understanding of your world. You just did what you objected to the religious as having done, only the stories of the religious are better thought out, more interesting, and filled with more wisdom. — Hanover
You have just built a self interpreting system. As such, it cannot fail, because it is closed. It is a frequent mistake in philosophy. This way you can explain everything with everything: it will just work, always. You can say that the roots of religions are, let’s say, dogs, or tomatoes, or maths. Once the starting idea is made, it will be a matter of moments to build a whole working system. — Angelo Cannata
There is nothing wrong with myth. But you cannot use myth when good explanations exist. — Jackson
If we are a product of the universe, and the universe is not dumb, where does the error of believing in an eternal soul come from? — Angelo Cannata
For me the time to believe something is when there is good evidence for it. So it makes no real difference to the OP's question to me. — Tom Storm