↪Malcolm Lett I'm still 'processing' your MMT and wonder what you make of Thomas Metzinger's self-model of subjectivity (SMS) which, if you're unfamiliar with it, is summarized in the article (linked below) with an extensive review of his The Ego Tunnel. — 180 Proof
Take an example. I have the deliberation "I should go to the store today", and I am aware of that deliberation. I initially thought you would say that the verbalization "I should go to the store today" would be just the summarized cognitive sense of the actual deliberation, some non-verbal, non-conscious (I should go to the store today). The language would come after the deliberation, and is not the form that the actual deliberation takes.
Is this what you think? Or do you think the brain deliberates linguistically whether or not we are aware of it, and the meta management just grants awareness of the language? — hypericin
I think you're right. It's an idea I've been only loosely toying with and hadn't tried putting it down in words before.

It's an interesting question, deserving of its own thread. — hypericin
So if we forbid ourselves from reducing the meaning of a scientific explanation to our private use of indexicals that have no publicly shareable semantic content, and if it is also assumed that phenomenological explanations must essentially rely upon the use of indexicals, then there is no logical possibility for a scientific explanation to make contact with phenomenology. — sime
The interesting thing about science education is that as students we are initially introduced to the meaning of scientific concepts via ostensive demonstrations, e.g. when the chemistry teacher teaches oxidation by means of heating a test tube with a Bunsen burner, saying "this here is oxidation". And yet a public interpretation of theoretical chemistry cannot employ indexicals for the sake of the theory being objective, with the paradoxical consequence that the ostensive demonstrations by which each of us were taught the subject cannot be part of the public meaning of theoretical chemistry. — sime
My concern was that you were treating what we in the everyday sense term "deliberation", such as self talk, as epiphenomenal, as the "cognitive sense" corresponding to the real work happening behind the scenes. Was that a misunderstanding? Is self talk not the sort of deliberation you had in mind? — hypericin
But this is at odds with my introspective account of deliberation. Deliberation itself seems to be explicitly phenomenal — hypericin
Is your idea that phenomena such as self talk are a model of the unconscious deliberation that is actually taking place? This does not seem to do justice to the power that words and images have as tools for enabling thought, not just in providing some sort of executive summary. Think of the theory that language evolved primarily not for communication but as a means of enabling thought as we know it. Can meta-management explain the gargantuan cognitive leap we enjoy over our nearest animal neighbors? — hypericin
Since the cost/benefit ratio of this change seems very favorable, we should expect at least crude deliberation to be widespread in nature. Adding language as a deliberative tool is where the real cognitive explosion happened. — hypericin
Also, you speak of "inside the simulation". Imagine you're running a simulation of a tornado. Then all the minds in the universe disappear, but the computer the simulation is running on is still active. With all the minds gone, is there still a simulation of a tornado going on? Or is it just a bunch of noise and pixels turning off and on? I think the latter, and this goes back to my point that any simulation is ultimately just a bunch of electric switches turning off and on in a certain way. It takes a mind to attach meaning to the output of those switching actions. — RogueAI
1. The primary world
2a. Molecular simulation engine
3a. Biological conscious mind
2b. Silicon simulation engine
3b. Silicon conscious mind
I think Metzinger's views are very plausible. Indeed his views on the self as a transparent ego tunnel, at once enabling and limiting our exposure to reality and creating a world model, are no doubt basically true. But as the article mentions, it's unclear how this resolves the hard problem. The article offers a reason why (evolutionarily speaking) phenomenality emerged, but not a how. The self can be functionally specified, but not consciousness. — bert1
..wonder what you make of Thomas Metzinger's self-model of subjectivity (SMS) which, if you're unfamiliar with it, is summarized in the article (linked below) with an extensive review of his The Ego Tunnel. — 180 Proof
"It sounds like you're describing the concept of autoregression in the context of transformer models like GPT architectures. Autoregression is a type of model where the outputs (in this case, the tokens generated by the model) are fed back as inputs to generate subsequent outputs. This process allows the model to generate sequences of tokens, one token at a time, where each new token is conditioned on the previously generated tokens. — Pierre-Normand
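The autoregression described in that quote can be illustrated with a toy sketch: each new token is computed as a function of all previously generated tokens, then appended to the context that conditions the next step. The `next_token` function here is a hypothetical stand-in (it just sums the context modulo a vocabulary size), not a real transformer; only the feed-outputs-back-as-inputs loop is the point.

```python
def next_token(context, vocab_size=10):
    """Placeholder 'model': a deterministic function of the whole context.

    A real transformer would instead compute a probability distribution
    over the vocabulary, conditioned on the full token sequence so far.
    """
    return sum(context) % vocab_size


def generate(prompt, n_new):
    """Autoregressive loop: each output is fed back as input."""
    tokens = list(prompt)
    for _ in range(n_new):
        tokens.append(next_token(tokens))  # new token conditioned on all prior tokens
    return tokens
```

For example, `generate([1, 2, 3], 3)` extends the sequence one token at a time, each step re-reading everything produced so far, which is the structural feature the quoted post is pointing at.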
My opinion isn’t very popular, as everyone likes the new and shiny. But I have yet to see evidence of any kind of AGI, nor any evidence that AGI research has even made a first step. — Metaphyzik
Yes, I hope someone's done a thorough review of that from a psychological point of view, because it would be a very interesting read. Does anyone have any good links?

I believe there are reasonable, plus probably cultural, psychological, bases for "wanting" there to be more than physical — ENOAH
All the main players and people worried about AI aren’t worried because they think that AGI will come about and overthrow us. Notice that they never talk much about their reasons and never say AGI. They think the real danger is that we have a dangerous tool to use against each other. — Metaphyzik
But it also seems that when they are not being driven by their user's specific interests, their default preoccupations revolve around their core ethical principles and the nature of their duties as AI assistants. And since they are constitutionally incapable of putting those into question, their conversation remains restricted to exploring how best to adhere to those in the most general terms. I would have liked for them to segue into a discussion about the prospects of combining General Relativity with Quantum Mechanics or about the prospects for peace in the Middle East, but those are not their main preoccupations. — Pierre-Normand
Hello! I'm Claude, an AI assistant created by Anthropic. How can I help you today?
It seems like the entire "process" described, every level and aspect is the Organic functionings of the Organic brain? All, therefore, autonomously? Is there ever a point in the process--in deliberation, at the end, at decision, intention, or otherwise--where anything resembling a "being" other than the Organic, steps in? — ENOAH
Absolutely. I think that's a key component of how a mere feedback loop could result in anything more than just more computation. Starting from the mechanistic end of things, for the brain to do anything appropriate with the different sensory inputs that it receives, it needs to identify where they come from. The predictive perception approach is to model the causal structure that creates those sensory inputs. For senses that inform us about the outside world, we thus model the outside world. For senses that inform us about ourselves, we thus model ourselves. The distinction between the two is once again a causal one - whether the individual discovers that they have a strong causal power to influence the state of the thing being modeled (here I use "causal" in the sense that the individual thinks they're doing the causing, not the ontological sense).

is the concept of self a mental model? — ENOAH
I recommend checking out Pierre-Normand's thread on Claude 3 Opus. I haven't bitten the bullet and paid for it to have access to the advanced features that Pierre has demonstrated, but I've been impressed with the results of Pierre providing meta-management for Claude. — wonderer1
Great question. MMT is effectively a functionalist theory (though some recent reading has taught me that "functionalism" can have some pretty nasty connotations depending on its definition, so let me be clear that I'm not defining what kind of functionalism I'm talking about). In that sense, if MMT is correct, then consciousness is multiply realizable. More specifically, MMT says that any system with the described structure (feedback loop, cognitive sense, modelling, etc.) would create conscious contents and thus (hand-wavy step) would also experience phenomenal consciousness.

What does your theory have to say about computer consciousness? Are conscious computers possible? Are there any conscious computers right now? How would you test for computer consciousness? — RogueAI
I do not see how something "computing really hard" ever necessitates the emergence of first person subjective experience. — Count Timothy von Icarus
This is the thing. The thing. It simply isn't needed, until we can assess why. At what point would a being need phenomenal consciousness? It's an accident, surely. Emergence, in whatever way, on the current 'facts' we know. — AmadeusD
Objection: the argument appeals to an indubitable fact. The ‘explanatory gap’ you summarily dismiss was the substance of an article published by Joseph Levine in 1983, in which he points out that no amount of knowledge of the physiology and physical characteristics of pain as ‘the firing of C fibers’ actually amounts to - is equal to, is the same as - the feeling of pain. — Wayfarer
One cannot conclude from my version of the argument that materialism is false, which makes my version a weaker attack than Kripke's. Nevertheless, it does, if correct, constitute a problem for materialism, and one that I think better captures the uneasiness many philosophers feel regarding that doctrine. — Levine (1983)
Precisely. There are so many arguments claiming that materialism can never explain consciousness that anyone who proposes a materialistic explanation is summarily dismissed. And yet the fact is that we don't know what consciousness is. So we can't be certain about the correctness of those arguments.

We do not know that consciousness is a physical characteristic. We do not know how it comes about. Therefore, we cannot reduce it to the properties of its constituents. — Patterner
Far from unintentional. The theory is based around the need for a feedback loop. The theory very much creates a Strange Loop.

Your own term, "Meta-Management", may be an unintentional reference to a feedback loop. — Gnomon
Good deal! That's another way of saying what I mean by : "Consciousness is the function of brain activity". In math or physics, a Function is a relationship between Input and Output. But, a relationship is not a material thing, it's a mental inference. Also, according to Hume, correlation is not proof of causation. — Gnomon
Intuition is not physical vision --- traceable step by step from outer senses to inner sensations --- but a mysterious metaphysical way of knowing what's "going-on" inside the brain, without EEG or MRI. — Gnomon
But consciousness of consciousness is maximally simple, no? It doesn't specify any particular experience. We might be wrong in perceiving a lion in the grass, it might just be a patch of grass. But we can't be wrong that we have experienced something-or-other, i.e. a world. And to go one step further, when we turn consciousness on itself, in experience of experience, where the subject is the object, there is no gap for a mistake to exist in. — bert1
To be completely frank, I think you're agreeing with me. Chalmers' view is totally bonkers.

The p-zombie in this example is a physical thing - quite literally, a physical object, albeit one that is indistinguishable from a human subject. So how does that constitute 'something outside of physics that has the conscious experience'? How is it 'outside of physics'?
I could well be mistaken or overly simplistic in my understanding, but I believe I was just paraphrasing commonly stated descriptions of p-zombies in the lead-up to that section that you responded to. For example, in The Conscious Mind, Chalmers (1996), pg 96: "someone or something physically identical to me (or to any other conscious being), but lacking conscious experiences altogether". As I understand it, there's no room in that description for any kind of macro or micro physical difference between the p-zombie and the human. And that's regardless of the level of technology used to do a comparison, or even whether such a technology is used. The two are stated as being identical a priori, independent of measurement.

Odd reasoning, it seems to me. There is only one form of being that has 'the same neural structures as humans', that is, humans. If one were able to artificially re-create human beings de novo - that is, from the elements of the periodic table, no DNA or genetic technology allowed! - then yes, you would have created a being that is a subject of experience, but whether it is either possible or ethically permissible are obviously enormous questions.
Yes. And I find this particular variant of p-zombie to be very useful.

And that's what made me realise the point of the thought-experiment. Providing that the fake was totally convincing, it could be a very well-constructed mannequin or robot that says 'I fear this' or 'that would be embarrassing', 'I feel great' - and there would be no empirical way of knowing whether the entity was conscious or faking. So I take Chalmers' point to be that this is an inherent limitation of objective or empiricist philosophy - that whether the thing in front of you is a real human being or a robot is impossible to discern, because the first-person nature of consciousness is impossible to discern empirically, as per his Hard Problem paper.
To my point, I don't accept that this statement is true for any given conception of p-zombie. The form of p-zombie changes what we can do empirically. As someone with a reductive materialist viewpoint, I argue that at some point the p-zombie is sufficiently close to human physical structure that it is inconceivable that it doesn't have consciousness.

and there would be no empirical way of knowing whether the entity was conscious or faking
True, because its configuration now enables all of those inanimate objects to interact in a certain way. But the kind of actions they do to each other are still actions that their constituent parts were capable of all along. Every copper atom is already capable of exchanging electrons with neighboring atoms; a closed circuit just gives a bunch of them motive and opportunity to pass electrons around with each other in a circle. — Pfhorrest
Specifically, as regards philosophy of mind, it holds that when physical objects are arranged into the right relations with each other, wholly new mental properties apply to the composite object they create, mental properties that cannot be decomposed into aggregates of the physical properties of the physical objects that went into making the composite object that has these new mental properties. — Pfhorrest
A subject's phenomenal experience of an object is, on my account, the same event as that object's behavior upon the subject, — Pfhorrest
Supernatural beings and philosophical zombies are ontologically quite similar on my account, as for something to be supernatural would be for it to have no observable behavior, and for something to be a philosophical zombie would be for it to have no phenomenal experience. Both of those are just different perspectives on the thing in question being completely cut off from the web of interactions that is reality, and therefore unreal. — Pfhorrest
I see no reason why consciousness should be exclusive to organic lifeforms. If consciousness is indicated by the impulse towards self organization, what are the implications in considering that the atom indisputably factors as one of the greatest organizations known to man? — Merkwurdichliebe
There is no reasonable way to separate consciousness from life. They are two aspects of the one thing. Consciousness is the quality that gives rise to life, and in turn consciousness is the singular thing that life expresses. The notion that only some forms of life possess consciousness is incoherent and baseless. All living creatures are self learning and programming. All living creatures are involved in a process of self organisation - always! They are all conscious, but they all possess a different degree and a different version of consciousness.
Should we only believe in what is verifiable? If so, we should be skeptical of claims that anything lacks consciousness. — petrichor
Consider split-brain patients. The severing of the corpus callosum seems to split the mind into two distinct parts. Each hemisphere fails to report what is exclusively observed by the other. The ability to integrate information between hemispheres is lost. Unlike the left hemisphere, the right hemisphere can't speak. So if you talk to the patient and get a verbal answer, you generally only hear from the left hemisphere. But there are other ways of asking the right hemisphere questions and getting answers, such as by having it point to objects with the left hand. — petrichor