Yet, Tononi’s original IIT concepts and predictions do appear to be bearing out in various neurological studies. In 2013, Adenauer Casali and colleagues completed a study that showed it was possible to use the IIT framework within an EEG paradigm for measuring consciousness in some patients. — Gina Smith
Phi is based on the number and quality of interconnections a given entity has between bits of information. The resulting number — the Phi score — corresponds directly to how conscious it is.
The more connections, the more conscious an entity is, a factor quantified as PHI.
Consciousness, in this model, doesn’t rely on a network of information. It is the network. As such, it doesn’t discriminate based on whether the subject is organic or electronic.
Put simply, a high PHI measure means more consciousness — more sentience — no matter who or what you are. — Gina Smith
This theory is not a serious scientific proposal. — Daemon
[bolding mine] This is a fascinating sentence:
"Note that these postulates are inferences that go from phenomenology to physics, not the other way around. This is because the existence of one’s consciousness and its other essential properties is certain, whereas the existence and properties of the physical world are conjectures, though very good ones, made from within our own consciousness."
It's Descartes 2.0.
"We can identify it in an abstract sense, but not in a practical sense, as we can with a manmade machine.
We have "brainoids" now, grown from adult human skin cells. But unless they are connected to sense organs, and yes, things like feet, they can't do what real brains do. There isn't anything for them to be conscious of."
I've read somewhere that they accept that a thermostat is conscious. A thermostat but not the whole brain? And the whole body is involved in consciousness!
What's the hypothesis and how would it be tested?
Why is it ok to consider their hypothesis as it is, when it seems to be fatally flawed from the outset? — Daemon
If all your sense organs stopped working, you would still be conscious. — RogueAI
Physicalism is teetering like a house of cards. Consciousness is primary. The physical world has been relegated to a conjecture (though a very good one). Soon, the parenthetical "though a very good one" will be gone. And then the conjecture of the physical world itself. Positing the existence of mind-independent stuff solves nothing and creates enormous problems. — RogueAI
How would you measure how much PHI a computer has? Does the number of transistors matter? Or how they're arranged? Or both? — RogueAI
I'm not sure how you could know that.

But in any case you are starting from a position where I previously had working sense organs. But suppose I had never had them: I don't think I'd ever have been conscious. And consider this from an evolutionary perspective: consciousness would never have developed at all without sensing, sense organs. — Daemon
But I think you're just pushing the Hard Problem to a different level: why does integrating information lead to conscious experience? — RogueAI
While there may well be a practical threshold for Φmax below which people do not report feeling much, this does not mean that consciousness has reached its absolute zero. Indeed, according to IIT, circuits as simple as a single photodiode constituted of a sensor and a memory element can have a minimum of experience (Oizumi, Albantakis et al. 2014).
For example, it may soon be possible to program a digital computer to behave in a manner identical to that of a human being for all extrinsic intents and purposes. However, from the intrinsic perspective the physical substrate carrying out the simulation in the computer—made of transistors switching on and off at a time scale of picoseconds—would not form a large complex of high Φmax, but break down into many mini-complexes of low Φmax each existing at the time scale of picoseconds. This is because in a digital computer there is no way to group physical transistors to constitute macro-elements with the same cause-effect power as neurons, and to connect them together such that they would specify the same intrinsically irreducible conceptual structure as the relevant neurons in our brain. Hence the brain is conscious and the computer is not - it would have zero Φ and be a perfect zombie. [25] This would hold even for a digital computer that were to simulate in every detail the working of every neuron of a human brain, such that what happens to the virtual neurons (the sequence of firing patterns and ultimately the behaviors they produce) is the same as what happens to the real neurons. On the other hand, a neuromorphic computer made of silicon could in principle be built to realize neuron-like macro-elements that would exist intrinsically and specify conceptual structures similar to ours.
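Since the thread asks how one would actually measure how much Φ a system has, here is a minimal sketch of the flavor of the calculation — not Tononi's actual Φmax algorithm, which involves searching over all partitions and cause-effect repertoires. For a toy two-node binary system, we compare the mutual information the whole system carries about its next state with what the cut-apart parts carry; the names (`phi`, `mutual_info`, `swap`) are all illustrative, not from any IIT software.

```python
# Toy integrated-information sketch (NOT the full IIT 3.0 measure).
# phi ~ MI(whole) - sum of MI(parts) across the bipartition {A},{B},
# with a uniform prior over past states.
from itertools import product
from collections import Counter
from math import log2

def mutual_info(pairs):
    """Mutual information between past and future, given a list of
    (past, future) tuples treated as equiprobable samples."""
    n = len(pairs)
    joint = Counter(pairs)
    past = Counter(p for p, _ in pairs)
    future = Counter(f for _, f in pairs)
    return sum((c / n) * log2((c / n) / ((past[x] / n) * (future[y] / n)))
               for (x, y), c in joint.items())

def phi(update):
    """update: (a, b) -> (a', b'), the whole system's transition rule."""
    states = list(product([0, 1], repeat=2))
    whole = [(s, update(*s)) for s in states]
    # Cutting the partition replaces each part's input from the other
    # part with uniform noise, so we enumerate both values of the cut wire.
    part_a = [((a,), (update(a, noise)[0],))
              for a in (0, 1) for noise in (0, 1)]
    part_b = [((b,), (update(noise, b)[1],))
              for b in (0, 1) for noise in (0, 1)]
    return mutual_info(whole) - (mutual_info(part_a) + mutual_info(part_b))

swap = lambda a, b: (b, a)           # each node copies the other
independent = lambda a, b: (a, b)    # each node copies itself

print(phi(swap))         # 2.0: all the information crosses the cut
print(phi(independent))  # 0.0: cutting the system loses nothing
```

The contrast is the point of the quoted passage: both systems store two bits, but only the interconnected one is irreducible to its parts, so only it scores nonzero on this toy measure. Scaling anything like this to transistor-level descriptions of a computer is exactly where the quoted argument about mini-complexes comes in.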
Good question. Did you see what I said earlier about axioms and postulates? — frank
So we have distinct and exhaustive cause-effect repertoires. — frank