• frank
    16k
    I'm going to describe IIT, based on the scholarpedia page.

    IIT, originated by Giulio Tonini, is an attempt to specify the system requirements for consciousness. It starts with "axioms", which are aspects of consciousness derived from phenomenology. Based on these axioms, it presents "postulates", or specs for a system that has these axiomatic attributes.

    In formulating the axioms, Tonini uses these criteria:

    1. About experience itself;
    2. Evident: they should be immediately given, not requiring derivation or proof;
    3. Essential: they should apply to all my experiences;
    4. Complete: there should be no other essential property characterizing my experiences;
    5. Consistent: it should not be possible to derive a contradiction among them; and
    6. Independent: it should not be possible to derive one axiom from another.

    Next: a closer look at the axioms:
  • jgill
    3.9k
    I looked into this briefly several years ago and recall that, apart from conceptual issues, the math is difficult to employ.
  • frank
    16k
    Difficult to employ, as in you can't get there from here?
  • jgill
    3.9k
    From Wiki: "The calculation of even a modestly-sized system's Φ^Max is often computationally intractable,[6] so efforts have been made to develop heuristic or proxy measures of integrated information. For example, Masafumi Oizumi and colleagues have developed both Φ*[7] and geometric integrated information or Φ^G,[8] which are practical approximations for integrated information. These are related to proxy measures developed earlier by Anil Seth and Adam Barrett.[9] However, none of these proxy measures have a mathematically proven relationship to the actual Φ^Max value, which complicates the interpretation of analyses that use them. They can give qualitatively different results even for very small systems.[10]

    A significant computational challenge in calculating integrated information is finding the Minimum Information Partition of a neural system, which requires iterating through all possible network partitions. To solve this problem, Daniel Toker and Friedrich T. Sommer have shown that the spectral decomposition of the correlation matrix of a system's dynamics is a quick and robust proxy for the Minimum Information Partition."
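
    To give a concrete sense of the "iterating through all possible network partitions" problem mentioned there, here's a minimal sketch (my own illustration, not code from the IIT literature) that just counts the bipartitions a Minimum Information Partition search would have to score. The count grows as 2^(n-1) - 1, and the full Φ^Max calculation is worse still, since it also ranges over candidate subsystems and their states:

```python
# Toy illustration: count the bipartitions a MIP search must consider.
# Assumes labeled elements and bipartitions only; not the actual Phi^Max algorithm.
from itertools import combinations

def bipartitions(nodes):
    """Yield all ways to split `nodes` into two non-empty, unordered parts."""
    n = len(nodes)
    for k in range(1, n // 2 + 1):
        for part in combinations(nodes, k):
            rest = tuple(x for x in nodes if x not in part)
            # skip mirror duplicates when both halves have the same size
            if k == n - k and part > rest:
                continue
            yield part, rest

for n in (4, 8, 12, 16):
    count = sum(1 for _ in bipartitions(tuple(range(n))))
    print(f"{n} elements -> {count} bipartitions to score")  # 2**(n - 1) - 1
```

    Every one of those cuts then has to be scored with an information measure and the minimum taken, which is why the proxy measures in the quote were developed.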
  • frank
    16k

    Yes, I'm aware of that problem. I want to understand the whole approach. Do you think the problem with calculating phi is insurmountable?
  • jgill
    3.9k
    Well, it sounds like people are working on it. I suppose I question attempting to apply math in this context. A lot depends on whether predictions are calculated and match reality. :chin:
  • frank
    16k
    :up:

    Tonini uses the axioms to specify what he wants a target system to support. The first is that consciousness is intrinsic, by which he means:

    Consciousness exists: each experience is actual—indeed, that my experience here and now exists (it is real) is the only fact I can be sure of immediately and absolutely. Moreover, my experience exists from its own intrinsic perspective, independent of external observers (it is intrinsically real or actual). — Tonini, Scholarpedia article posted in OP

    Elsewhere, Tonini says that Galileo took the observer out of science, and that we will now put the observer back in. It's in this light that we should understand this axiom: the emphasis is on the point of view of the observer.

    Typically, neuroscientists observe consciousness. They record what a subject reports, which is an example of a behavioral correlate of consciousness (BCC), and link that up in some way to neuronal correlates (NCC). Tonini wants to go beyond that approach and just start with experiences as intrinsic.

    Is he warranted to do that? Does it matter?
  • Cuthbert
    1.1k
    It has MICE in it so it can't be all bad.
  • frank
    16k
    It has MICE in it so it can't be all bad.Cuthbert

    As long as they didn't chew on the insulation.
  • jgill
    3.9k
    Tonini uses the axioms . . .frank

    It's Tononi. His system has as axiomatic the existence of consciousness. I agree and have the same perception of time. Both simply exist. And unraveling qualia or dissecting time seems wasted effort.
  • frank
    16k
    So you agree with Tonini. Cool.
  • Daemon
    591
    I'm interested to see the axioms.
  • jgill
    3.9k
    So you agree with Toninifrank

    It's Giulio Tononi. :roll:
  • frank
    16k
    I know. The i is right beside the o.
  • frank
    16k
    I'm interested to see the axioms.Daemon

    I'll scoot through the rest of them. That first one kind of sets the frame.
  • frank
    16k
    The rest of the axioms are as follows per Tononi:

    Composition

    Consciousness is structured: each experience is composed of multiple phenomenological distinctions, elementary or higher-order. For example, within one experience I may distinguish a book, a blue color, a blue book, the left side, a blue book on the left, and so on.

    Information

    Consciousness is specific: each experience is the particular way it is—being composed of a specific set of specific phenomenal distinctions—thereby differing from other possible experiences (differentiation). For example, an experience may include phenomenal distinctions specifying a large number of spatial locations, several positive concepts, such as a bedroom (as opposed to no bedroom), a bed (as opposed to no bed), a book (as opposed to no book), a blue color (as opposed to no blue), higher-order “bindings” of first-order distinctions, such as a blue book (as opposed to no blue book), as well as many negative concepts, such as no bird (as opposed to a bird), no bicycle (as opposed to a bicycle), no bush (as opposed to a bush), and so on. Similarly, an experience of pure darkness and silence is the particular way it is—it has the specific quality it has (no bedroom, no bed, no book, no blue, nor any other object, color, sound, thought, and so on). And being that way, it necessarily differs from a large number of alternative experiences I could have had but I am not actually having.

    Integration

    Consciousness is unified: each experience is irreducible to non-interdependent, disjoint subsets of phenomenal distinctions. Thus, I experience a whole visual scene, not the left side of the visual field independent of the right side (and vice versa). For example, the experience of seeing the word “BECAUSE” written in the middle of a blank page is irreducible to an experience of seeing “BE” on the left plus an experience of seeing “CAUSE” on the right. Similarly, seeing a blue book is irreducible to seeing a book without the color blue, plus the color blue without the book.

    Exclusion

    Consciousness is definite, in content and spatio-temporal grain: each experience has the set of phenomenal distinctions it has, neither less (a subset) nor more (a superset), and it flows at the speed it flows, neither faster nor slower. For example, the experience I am having is of seeing a body on a bed in a bedroom, a bookcase with books, one of which is a blue book, but I am not having an experience with less content—say, one lacking the phenomenal distinction blue/not blue, or colored/not colored; or with more content—say, one endowed with the additional phenomenal distinction high/low blood pressure.[2] Moreover, my experience flows at a particular speed—each experience encompassing say a hundred milliseconds or so—but I am not having an experience that encompasses just a few milliseconds or instead minutes or hours.[3]

    So:
    1. Intrinsic, a single perspective
    2. Composition, a discernible structure
    3. Information, each experience is distinct
    4. Integration, experience is unified
    5. Exclusion, experience has a definite grain

    What will follow are the postulates, or the characteristics of a system that is conscious. These postulates match up with the axioms.
  • Daemon
    591
    This is enjoyable, thank you Frank. Off to beddybyes now.
  • jgill
    3.9k
    These postulates match up with the axioms.frank

    Axioms and postulates are generally considered the same things. Does Tononi distinguish between them?
  • frank
    16k
    Axioms and postulates are generally considered the same things. Does Tononi distinguish between them?jgill

    He distinguishes them for the sake of separating the description of consciousness (the axioms) from the system requirements (the postulates). More on that later.
  • frank
    16k
    This is a fascinating sentence:


    "Note that these postulates are inferences that go from phenomenology to physics, not the other way around. This is because the existence of one’s consciousness and its other essential properties is certain, whereas the existence and properties of the physical world are conjectures, though very good ones, made from within our own consciousness."

    It's Descartes 2.0.
  • frank
    16k
    @Isaac
    Can you treat a neuron like a logic gate?
  • Daemon
    591
    I know this one, ask me sir!!
  • frank
    16k
    I know this one, ask me sir!!Daemon

    Cool. Can you?
  • Daemon
    591
    You can treat a neuron as a logic gate, but that's not what it is. Here are some reasons why not, taken from various sections of The Idea of the Brain by Matthew Cobb:

    1. A neuron can secrete several different types of neurotransmitter into the synapse.
    2. Even in a simple circuit each neuron is connected to many other neurons both by chemical synapses and by what are called gap junctions.
    3. Neuronal activity can be altered by neuromodulators, neuropeptides and other compounds that are secreted alongside neurotransmitters and which function as relatively slow-acting mini-hormones, locally altering the activity of neighbouring neurons.
    4. The activity of each neuron is affected not only by its identity (that is by the genes that determine its position and function), but also by the previous activity of the neuron.
    5. Structures in the brain are not modules that are isolated from one another - they are not like the self-contained components of a machine...neurons and networks of neurons are interconnected and able to affect adjoining regions by changing not only the activity of neighbouring structures but also the patterns of gene expression.
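
    For what it's worth, here's the idealization Daemon is pushing back on, in code: a McCulloch-Pitts style threshold unit. This is a toy sketch of my own (the weights and thresholds are illustrative choices, not biological measurements); it shows how the same unit can be wired up as different logic gates, and the comments note why the five points above don't fit into it:

```python
# Toy "neuron as logic gate": a McCulloch-Pitts threshold unit.
# Illustrative weights/thresholds only, not biological data.

def threshold_neuron(inputs, weights, threshold):
    """Fire (return 1) iff the weighted sum of inputs reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# With the right weights and threshold the same unit behaves as different gates:
AND = lambda a, b: threshold_neuron((a, b), (1, 1), 2)
OR = lambda a, b: threshold_neuron((a, b), (1, 1), 1)
NAND = lambda a, b: threshold_neuron((a, b), (-1, -1), -1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b), "NAND:", NAND(a, b))

# What the idealization leaves out (Daemon's points 1-5): multiple transmitter
# types, gap junctions, slow neuromodulation, dependence on the neuron's own
# history, and non-modular interactions with neighbouring structures. None of
# that appears in a stateless weighted sum.
```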
  • frank
    16k
    4. The activity of each neuron is affected not only by its identity (that is by the genes that determine its position and function), but also by the previous activity of the neuron.Daemon

    Oh, this is why the theory emphasizes causality within the system itself.

    Thanks for the explanation.
  • frank
    16k
    The first postulate is supposed to account for the first axiom: that consciousness is intrinsic, or independent of an observer. IOW, you're conscious and have access to a certain POV whether anyone else is around or not.

    IIT says this requires a system that is causally open to its environment in both directions, but that also has causation internal to the system itself.

    [Image: Integrated information theory postulates]

    Or in Tononi's words:

    "To account for the intrinsic existence of experience, a system constituted of elements in a state must exist intrinsically (be actual): specifically, in order to exist, it must have cause-effect power, as there is no point in assuming that something exists if nothing can make a difference to it, or if it cannot make a difference to anything.[7] Moreover, to exist from its own intrinsic perspective, independent of external observers, a system of elements in a state must have cause-effect power upon itself, independent of extrinsic factors. Cause-effect power can be established by considering a cause-effect space with an axis for every possible state of the system in the past (causes) and future (effects). Within this space, it is enough to show that an “intervention” that sets the system in some initial state (cause), keeping the state of the elements outside the system fixed (background conditions), can lead with probability different from chance to its present state; conversely, setting the system to its present state leads with probability above chance to some other state (effect)."
  • Daemon
    591
    What interests me is what constitutes a "system". How is the boundary between the "inside" and the "outside" of the system established?
  • frank
    16k
    What interests me is what constitutes a "system". How is the boundary between the "inside" and the "outside" of the system established?Daemon

    Good question. In a YouTube lecture, Christof Koch emphasized that the hardware they're thinking of is neurons, period. So the boundary is the surface of the brain?
  • Daemon
    591
    But the relevant "system" is the whole body. As well as the neurons there is a blood supply, the biochemical bath the neurons are immersed in, the spine, the nervous system, the sense organs.
  • frank
    16k
    But the relevant "system" is the whole body.Daemon

    But there's no consciousness associated with your liver; in fact, consciousness doesn't even require a cerebellum. And if we cut the brain in half, we get two conscious entities in one skull.

    I don't think they know exactly what the target hardware is. I don't know why it's right to zero in on neurons. What do you think?
  • fishfry
    3.4k
    IIT, originated by Giulio Tonini,frank

    Scott Aaronson debunkificated this a while back. David Chalmers shows up in the comment section.

    https://www.scottaaronson.com/blog/?p=1799