Integrated Information Theory

From Wiki: "The calculation of even a modestly-sized system's Φ^Max is often computationally intractable,[6] so efforts have been made to develop heuristic or proxy measures of integrated information. For example, Masafumi Oizumi and colleagues have developed both Φ*[7] and geometric integrated information, Φ^G,[8] which are practical approximations of integrated information. These are related to proxy measures developed earlier by Anil Seth and Adam Barrett.[9] However, none of these proxy measures has a mathematically proven relationship to the actual Φ^Max value, which complicates the interpretation of analyses that use them; they can give qualitatively different results even for very small systems.[10]
A significant computational challenge in calculating integrated information is finding the Minimum Information Partition of a neural system, which requires iterating through all possible network partitions. To address this problem, Daniel Toker and Friedrich T. Sommer have shown that the spectral decomposition of the correlation matrix of a system's dynamics is a quick and robust proxy for the Minimum Information Partition."
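To see the flavor of such a spectral shortcut, here is a minimal sketch. It is not the published Toker–Sommer method, whose details may differ; it substitutes standard spectral bisection (the Fiedler vector of a graph Laplacian built from absolute correlations) as a stand-in for reading a weakest-link bipartition off the correlation matrix's spectrum, avoiding the exhaustive search over partitions. The function name `spectral_min_cut` and the synthetic data are illustrative assumptions.

```python
import numpy as np

def spectral_min_cut(data):
    """Bipartition channels via spectral bisection of the correlation
    matrix -- an illustrative simplification, not the exact published
    proxy for the Minimum Information Partition."""
    A = np.abs(np.corrcoef(data))      # channel-by-channel affinity matrix
    np.fill_diagonal(A, 0.0)           # no self-loops
    L = np.diag(A.sum(axis=1)) - A     # graph Laplacian L = D - A
    _, eigvecs = np.linalg.eigh(L)     # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]            # eigenvector of 2nd-smallest eigenvalue
    part_a = np.where(fiedler >= 0)[0]
    part_b = np.where(fiedler < 0)[0]
    return part_a, part_b

# Demo: six channels forming two strongly coupled groups of three
rng = np.random.default_rng(0)
s1, s2 = rng.normal(size=2000), rng.normal(size=2000)
data = np.vstack([s1 + 0.1 * rng.normal(size=2000) for _ in range(3)]
                 + [s2 + 0.1 * rng.normal(size=2000) for _ in range(3)])
part_a, part_b = spectral_min_cut(data)
# One eigendecomposition recovers the weakly coupled split, whereas a
# brute-force search over bipartitions grows exponentially with system size
```

The sign of the Fiedler vector splits the network across its weakest correlational link in a single eigendecomposition, which is why spectral methods are attractive proxies for the exponential partition search.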