• RogueAI
    2.8k
    My objection is to the idea that fundamental differences in external objects somehow do not exist or change within the object when conscious observation occurs.Count Timothy von Icarus

    Why do you assume reality is such that there exist external objects? I get why, I guess, but I think that assumption has to be argued for.

    How would computation work in an idealistic reality? Would that solve some of the confusion here?
  • Count Timothy von Icarus
    2.8k
    I found someone making the same argument.

    https://www.worldscientific.com/doi/10.1142/9789814295482_0004

    I did not realize that Collier was the advisor of Scott Mueller. I thought his dissertation, "Asymmetry: The Foundation of Information," was excellent.

    Unfortunately, I feel like this is an article where the formalism hinders the argument more than it helps it. Formalization is great when it can demonstrate logical connections that are hard to follow otherwise, and even more so when it allows for practical calculations (Kolmogorov Complexity is a great example), but sometimes it can make it harder to grasp the core issue.
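    On the practical-calculation point: Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a cheap, computable upper bound on it, which is what makes it usable in practice. A minimal sketch (the example strings here are my own, purely illustrative):

```python
import hashlib
import zlib

# Kolmogorov complexity is uncomputable, but a general-purpose
# compressor gives a rough, computable upper bound on it: the more
# regular a string, the shorter its compressed description.
regular = b"ab" * 512  # 1024 highly patterned bytes

# 1024 pseudo-random bytes (SHA-256 digests of a counter).
irregular = b"".join(hashlib.sha256(bytes([i])).digest() for i in range(32))

print(len(zlib.compress(regular)))    # small: the pattern is cheap to describe
print(len(zlib.compress(irregular)))  # roughly the full length: no pattern to exploit
```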

    Philosophy of information, being at times considered a sub-branch of philosophy of mathematics, does seem quite big on formalism. This isn't necessarily a good thing, because people can agree on equations, or know how to use them in some cases, while disagreeing on the conceptual underpinnings or holding contradictory understandings of the formalism.



    Why do you assume reality is such that there exist external objects? I get why, I guess, but I think that assumption has to be argued for

    Very briefly, the agreement of people and instruments on key facts about the seemingly external world suggests to me that such a world does exist. I know there are maneuvers around this, but I am not a fan of Berkeley's "God does it all" explanation. It seems to me that subjective idealism requires a level of skepticism that should also put the existence of other minds and the findings of empiricism in doubt, in which case it becomes only arbitrarily distinct from solipsism.

    How would computation work in an idealistic reality? Would that solve some of the confusion here?

    It would depend on the system. In Kastrup's system, external objects are indeed external to us; they are just composed of mental substance. I don't think anything changes here. As individuals, we are dissociated parts of a mental whole, and the differences that give rise to information, and thus computation, DO exist externally.

    I think it still works the same way in something like Hegel's system, which in some respects foreshadows information theory. From Pinkard's "Hegel's Naturalism:"

    Thus, in Hegel’s rigorous (but rather obviously nonstandard) terminology, the finite is always that which is intelligible in terms of or is explained by its reference to something else, its “negation.” For something, x, to be finite, means that it can only be grasped in terms of its contrast with y.
  • Gnomon
    3.8k
    ↪hypericin
    Is there such a thing as uninterpreted information? Put another way, how does anything constitute information until it's been interpreted? I mean, the genetic information transmitted by DNA is interpreted by ribosomes. But in the non-organic realm, what sense does it make to speak of information at all? Sure, we can ascertain vast amounts of data about the Universe, which then constitutes information, but does the Universe itself constitute 'information' in any meaningful sense?
    Wayfarer
    In my own musings on the development of Information Theory, I take seriously the conclusion of quantum theorists that abstract analog Information is equivalent to Energy. If so, there can be both Potential Information (DNA) and Actual Information (protein). Any "uninterpreted information" would be like the Energy stored in Momentum or Position : it can be actualized in a "collision" that transforms Momentum into Action. That dynamic relationship works for both organic and non-organic aspects of Nature. Potential Energy (ability to do work) is the not-yet-activated Power of Position (relationship), as illustrated by gravity's changing force relative to a gravitational body.

    A recent development in Physics is the notion that Information is a basic property of the Universe. Ironically, the philosophical implication of that idea is that the fundamental element of the world is something similar to an information-processing Mind. Tegmark has proposed that our Reality is an ongoing computation by that hypothetical (mysterious) mind. Unfortunately, his mathematical theory is idealistic and unverifiable by empirical means. So, it remains a philosophical conjecture, reasoning from abstractions like logical/mathematical structure (ratios). You can take it or leave it, as seems reasonable to you. But you can't prove or disprove it. Perhaps treat it as a 21st century myth. But more romantic minds might prefer to imagine the Cosmic Mind as dreaming the apparent world, instead of mechanical Nature computing the physical world. :smile:


    Information as a basic property of the universe :
    https://pubmed.ncbi.nlm.nih.gov/8734520/

    A Universe Built of Information :
    https://link.springer.com/chapter/10.1007/978-3-030-03633-1_13

    Physics Is Pointing Inexorably to Mind :
    https://blogs.scientificamerican.com/observations/physics-is-pointing-inexorably-to-mind/
  • Bret Bernhoft
    222
    Well, this is certainly quite something. I don't think I have ever thought this deeply before about computation. I find compelling your explanation that computation is simply a description of how we perceive abstract objects to be. It does seem that computation is very closely connected to causality, so I think a very strong argument could be made that computation in this sense is identical to causation, in the sense of a physical event having an effect on other physical events.

    Thank you for sharing. I believe that my own understanding of computation has been significantly improved.
  • Wayfarer
    22.6k
    It seems to me that subjective idealism requires a level of skepticism that should also put the existence of other minds and the findings of empiricism in doubt, in which case it becomes only arbitrarily distinct from solipsism....In Kastrup's system, external objects are indeed external to us, they are just composed of mental substanceCount Timothy von Icarus

    As I elaborate extensively in my new book, The Idea of the World, none of this implies solipsism. The mental universe exists in mind but not in your personal mind alone. Instead, it is a transpersonal field of mentation that presents itself to us as physicality—with its concreteness, solidity and definiteness—once our personal mental processes interact with it through observation. This mental universe is what physics is leading us to, not the hand-waving word games of information realism.Bernardo Kastrup

    I take seriously the conclusion of quantum theorists that abstract analog Information is equivalent to Energy.Gnomon

    I notice that the Information as a basic property of the Universe abstract says that 'Pure energy can perform no 'useful' (entropy reducing) work without a concomitant input of information' - but what is the source of that information? (I've found a brief profile of Tom Stonier here - quite an interesting fellow, but I am dubious that what he's saying really can be reduced to physics. There are any number of ID theorists who would exploit Stonier's observation by saying "well, you know who the source of that "information" must be" - not that I would endorse them. See The Argument from Biological Information.)

    As a passage from the Kastrup OP you link to says:

    Information is notoriously a polymorphic phenomenon and a polysemantic concept so, as an explicandum, it can be associated with several explanations, depending on the level of abstraction adopted and the cluster of requirements and desiderata orientating a theory....Information remains an elusive concept.

    And it is what Kastrup disputes as 'hand-waving word games'.
  • Count Timothy von Icarus
    2.8k


    I didn't mean to imply that Kastrup's ontology is at risk for solipsism, just that it is completely compatible with computation as causation, even if he doesn't think so.

    I think there is a hard and soft statement of this compatibility. The hard statement would be that information is the primordial, ontologically basic component of Kastrup's mental substance. The soft view would be that information is merely an epistemologically useful model of the basic elements of said substance, and computation is simply observably identical to causation, even if we think there is some sort of bare substratum of being that exists beneath that level of analysis.

    I did not find Kastrup's dismissal of information-based ontologies particularly strong. What he seems to be arguing against are the models that have come out of physics where fundamental particles are simply replaced by qubits. These are the same sort that Floridi defeats with far more detail in his book. However, something like Floridi's maximally portable ontology, in which "information" is well defined (even if it isn't in the rest of the book) is compatible with what Kastrup is proposing. It's a logical truth that any toy universe needs to have differences to be coherent. You can't even have a 2D plane if none of the points on said plane differ from each other in any respect.
  • Count Timothy von Icarus
    2.8k
    Let me try another explanation of the original point. Suppose we have ourselves a Laplace's Demon. This is a supercomputer that, given precise information about the locations and velocities of every particle in a closed system / universe, can accurately predict the future and retrodict the past.

    Let's ignore quantum difficulties for now.

    Suppose we have 7 people in a room. We have cut them off from the rest of reality using a magical forcefield, thus they exist in a closed system. They are playing poker. We want to bet on the hands, or maybe even the conversation they make, who goes to the bathroom when, etc.

    Well, with our Laplace's Demon, we can cheat, right? Just fire it up and have it evolve the system forward. It will create a fully accurate projection of the future.

    Thus, information is not created or destroyed, at least in one sense, in that a complete description of the state of the room at time T1 tells us exactly what it will look like at T2, T3, etc.

    However, I don't think this is where we should end the analysis. In order to create these predictions or retrodictions, the demon must complete stepwise computations. That is, it needs to occupy a number of distinguishable states to produce its output. Perhaps this number of states is smaller than the number of distinguishable states the room passes through; it is possible the demon can take advantage of compression. Presumably, the Kolmogorov Complexity of a system can change over time. But this doesn't change the fact that the demon needs to move through these states to produce its output.

    If it is storing outputs from its computations in memory, it is creating new information during this process. Even if we model the demon as a Markov chain, it is still passing through these many states. And here is the crux of my argument: a full description of each of the states the demon passes through to evolve the system from time T to time T' would require more information than is used to describe either T or T' alone. If you say, "not true, T3 tells you all about T4 and T5," my response would be, "if that is the case, show me T5 without passing through any more states." If T is truly equivalent to T', it shouldn't be discernible from it. If it is discernible, then difference exists (Leibniz's Law), and so too does new information.

    That is, we cannot ignore the process of evolution, as is often done. Computation creates discernible differences across a time dimension, such that if we had a second Laplace's Demon producing outputs about every state the first demon passes through, its output would be many times larger than the first's, which simply describes T' based on T.
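    The asymmetry between describing the endpoints and describing every state the demon passes through can be sketched concretely. The toy "universe" below is my own invention (a rotate-and-XOR rule on a 64-bit state, standing in for the room): it is fully deterministic, yet a compressed description of its whole trajectory is far larger than one of its endpoints alone.

```python
import zlib

MASK = 0xB5AD4ECEDA1CE2A9  # arbitrary fixed mask, illustrative only
M64 = (1 << 64) - 1

def step(state: int) -> int:
    """One deterministic tick: rotate left one bit, then XOR a mask."""
    rotated = ((state << 1) | (state >> 63)) & M64
    return rotated ^ MASK

def evolve(state: int, n: int) -> list[int]:
    """Return the full trajectory [T0, T1, ..., Tn]."""
    states = [state]
    for _ in range(n):
        states.append(step(states[-1]))
    return states

trajectory = evolve(0x123456789ABCDEF0, 1000)

endpoints = b"".join(s.to_bytes(8, "big") for s in (trajectory[0], trajectory[-1]))
full = b"".join(s.to_bytes(8, "big") for s in trajectory)

# Describing every intermediate state costs far more than describing
# the endpoints, even after compression.
print(len(zlib.compress(endpoints)), len(zlib.compress(full)))
```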

    Two points here:

    1. I think this example explains why computation has to be thought of as existing abstractly outside of merely describing equivalencies between inputs and outputs for a given algorithm. Perhaps the equals sign might be better thought of as a "transformation" sign in some respects.

    2. If our demons perfectly describe all aspects of causation in our room, to the most fundamental observable level, and if this is accomplished via computation, then I don't see a huge leap in saying there is a sense in which the system also "computes itself," leaving aside arguments about intentionality.
  • RogueAI
    2.8k
    Suppose we have 7 people in a room. We have cut them off from the rest of reality using a magical forcefield, thus they exist in a closed system. They are playing poker. We want to bet on the hands, or maybe even the conversation they make, who goes to the bathroom when, etc.

    Well, with our Laplace's Demon, we can cheat, right? Just fire it up and have it evolve the system forward. It will create a fully accurate projection of the future.
    Count Timothy von Icarus

    Unless there are immaterial factors that go into decision making, like conscious states. I'm not convinced that if we evolve the system forward, we'll get the same result every time. You're assuming a form of strict materialism where knowledge of all the particles and forces = 100% knowledge of the people in the room.
  • Count Timothy von Icarus
    2.8k


    Yes, that is a conceit of Laplace's thought experiment. I don't mean to assert that this is a realistic experiment (the magic force field and all). I don't think this is material to the point though. I merely wanted to show how computation is indiscernible from what is often meant by "causation" when considering the classical systems that we normally encounter. I don't think we need to make any claims about ontology here; we can just consider empirically observed facts about the external world (which could be a mental substrate).

    If quantum mechanics truly is stochastic in nature, as it appears to be, then the Demon can't produce just one output for T' given T. It will need to produce many, many outputs and assign probabilities to each.

    If mind is non-physical, then presumably the demon can pinpoint the interaction between non-physical mind and the physical system. Maybe not though, perhaps Von Neumann's "Consciousness Causes Collapse," is the case. If that is so, I am not sure the Demon can do its job, I would need to think more about that.

    It would seem though that consciousness cannot cause arbitrary changes in systems, since the results of collapse can be predicted probabilistically. This being the case, the effects on the physical world would still be computable. We would just need to bracket "causation = computation" to the "physical" world we can observe intersubjectively.


    In a fundamentally stochastic system, computation still seems to be mirroring causation, just as a quantum computer rather than a classical one. Note though that this change kills the usefulness of our Demon. Even if our Demon can predict all possible outcomes and assign probabilities to them, the number of nearly equally likely states would multiply so rapidly that it would soon become a useless source of prediction.
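    How quickly nearly equally likely states multiply can be seen even in the simplest stochastic toy model, a fair random walk (my own illustration, nothing specific to quantum mechanics): the demon's output is now a whole distribution, whose support keeps widening while the probability of the single best prediction shrinks.

```python
from collections import defaultdict

def evolve_distribution(dist: dict[int, float], steps: int) -> dict[int, float]:
    """Evolve a distribution over integer positions; each step the
    walker moves -1 or +1 with equal probability."""
    for _ in range(steps):
        nxt = defaultdict(float)
        for pos, p in dist.items():
            nxt[pos - 1] += p / 2
            nxt[pos + 1] += p / 2
        dist = dict(nxt)
    return dist

for n in (1, 5, 20):
    d = evolve_distribution({0: 1.0}, n)
    # number of possible outcomes, and the best single prediction's probability
    print(n, len(d), round(max(d.values()), 3))
```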
  • GrahamJ
    36
    Even if we model the demon as a Markov chain, it is still passing through these many states. And here is the crux of my argument: a full description of each of the states the demon passes through to evolve the system from time T to time T' would require more information than is used to describe either T or T' alone. If you say, "not true, T3 tells you all about T4 and T5," my response would be, "if that is the case, show me T5 without passing through any more states." If T is truly equivalent to T', it shouldn't be discernible from it. If it is discernible, then difference exists (Leibniz's Law), and so too does new information.Count Timothy von Icarus

    Mathematician here. I think you're getting into trouble (in an interesting way). If the model is a discrete-time Markov chain determined by a matrix P of transition probabilities, with states v0, v1, ... at times T0, T1, ..., then you can calculate v1, v2, ..., vn step by step, using v1 = P v0, v2 = P v1, etc. But you can also square P repeatedly, to get a high power of P, and go straight from v0 to vn. There is a lot of pre-computation, but once it's done you can fast-forward to states far in the future.
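    A minimal sketch of that fast-forward, assuming numpy is available (the two-state chain here is an arbitrary example of mine): ten squarings give P^1024, so a single matrix-vector product replaces 1024 stepwise ones, and the two routes agree.

```python
import numpy as np

# Column-stochastic transition matrix: v_{t+1} = P @ v_t.
P = np.array([[0.9, 0.2],
              [0.1, 0.8]])
v0 = np.array([1.0, 0.0])

# Step-by-step: 1024 matrix-vector products.
v = v0
for _ in range(1024):
    v = P @ v

# Fast-forward: square P ten times to get P**1024, then one product.
Q = P
for _ in range(10):
    Q = Q @ Q
v_fast = Q @ v0

print(np.allclose(v, v_fast))  # both routes land on the same state
```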

    That is, we cannot ignore the process of evolution, as is often done. Computation creates discernible differences across a time dimension, such that if we had a second Laplace's Demon producing outputs about every state the first demon passes through, its output would be many times larger than the first's, which simply describes T' based on T.Count Timothy von Icarus

    Well, you can't ignore the process of evolution completely, but you can skip large chunks of time. Not sure where this leaves your point 2.

    (Some time ago I was thinking about Tononi's integrated information theory, and wondering if fast-forwarding would destroy consciousness. I don't want to get into the hard problem here.)
  • Count Timothy von Icarus
    2.8k


    Thanks. Perhaps I'm not fully understanding your point, but does this actually reduce the number of computations required or just the length of the algorithm needed to describe the transition from T1 to Tn?

    7^4 is simpler than writing 7 × 7 × 7 × 7, which is simpler than 7 + 7 + 7.... 343 times, but computing this in binary by flipping bits is going to require the same minimal number of steps. Certainly, some arithmetic seems cognitively automatic, but most arithmetic of any significance requires us to grab a pen and start breaking out the notation into manageable chunks, making the overall amount of computation required (at least somewhat) invariant to how the procedure is formalized.

    Information is substrate independent, so a process occurring faster in a model formed of a different substrate is also to be expected. I also think it is quite possible that computation which models parts of our world can be compressed, which would allow for a potential "fast-forwarding". Indeed, I think the belief that one can make accurate predictions about the future from a model sort of presupposes this fact.
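    On whether the step count is invariant to the formalization: for the arithmetic example at least, it is not. Exponentiation by squaring (a standard trick, sketched here counting multiplications rather than bit flips, which is admittedly a simplification) reaches 7^1000 in a handful of operations where the naive route takes a thousand.

```python
def pow_naive(base: int, exp: int) -> tuple[int, int]:
    """Repeated multiplication; returns (result, multiplications used)."""
    result, mults = 1, 0
    for _ in range(exp):
        result *= base
        mults += 1
    return result, mults

def pow_square(base: int, exp: int) -> tuple[int, int]:
    """Exponentiation by squaring: roughly 2*log2(exp) multiplications."""
    result, mults = 1, 0
    while exp:
        if exp & 1:          # this bit of the exponent is set
            result *= base
            mults += 1
        base *= base         # square for the next binary digit
        mults += 1
        exp >>= 1
    return result, mults

r1, m1 = pow_naive(7, 1000)
r2, m2 = pow_square(7, 1000)
print(r1 == r2, m1, m2)  # same result, very different step counts
```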

    Just some other thoughts:

    ----

    If anything, recognizing 7^2 is 49, 8^2 is 64, etc. as names/identities, i.e. making that fact part of long-term memory, probably requires MORE computation/energy than doing difficult mental arithmetic. Otherwise, it seems we should have evolved to store all answers to problems we have worked out in long-term memory, rather than relying on working memory, but maybe not, that's a complex issue. The idea that 7 squared is just a name for 49 might then be a bit of a cognitive illusion, long-term storage being resource intensive in the big picture, but retrieval of facts from it being cheap.


    If we suppose that entities in the world have all the properties we can ever observe them to have at all times, even when those properties are immaterial to their current interactions (and thus unobserved and unobservable without changing the context), then it is understandable that a computation that accurately represents their evolution can be reduced in complexity.

    However, I can also imagine a position that says that properties only exist contextually. This view runs into problems if fast forwarding is possible, but I think you might be able to resolve these by looking at the different relationships that exist between you, the observer, the system you want to model, and your model. That is, different relationships MUST exist if you can tell your model/demon apart from your system in the first place, so this doesn't actually hurt the relational/contextual view, i.e, "a physical system is what it does."
  • GrahamJ
    36
    Thanks. Perhaps I'm not fully understanding your point, but does this actually reduce the number of computations required or just the length of the algorithm needed to describe the transition from T1 to Tn?Count Timothy von Icarus

    It might reduce or increase the number of computations required - that would depend on many details. Perhaps it doesn't matter to you that the computation doesn't go through time in small steps.

    One other thought: you might find the idea of functional information interesting. E.g. https://www.nature.com/articles/423689a . Perhaps it is possible to come up with a notion of 'functional information processing' which would distinguish between arbitrary information processing (which you might call causation) and 'meaningful' information processing (which you might call computation).
  • EnPassant
    667
    George Boole is your man. Computation is an analogue of logic.