Comments

  • This Forum & Physicalism


    This is not true at all. It is generally proposed in metaphysics, and supported by evidence, that a whole is greater than the sum of its parts. There is a logical fallacy, the composition fallacy, which results from what you propose.

    I was speaking to identity in that post. The phrase "parts and wholes" is misleading there. When we say an object has the trait of being a triangle, or that it instantiates the universal of a triangle, we aren't referring to any one of its angles, right?

    The concept of emergence and the composition fallacy doesn't apply to bundle theories of identity. A "trait" is not a stand-in for a part of an object. For example, traits aren't parts in the sense that a liver is a part of a human body or a retina is part of an eye.

    A trait - that is, a trope (nominalism) or the instantiation of a universal (realism) - applies to the emergent whole of an object. They have to do so to serve their purpose in propositions. For example, the emergent triangularity of a triangle is a trait. The slopes of the lines that compose it are not traits, they are parts (they interact with traits only insomuch as they affect the traits of the whole). The way I wrote that was misleading, but the context is the identity of indiscernibles.

    Traits are what allow propositions like "the bus is red," or "the ball is round" to have truth values. The sum total of an object's traits is not the sum total of its parts. It is the sum of all the predicates that can be attached to it. So an object that is "complex" but which is composed of "simple" parts still has the trait of being complex.

    So to rephrase it better, the question is "is a thing defined by the sum of all the true propositions that can be made about it, or does it have an essential thisness of being unique to it?"


    And this is nothing but nonsense. What could a "substratum of 'thisness'" possibly refer to?

    Yes, that is the common rebuttal I mentioned. It sounds like absurd gobbledygook. Now, its supporters claim that all ontologies assert ontologically basic brute facts, and so this assertion is no different, but it seems mighty ad hoc to me. That this theory still has legs is more evidence of the problems competitors face than its explicit merits.

    You attempt to make the description, or the model, into the thing itself. But then all the various problems with the description, or model, where the model has inadequacies, are seen as issues within the thing itself, rather than issue with the description.

    This sort of "maps versus territory" question begging accusation is incredibly common on this forum. It's ironic because in the context it is normally delivered, re: mental models of real noumena versus the real noumena in itself, it is itself begging the question by assuming realism.

    As a realist, I still take the objection seriously, but I'm not totally sure how it applies here.


    This is not really true. Time is a constraint in thermodynamics, but thermodynamics is clearly not the ground for time, because time is an unknown feature. We cannot even adequately determine whether time is variable or constant. I think it's important to understand that the principles of thermodynamics are applicable to systems, and systems are human constructs. Attempts to apply thermodynamic principles to assumed natural systems are fraught with problems involving the definition of "system", along with attributes like "open", "closed", etc..

    The "thermodynamic arrow of time" refers to entropy vis-à-vis the universe as a whole. Wouldn't this be a non-arbitrary system?

    I agree with the point on systems otherwise. I don't think I understand what "time is an unknown feature" means here. Is this like the "unknown features" of machine learning?

    This is a mistaken notion which I commonly see on this forum. Definition really does not require difference. Definition is a form of description, and description is based in similarity, difference is not a requirement, but a detriment because it puts uncertainty into the comparison. So claiming that definition requires difference, only enforces my argument that this is proceeding in the wrong direction, putting emphasis on the uncertainty of difference rather than the certainty of sameness. A definition which is based solely in opposition (difference), like negative is opposed to positive for example, would be completely inapplicable without qualification. But then the qualification is what is really defining the thing that the definition is being applied to.

    The difference/similarity distinction is two sides of the same coin. I start with difference only because Hegel did and that's where my thinking was going.

    If you start with the idea of absolute, undifferentiated being, then difference is the key to definition. If you start with the idea of pure indefinite being, a chaotic pleroma of difference, then yes, similarity is the key principle.

    Hegel used both. In the illustration from sense certainty, we face a chaotic avalanche of difference in sensations. The present is marching ever forward so that any sensation connected to the "now" of the present is gone before it can be analyzed. This pure unanalyzable difference is meaningless. The similarities between the specific moments of consciousness help give birth to the analyzable world, a world of schemas, categories, and traits. However, these universals (in the generic, not realist sense) in turn shape our perception (something you see borne out in neuroscience). So the process of meaning is a circular process between specifics and universals, difference and similarity.

    You cannot make definitions if all you have access to is absolute difference or absolute similarity. Similarity alone cannot make up a definition. As Saussure said, "a one-word language is impossible." If one term applies to everything equally, with no distinction, it carries no meaning. In the framework of Shannon entropy, this would be a channel of nothing but endless ones or endless zeros. There is zero surprise in the message.
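    To make the Shannon point concrete, here is a minimal sketch (the function name and toy messages are my own illustration, not anything from the thread) computing the entropy of a message's empirical symbol distribution:

    ```python
    import math
    from collections import Counter

    def entropy_bits(message: str) -> float:
        """Shannon entropy (bits per symbol) of a message's empirical distribution."""
        counts = Counter(message)
        n = len(message)
        # H = sum over symbols of p * log2(1/p)
        return sum((c / n) * math.log2(n / c) for c in counts.values())

    # One repeated symbol: zero surprise, zero information.
    print(entropy_bits("1111111111"))  # 0.0
    # A balanced binary message: one bit of surprise per symbol.
    print(entropy_bits("0110100101"))  # 1.0
    ```

    The "one-word language" is exactly the first case: a single symbol with probability 1 carries zero bits, so nothing can be defined with it.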

    For instance, you can define green things for a child by pointing to green things because they also see all sorts of things that aren't green. If, in some sort of insane experiment, you implant a green filter in their eyes so that all things appear to them only in shades of green, they aren't going to have a good understanding of what green is. Green for them has become synonymous with light, it has lost definition due to lack of differentiation.

    The interesting thing is that this doesn't just show up in thought experiments. Denying developing mammals access to certain types of stimuli (diagonal lines for instance) will profoundly retard their ability to discriminate between basic stimuli when they are removed from the controlled environment in adulthood.



    Can you explain how these folks get around the issues mentioned above though? The ones I am familiar with in this list have extremely varied views on the subject.

    Nietzsche's anti-platonist passages are spread out, and I'm not sure they represent a completed system, but he would appear to fall under the more austere versions of nominalism I talked about. Like I said, these avoid the problem of the identity of indiscernibles, but at the cost of potentially jettisoning truth values for propositions.

    Rorty is a prime example of the linguistic theories I mentioned. The complaint here is again about propositions. Analytical philosophers don't want to make propositions just about the truth values of verbal statements. Plus, many non-analytical philosophers still buy into realism, at least at the level of mathematics (the Quine–Putnam indispensability argument re: abstract mathematical entities having ontic status).

    On a side note, I honestly find it puzzling that eliminativists tend to like Rorty. Sure, it helps their claims about how lost we are in assuming consciousness has the depth we "think" we experience, but it also makes their work just statements about linguistic statements.

    I am familiar with Deleuze and to a lesser extent Heidegger on this subject. I have never seen how moving from ontological identity to ontological difference independent of a concept of identity fixes the problem of the identity of indiscernibles. It seems to me that it "solves" the problem by denying it exists.

    However, it does so in a way that makes me suspicious of begging the question. Sure, difference being ontologically more primitive than identity gets you out of the jam mentioned above by allowing you to point to the numerical difference of identical objects as ontologically basic, but it's always been unclear to me how this doesn't make propositions about the traits of an object into mere brute facts. So in this sense, it's similar to the austere nominalism I was talking about before.

    Now I think Putnam might have something very different to say here vis-à-vis the multiple realizability of mental states, and what this says about the traits of objects as experienced, versus their ontological properties, but this doesn't really answer the proposition problem one way or the other. It does seem, though, like it might deliver a similar blow to propositions as the linguistic models (e.g., Rorty). Propositions' truth values are now about people's experiences, which I suppose is still a step up from being just about people's words or fictions, and indeed should be fine for an idealist.
  • Esse Est Percipi
    I've never seen Bishop Berkeley's argument refuted satisfactorily. Generally, the take I've read in philosophy surveys and papers on epistemology and ontology is that, like Hume's attack on induction, it cannot be fully refuted. However, whereas plenty of ways around Hume's challenge have given us relative amounts of pragmatic hope in inductive inference, Berkeley's argument is seemingly impossible to address.

    Why is this the case?

    Because idealism, physicalism, dualism, etc. do not flow purely from deductive logic. Arguments for and against each always rely on empiricism, on the data of experience. Experience always occurs in our subjective, first-person world of mental objects, and so it cannot refute Berkeley. Attempted refutations of the bishop always seem to reduce to so much argumentum ad lapidem.

    After we came out of the church, we stood talking for some time together of Bishop Berkeley’s ingenious sophistry to prove the non-existence of matter, and that every thing in the universe is merely ideal. I observed, that though we are satisfied his doctrine is not true, it is impossible to refute it. I never shall forget the alacrity with which Johnson answered, striking his foot with mighty force against a large stone, till he rebounded from it, ‘I refute it THUS.’

    Of course, the experience of the reality of objects that makes us so sure they exist is necessarily phenomenal. You feel the certainty of the rock you kick as experience. Arguments against Berkeley from science will always have the relative weakness of being arguments from this sort of experience.

    The reason they still appear to work for some people is that people often mistake the complexity and mathematical rigor of an abstraction for an indicator of its validity in a premise. So, the famous rock stomp is laughed off, but appeals to quarks and leptons seem sound. In the end though, these entities are multilevel abstractions made to explain the results of the subjective experiences of scientists reading instruments. This makes them essentially the same phenomenological sort of experience as kicking a stone.

    To be sure, quarks were first proposed as purely abstract entities, entities that did not really have being. They follow from the logic of the mathematics of symmetry. However, the symmetries in question were derived from observation.



    Hardly. To be is to be and to be perceived is to be and be perceived while in the state of perception allowing one to realize one is being perceived, even as perception - being uncaused - shifts from one being perceived to another awaiting perception in order to be.

    It does not proceed from logic that being exists without perception. The entire reason idealist ontologies have been around for so long is because empirical evidence can't ever tell you that being exists without perception. This is true by definition, since empiricism requires observation.

    Further, it is arguable that we can't truly conceive of being without perception in the same way we can't conceive of a square circle. We can say the words, but do they have meaning?

    How does the absence of thought enter thought as a concrete mental entity for us?

    Arguably all thoughts about being sans perception are simply thoughts that take on a third-person viewpoint. But isn't this viewpoint just a common type of abstraction, something that is itself part of the first-person experience of mental life? Thus, these are thoughts about a mental abstraction experienced in first-person subjective experience, not thoughts about pure noumena.

    This issue is an even larger problem for physicalists because one cannot posit the existence of a non-physical point of view, something like a God's eye point of view that observes all "as it is." That would be inventing something non-physical that exists. So, they have to start talking about being for non-living physical systems, e.g., "what being itself is for a nebula." Of course, physicalists often do posit a God's eye view and are just unaware of this supposition.

    So, it appears that being without perception might fail Hume's argument from conceivability.

    But even if you say, "sure I can think about pure being, unfiltered by perception, the noumena is in my mind right now," your argument still has the problem of begging the question. It assumes the very thing it sets out to prove. Even if being can occur without perception, it does not follow that it must do so (necessity versus contingency). What you need is evidence that being does exist outside perception, but such evidence is seemingly impossible to produce.

    The issue of evidence is sometimes handwaved by saying our sense of logic is just the result of evolution and this sort of difficulty is merely apparent for us, due to cognitive deficiency. This argument also fails.

    Appeals to evolution are appeals to science. However, if logic doesn't hold, then we have no reason to trust the logical/mathematical reasoning of science, nor our rules of inductive inference in the first place, in which case why would we deny first person experience its primacy in being when it is the only thing we can be sure of?
  • Ukraine Crisis

    keep Germans down

    Lol, maybe in the 1950s. The US has been trying to get Germany to build a larger military for years and was all smiles when they announced their huge defense surge in response to Russia. They would have the third-highest military budget in the world by a wide margin if Japan wasn't also surging defense spending. None of this has to do with the US, which has tried unsuccessfully for decades to get them to spend more, and everything to do with Russia and China acting aggressively enough to scare them.


    ----

    On an unrelated note, the new narrative is hilarious. All the stalling out and counter attacks are actually part of a grand strategy.


    So now I guess Russia only attacked Kiev in suicidal air assaults while the AA network was up as a distraction. They sent over a hundred special forces members to get gunned down on video in Kharkiv to make the distraction believable. They are trying to avoid hitting humanitarian infrastructure, and the obvious way to do this is to shell residential neighborhoods for days on end.

    It's cool though, they have an endless number of replacements for their losses: that huge paper inventory of hardware. They do it just like the USA, with tons of tanks neatly parked in rows and inventoried, stored in the desert where it never rains, with inspections and weatherization cycling through.



    Certainly they wouldn't throw them in a heap in an area known for snow and heavy rain, right?

  • Rasmussen’s Paradox that Nothing Exists


    Didn't mean non-locality alone for contextuality, although some people do claim that non-locality demonstrates contextuality.

    I was thinking of more recent Bell-Wigner experiments, instead of just tests of Bell inequalities. The difference is that assuming non-locality might not be enough to maintain an objective world, one where all observers can reconcile their recorded facts.

    Modulo the potential loopholes and accepting the photons’ status as observers, the violation of inequality (2) implies that at least one of the three assumptions of free choice, locality, and observer-independent facts must fail. The related no-go theorem by Frauchiger and Renner (5) rests on different assumptions, which do not explicitly include locality. While the precise interpretation of (5) within nonlocal theories is under debate (21), it seems that abandoning free choice and locality might not resolve the contradiction (5). A compelling way to accommodate our result is then to proclaim that facts of the world can only be established by a privileged observer—e.g., one that would have access to the “global wavefunction” in the many worlds interpretation (22) or Bohmian mechanics (23). Another option is to give up observer independence completely by considering facts only relative to observers (24), or by adopting an interpretation such as QBism, where quantum mechanics is just a tool that captures an agent’s subjective prediction of future measurement outcomes (25). This choice, however, requires us to embrace the possibility that different observers irreconcilably disagree about what happened in an experiment. A further interesting question is whether the conclusions drawn from Bell or Bell-Wigner tests change under relativistic conditions with non-inertial observers (26).

    https://www.science.org/doi/10.1126/sciadv.aaw9832
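    For reference, the plain Bell/CHSH case that the Bell-Wigner experiments go beyond can be sketched numerically. This is a minimal illustration only, assuming the textbook singlet-state correlation E(a,b) = -cos(a-b) and the standard optimal measurement angles; it is not a model of the Bell-Wigner setup in the quoted paper:

    ```python
    import math

    def E(a, b):
        # Quantum-mechanical correlation for spin measurements on a singlet pair
        return -math.cos(a - b)

    a1, a2 = 0.0, math.pi / 2              # Alice's two measurement settings
    b1, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two measurement settings

    # The CHSH combination of the four correlations
    S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
    print(abs(S))  # ~2.828, i.e. 2*sqrt(2), exceeding the classical bound of 2
    ```

    Any local hidden-variable account caps |S| at 2, so the 2√2 value is what ordinary Bell tests certify; the Bell-Wigner result quoted above layers the observer-independent-facts assumption on top of that.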
  • Rasmussen’s Paradox that Nothing Exists

    I'm not sure what the disagreement is.

    Again, question begging. There are no claims in the absence of thought

    I am not sure how propositions not existing without thought somehow implies question begging as regards the assertion that: "Whether being does/doesn't exist outside of thought won't make any discernible difference for any observer/thinker"

    I thought I understood, but clearly didn't.



    Very interesting. Any recommendations on Peirce in terms of a starting point for a deeper read? I am familiar with his larger ideas but haven't done a deep study.

    I'll have to read the rest again more closely, but I can sort of see how it would work. The lens of symmetry is something I should look into more. I've generally read more on classical systems, and systems is the operative word there. It'd be nice to find another way of thinking of things, because systems are simply arbitrary.

    The issue with the entire universe being its own frame comes up with systems too. I forgot the authors, but their term for the idea that all systems bleed into each other more than we admit was the "blobiverse," which I got a chuckle from.
  • Rasmussen’s Paradox that Nothing Exists


    Yeah, the means by which quantum states are decohered isn't a known quantity. Delayed-choice quantum eraser experiments don't jibe with any simplistic explanation: if you want to say that a measurement causes collapse as it occurs, then the eraser is causing the wave/particle that goes through one or both slits to jump backwards in time and change its past behavior, based on whether the information about which slit it went through is erased. This is obviously not a popular interpretation.

    This is a decent point for Copenhagen, since it states that anything before final observation (the observation a person can actually look at, not an observation that then gets deleted) is basically meaningless. But if you're going to assume objective collapse on measurements explicitly at any point such measurement occurs, you have apparent time travel. Point being, it's not just apparatus + photon = decoherence, but consideration of the whole set up for final observation (to what point, Copenhagen leaves blank).

    Copenhagen is still the overwhelming plurality favorite of practicing physicists, perhaps because it handwaves metaphysical concerns. "Shut up and calculate," and all.

    Idealism ≠ antirealism; this is a false equivalence. Plenty of idealist ontologies have existent objects, they are just mental objects.

    The physical interactions of physical systems (e.g. apparatus & photon) decoheres quantum states.

    Quite literally something that is impossible to totally pin down, because you will never make a purely physical observation; you will only make ones occurring in subjective experience, which is why "consciousness causes collapse" can live on. Almost all interpretations of quantum mechanics (objective collapse aside) are empirically indistinguishable, as they all predict the same outcomes to quantum mechanical experiments, so there is no scientific claim to be made either way.
  • Rasmussen’s Paradox that Nothing Exists


    So does this lead one logically to identifying fundamental being with vagueness?

    That's my take. That's why I think Boehme, while extremely mystical and esoteric, hits on an essential feature of reality. Definition requires difference.

    I'm not sure what the fundamental dichotomies would be. Pure supposition leads to a constellation of different essential opposites in the history of religious, philosophical, and esoteric thought. Order-chaos seems like it may be an essential one. Both ends of the universe end up looking the same at maximum/minimum entropy. Perhaps mass is another. You have the massless photon not experiencing time, and the infinitely high mass black hole not experiencing time either.


    Hard to imagine. But the logic of this seems clear enough.

    Yeah. Quite the project.

    This is certainly true up to a point. But a quantum gravity theory of everything would have to be background independent, and so a model of an immanent point of view rather than a transcendent one.

    Haven't heard this. Why is this so? How does it deal with the apparent experimental confirmation of contextuality (i.e., that the same observed event can occur at different times for different observers)? Perhaps the "science discovers there is no objective reality" headlines were a bit premature, but I think it's been repeated since 2020.

    But I suppose depending on the meaning of "background independent," it could work.
  • Linear Regression's Undue Influence in the Sciences


    They do have SSRN. But SSRN would be way more valuable if it hosted datasets. It's amazing how much data the US collects, how much it spends to collect it, and how horrible it is to access. States will have incredible amounts of data on students, teachers, local government expenses, demographics, crime, etc., and then it will also be cut up into one-year csv files without all the IVs they recorded included, and with headers on the fields varying year to year because someone is just copy-pasting them.

    Just scrubbing that stuff and putting it into a free clearinghouse SQL database would be huge. It's all public record, it just isn't easy to get. Then you could build live papers off of them. An R-type open-source data visualization/report builder would be the ideal, but for now at least, Microsoft makes Power BI free if you're sharing with everyone.

    Validity would get tackled a lot better if you could filter through 10 years of data, flip between states with varying measures of the same thing (e.g., "poverty"), and add and subtract control variables with a mouse click, all in one report.
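    As a sketch of what that kind of control-variable toggling would surface, here is a minimal illustration with made-up data (the variable names, coefficients, and confound structure are all hypothetical): omitting a confounder like poverty visibly distorts the spending coefficient, and adding it back recovers something near the true effect.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500

    # Hypothetical district-level data: the outcome depends on spending and
    # poverty, and spending itself correlates with poverty (a classic confound).
    poverty = rng.normal(size=n)
    spending = 0.8 * poverty + rng.normal(size=n)
    outcome = 1.0 * spending - 2.0 * poverty + rng.normal(size=n)

    def ols(y, *predictors):
        """Least-squares slopes; an intercept column is prepended then dropped."""
        X = np.column_stack([np.ones(len(y)), *predictors])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta[1:]

    # Omitting the poverty control badly biases spending's coefficient:
    print(ols(outcome, spending))
    # Controlling for poverty recovers estimates near the true (1.0, -2.0):
    print(ols(outcome, spending, poverty))
    ```

    The one-click report idea amounts to rerunning exactly this comparison across specifications and years instead of hand-assembling csv files for each one.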
  • Rasmussen’s Paradox that Nothing Exists


    So, being is contingent on being?

    Right, that seems obvious. The less obvious thing is that being is contingent on at least some level of difference. Because if being is just being, pure, undifferentiated oneness, undefinable relative to anything except for its not being non-being (which has no trait), then it's not clear it is anything different from nothing. It'd be a traitless being defined by its not being a traitless nothing. So, you need things, plural, for being.

    I buy this argument, although it's tangential to the argument I had been making. Some people do propose a completely undifferentiated being wholly defined by not being nothing. I think this is meaningless.

    Again, question begging. There are no claims in the absence of thought.

    Right, there are no claims in the absence of thought. However, I was responding to a post that included the proposition/claim that there is being in absence of thought. That is a claim.

    That isn't question begging unless you are claiming that the proffered proposition "I claim there is being in the absence of thought," is identical with the reality of being without thought. That seems silly on the face of it though, no? Propositions about things are not the things they are propositions about. "Theseus is standing," is not a standing Theseus.

    The claim to make that line work would be "all true propositions' targets exist," paired with "the proposition that being without thought exists and is true because if it were true evidence for or against it would not exist." That to me seems indeterminate.

    Right, conceivability necessitates thought

    Yup, and thought necessitates conceivability.

    Conceivability marks a limit of human thought. It does not mark a limit of what is.

    Maybe some terminology would help here.

    Prima Facie Vs. Ideal Conceivability

    1. S is prima facie conceivable for a subject when S is conceivable for that subject on first appearances. That is, after some consideration the subject finds that S passes the tests that are criterial for conceivability. For example, one substantive notion of conceivability (a version of negative conceivability) holds that S is conceivable if no contradiction is detectable in the hypothesis expressed by S. Under this notion, S will be prima facie conceivable for a subject when that subject cannot (after consideration) detect any contradiction in the hypothesis expressed by S.

    2. The notion of ideal rational reflection remains to be clarified. One could try to define ideal conceivability in terms of the capacities of an ideal reasoner — a reasoner free of all contingent cognitive limitations. Using this notion, we could say that S is ideally conceivable if an ideal reasoner would find it to pass the relevant tests (if an ideal reasoner could not rule out the hypothesis expressed by S a priori, for example). A strategy like this is taken by Menzies (1998). One trouble is that it is not obvious that an ideal reasoner is possible or coherent. For example, it may be that for every possible reasoner, there is a more sophisticated possible reasoner.

    3. Alternatively, one can dispense with the notion of an ideal reasoner, and simply invoke the notion of undefeatability by better reasoning. Given this notion, we can say that S is ideally conceivable when there is a possible subject for whom S is prima facie conceivable, with justification that is undefeatable by better reasoning. The idea is that when prima facie conceivability falls short of ideal conceivability, then the claim that the relevant tests are passed will either be unjustified, or the justification will be defeatable by further reasoning. For ideal conceivability, one needs justification that cannot be rationally defeated.

    Definition 1 has problems because I've seen people claim they can absolutely conceive of four-sided triangles that are colorless and red. Definition 2 has problems that are debatable, so let's settle for Definition 3.

    The refutation of the proposition that "conceiving of being without thought is impossible," seems difficult due to aforementioned reasons. It requires conceiving/thinking about things existing with no conceiving/thinking. Maybe this can be done by making your mind go entirely blank? Meditation? Shooting up anesthesia? But arguably this would just be the absence of consciousness, during which you wouldn't be conceiving anything.

    The conceivability = metaphysical possibility axiom goes back to Hume, and unfortunately, I can't find a good summary. This gets at part of it.

    The corollary, "all metaphysical possibilities are conceivable," isn't necessarily the case. The issue though is that, if something is inconceivable under ideal conceivability (round squares, etc.), then whether or not it is metaphysically possible, or indeed actual, its being so or not being so is equivalent for all observers, and so co-identical. In order for such things to meaningfully be, one must adopt a viewpoint akin to some sort of "God's eye view" that somehow can observe "all that is," by definition.

    This is why I say uncritical metaphysics has become a problem for physicalism, because in very many versions the God's eye view is posited, even as God may be denied.
  • Rasmussen’s Paradox that Nothing Exists


    proof and verifiability are not necessary conditions for being.

    That isn't what I am asserting. I'm talking about the contingencies of being.

    Proof and verifiability are necessary for saying that specific things have being. The claim being taken on is not "being exists," the claim is "being exists independent of X."

    But clearly being is contingent on some things existing, because if absolutely nothing exists, there is no being, unless you want to argue for a pure being as being, being-in-and-of-itself.

    However, it is not a requirement of logic that things that have always existed in nature do so only by logical necessity. So being may always exist in concert with thought, even if it isn't logically necessary that it does so (maybe, I'll get back to that). Indeed, how many of the fundamental elements of nature can be logically derived a priori? Not many.

    So the reverse claim, that being has existed even when X did not, doesn't follow from any necessity. Gravity, for example, perhaps has always existed. Its being may be coterminous with all being, but it isn't so from necessity. But if a claim doesn't flow from logic, then it needs empirical support, and empirical support in the absence of experience is definitionally impossible; thus the problem of being without thought. It is indeterminate.

    If you merely claim being CAN exist without thought, as opposed to the claim that it did, in fact, exist without thought, you don't have the burden of observational data. However, since whether it did or did not makes no difference for all observers, it's arguable this distinction makes no real sense.

    Now, arguably, being without thought is actually inconceivable, and so, depending on how much stock you put in the conceivability = existence argument, being without thought may fail that test. I don't find that line particularly fruitful. It's worth noting here that the argument isn't that you can't conceive of yourself not being, and the world going on; it's that such a conception is necessarily still a thought.

    By definition, being without conception can't be conceived. It's asking you to think non-thought, imagine without using imagination. That would be the crux of that claim.
  • Rasmussen’s Paradox that Nothing Exists


    Yeah, "the world before born" is not a world before thinking. Your examples all derive from experience. Hence, "tell me a time when you had direct access to noumena?"

    I did not formulate thinking = being, the formulation is "the reality or unreality of being without thinking is co-identical for all observers." That is, it is empirically unsupportable. Your evidence to counter this is all from observation.

    For a self-proclaimed "methodological physicalist" who makes no ontological claims (a claim you made previously), you sure do seem pretty concerned with the ontological truth of physicalism.

    Notably, this same problem crops up at the heart of physics. Everett's original MWI does not imply any observable differences from Copenhagen. Neither did Bohm's theory, which was a plus in the climate in which he released it. And, next to all of these, Consciousness Causes Collapse predicts the exact same observations.

    Later versions are trying to figure out whether there is any sort of testable difference between them. So the issue is not just trivial: there is a self-similarity at the heart of the physical sciences, where the limits of observation make ontological interpretations of the physical world's essential nature mere dogma, including vis-à-vis Consciousness Causes Collapse, which would imply no actuality without thought.

    Consciousness Causes Collapse is highly unpopular, but it's unpopular because it offends intuitions, not because it can be empirically undermined.
  • Rasmussen’s Paradox that Nothing Exists
    :rofl:



    The argument isn't generally "thoughts of being = being." It is:

    1. Thought presupposes being.
    2. Being does not presuppose thought; however,
    3. Being existing without thought is not a deducible necessity, a priori. It can be a contingency that nonetheless holds.
    4. Empiricism is, by definition, silent on being without thought (you can't have evidence from experience about the unexperienced).
    5. So being without thought is unprovable and unverifiable.
    6. Whether being sprang into existence with thought or pre-existed it makes no difference for all experiencing beings. For a recent example: in "consciousness causes collapse," if the universe was held in superposition for billions of years, existing as possibility, not actuality, and became actuality much later, we would expect to see the same things either way. The two situations would always and forever be co-identical. This is how someone with the logical chops of von Neumann could hold such a position.
    7. If the states of being without thought existing and not existing are always and forever co-identical for all observers, then there is no distinction and the two are actually the same, because every property of X is necessarily a property of Y (Leibniz's Law).
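    Step 7's appeal to Leibniz's Law can be stated compactly. In standard second-order notation, the uncontroversial direction (indiscernibility of identicals) and the direction the argument leans on (identity of indiscernibles) are:

```latex
% Indiscernibility of identicals (the uncontroversial direction)
\forall x \,\forall y \,\bigl( x = y \rightarrow \forall F \,( Fx \leftrightarrow Fy ) \bigr)

% Identity of indiscernibles (the direction step 7 uses)
\forall x \,\forall y \,\bigl( \forall F \,( Fx \leftrightarrow Fy ) \rightarrow x = y \bigr)
```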

    Ironically, the ocean/waves comparison is one often made in criticisms of physicalism. Physicalism, as it exists for us, is necessarily a set of mental abstractions living within consciousness (waves within the ocean). The claims of Berkeley are more akin to claiming that the ocean can't exist without water, because without water you can't ever observe an ocean.

    The refutations you have offered so far are argumentum ad lapidem, appeals to experienced data.

    "After we came out of the church, we stood talking for some time together of Bishop Berkeley's ingenious sophistry to prove the non-existence of matter, and that every thing in the universe is merely ideal. I observed, that though we are satisfied his doctrine is not true, it is impossible to refute it. I never shall forget the alacrity with which Johnson answered, striking his foot with mighty force against a large stone, till he rebounded from it, "I refute it thus!"

    Johnson was clearly appealing to the felt concreteness of the stone to suggest that it could not be just a figment of imagination. Indeed, the felt concreteness of the world is probably the main reason why people intuitively reject the notion that reality unfolds in consciousness. If a truck hits you, you will hurt, even if you are an idealist.

    However, notice that appeals to concreteness, solidity, palpability and any other quality that we have come to associate with things outside consciousness are still appeals to phenomenality. After all, concreteness, solidity and palpability are qualities of experience. What else? A stone allegedly outside consciousness, in and by itself, is entirely abstract and has no qualities. If anything, by pointing to the felt concreteness of the stone Johnson was implicitly suggesting the primacy of experience over abstraction, which is eminently idealist.

    We have come to automatically interpret the felt concreteness of the world as evidence that the world is outside consciousness. But this is an unexamined artifact of subliminal thought-models. Our only access to the world is through sense perception, which is itself phenomenal. The notion that there is a world outside and independent of the phenomenal is an explanatory model, not an empirical fact. No phenomenal quality can be construed as direct evidence for something outside phenomenality.

    Likewise, appeals to what people have seen looking at stars are also appeals to phenomenal experience.

    If Berkeley is to be refuted, it has to be by a deductive argument. Since naive physicalism is humanity's default (e.g., God(s) creating material trees, rocks, people, etc.), arguably the flat disregard of Berkeley's argument because it is unintuitive represents one of the cognitive blind spots physicalists are otherwise so happy to point out when it comes to woo. So then, for the positivists, it became that talk of the necessarily unobservable, being as itself for example, is simply metaphysical woo. It is certainly a metaphysical, ontological claim, as opposed to any sort of empirical one, that there exists such a thing.
  • This Forum & Physicalism


    The earth’s surface is measurably cooler where it is covered by a richer ecosystem

    I'm lost here. Doesn't this imply that life forms running in rewind would be increasing local entropy, and thus running against the grain of the now contracting universe that is headed towards ever decreasing entropy? That's what I was thinking of anyhow.

    On a side note, if you were an extra dimensional traveler watching our universe run in reverse, I wonder what the opposite of the Big Bang would be? "The Incredibly Slow Warm Up?"
  • This Forum & Physicalism


    What are those? I don't have an encyclopedic knowledge of metaphysics articles, those are the big ones I was aware of. How do they get around it?
  • Rasmussen’s Paradox that Nothing Exists


    Sorry, I just meant "your metaphysics" in the broad, second-person sense, as in "if you don't change your oil, your car will break," which isn't about an individual car.

    So, the problem for claiming that objects are defined only by their characteristics (being red, being round, weighing 1 ounce, etc.) is that, on the face of it, this seems to imply that objects sharing all their traits have the same identity.

    Imagine two red balls that are exactly the same. If things don't have an essential thisness, but are defined only by their traits, then these two balls are the same ball. But this flies in the face of our prephilosophical intuition, because we see two balls.

    A common counter argument here is that the balls do not share all their traits because, there being two of them, they have the trait of being in different locations. One ball is above another, or to the north of another.

    The problem here is that such traits are derived from a thing's relation to other things. I can be above my table when on a ladder, and below my table when I crawl under it. If such derived traits are to be considered a part of a thing's identity, then it follows that I have a different identity when I am above my table as opposed to when I am beneath the table. This also violates our intuition, and makes identity a fairly useless concept for any formal descriptions of events.

    Those are the main two contradictions. A lot has been written on them but the essential problem remains.
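    As a minimal sketch (the class name and traits here are illustrative, not from any particular source), the two contradictions can be modeled in a few lines of Python: if identity is exhausted by a set of traits, qualitatively identical balls collapse into one object, and patching in locational traits makes mere movement change an object's identity.

```python
# Hypothetical sketch of bundle-theory identity: an object is
# nothing over and above the set of its traits.

class BundleObject:
    def __init__(self, traits):
        self.traits = frozenset(traits)

    # Identity is exhausted by traits: same traits => same object.
    def __eq__(self, other):
        return isinstance(other, BundleObject) and self.traits == other.traits

    def __hash__(self):
        return hash(self.traits)

# Two qualitatively indiscernible red balls...
ball_a = BundleObject({"red", "round", "weighs 1 oz"})
ball_b = BundleObject({"red", "round", "weighs 1 oz"})

# ...collapse into one and the same object under bundle identity.
assert ball_a == ball_b
assert len({ball_a, ball_b}) == 1  # the set holds a single ball

# Adding relational/locational traits restores distinctness...
north_ball = BundleObject({"red", "round", "weighs 1 oz", "north of table"})
south_ball = BundleObject({"red", "round", "weighs 1 oz", "south of table"})
assert north_ball != south_ball

# ...but now movement changes identity: the "same" ball moved south
# becomes a different object, indistinguishable from the other ball.
north_ball_moved = BundleObject({"red", "round", "weighs 1 oz", "south of table"})
assert north_ball_moved != north_ball
assert north_ball_moved == south_ball
```

    The point of the sketch is only that both results fall out of the same equality rule; it isn't an argument for or against bundle theory.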

    One way out of this bind is to maintain that two identical objects are distinct if they never occupy the same space at the same time, and that this property of never occupying the same space at the same time as other co-identical objects is itself a trait of the object. It is debatable whether this sort of thing qualifies as a universal or a trope (realist vs. nominalist definitions of traits), but showing why is a whole different line of problems.

    Another way out is to not buy into tropes or universals. Theories that do this claim that all propositions are actually just about either words or imaginings. These nominalist theories don't have a problem here, but they do have a problem with any sort of epistemological realism, because propositions are now not about objects.

    People who claim that names are just sounds for things also have no problem here. This is a sort of super-nominalism. These theories are also not very popular, because they entail that statements about a thing being triangular, square, red, etc. have no truth value. "Triangular" is just a sound; it corresponds to no actual concept. Such theories also generally have problems with modality and truth statements about things being probable.
  • Rasmussen’s Paradox that Nothing Exists


    Not including it opens your metaphysics up to a broad side of attacks that show your theory can't account for numerically different entities with identical properties. You have to say two red balls that are exactly the same are the same ball in two places (at least that is the argument, but it's better than it sounds).

    I will summarize when I have time because I find it pretty neat.
  • Rasmussen’s Paradox that Nothing Exists

    That's fine. It just means you're supposing the truth of something whose truth or falsity is necessarily always and forever in question, and whose truth or falsity makes no difference to any observer, always and forever. The proposition is not a logical truth, and by definition it can have no observable truthmaker.

    But on the upside, it won't have any practical input on your model vis-à-vis empirical vetting.

    I also see no reason to assume that being requires a substratum. I don't think it is even a coherent concept.

    It's not that being has a substratum; the theories posit that objects have a substratum, a thisness that universals or tropes attach to. Otherwise, how is an object not fully defined by its traits? (That objects just are their traits is, of course, popular too.)



    Those examples are the product of thought. Did he write them without experiencing them, throwing them subjectively ex nihilo on to the page?

    Please tell me when you've ever had access to pure noumena? It is patently true that we experience this third-person view of the physical world as a mental abstraction within first-person experience. You are assuming that your map represents something more ontologically primitive than the substrate in which it exists (mental experience). This may very well be true, but it isn't provable. Empiricism can, by definition, have nothing to say about the necessarily unobservable, which being without thought is by definition.

    Now there are still good reasons to go along with physicalism, notably the problems its competitors also have, but this doesn't negate the fact that physicalism necessarily involves positing a model that exists in thought as ontologically more primitive than the substrate it must exist in. Note, this point has absolutely nothing to do with realism. Plenty of idealist ontologies are realist vis-à-vis external objects.

    Edit: BTW, this issue is a pretty good explanation of the Hard Problem. We are asking that we derive our experiences from our abstractions, which are themselves just one facet of subjective experience. This is used as an argument against physicalism, but I think it's actually a good excuse for it. It shows that the Hard Problem is due to an epistemic limit, not a flaw in physicalism. Physicalism thus gets a boost from taking criticisms seriously and not hand waving them away.
  • Rasmussen’s Paradox that Nothing Exists

    Did Feuerbach ever give an example of such being without thought? Obviously he didn't, because anything he set down would obviously have been an object of thought.

    This is just asserting a bare substratum of being as brute fact, despite the fact that evidence of being without thought is impossible.

    Now, the logical positivists had a lot of bad ideas, such as asserting that talk of things existing while not observed is meaningless, but they potentially had a point about this "being as being" talk being meaningless. I am not willing to go that far. Rather, it is simply talk about something that is utterly unprovable and whose reality makes no difference to us. Arguably that fits the definition of meaningless; I don't think it does, because such a thing can possibly be imagined, although arguably it cannot be imagined if we must necessarily imagine an observation of this being as being, not the thing itself.

    Why? Because the existence of being without thought is always and forever, necessarily, the same for all observers whether it exists or doesn't exist. The being of this "being as being," bare substratum of existence, etc., is indiscernible from its not being.

    Now I anticipate the follow up of "does the moon exist when no one is looking at it." This is not a real problem for the point above. Copious amounts of observable evidence exists for the moon having been where it is for a very long while. If we didn't have a moon, and one showed up in the sky tomorrow night, we wouldn't assume it had always been there. A model of unobserved items behaving as they do when observed has plenty of support. It's also a thing that can only exist as an object of thought, a map, not a territory.

    The quote above is making a claim about the map being the territory. It's unprovable and unfalsifiable. Thought is contingent on being. Being's contingency on thought is simply an unanswerable question whose answers are indiscernible from each other.

    This issue is incorrectly compared to solipsism. Solipsism is similar in that it poses questions that cannot be answered, but its answers are discernible across observers. If solipsism is true, then acts of cruelty and kindness only affect one observer, but if solipsism is not true, there is a big difference for the people on the receiving end of such acts. This is in contrast to being as being, where the answer necessarily causes no variance for any observers, ever.
  • Rasmussen’s Paradox that Nothing Exists


    A theoretical particle is by definition thinkable. To theorize is to think.

    Yeah, that was the point I was trying to make. Such a particle is thinkable, but unverifiable. It still has being in some sense in that it can be posited. It falls beyond the limits of empiricism to define being. The unthinkable, arguably, falls beyond the limits of being itself.

    The unthinkable cannot have being for us. The necessarily unthinkable (for all minds) cannot have being, period, unless you posit some sort of absolute God's-eye view of existence as a ground, or some sort of unanalyzable bare substratum of being. The problem with positing the being of things that are (necessarily) unthinkable is that their being and not being will always be co-identical for all parties, and so it is unclear if the bare-substratum-of-being posit is actually meaningful. To put it another way, if you can posit it, it is thinkable, so such things cannot even be posited directly.

    I guess I was not explicit enough: the points about unobservable things are in reference to the limits of what empiricism can accept as having being. The points about things that are unthinkable are about what can even have the potential for being. The distinction between necessarily unobservable/unthinkable things and ones that are only contingently so is the distinction between the hard limits of being and simply the horizons of human knowledge.
  • This Forum & Physicalism


    we should start by defining "mysterious tendencies" as aspects of reality

    Seems to me that there being anything at all is plenty mysterious. It's the central question of philosophy and science, and answers have not been forthcoming.

    But if being itself is a mysterious tendency, how are you going to possibly define the term? You can't define "all that is" in opposition to anything else that is, since anything that is would simply be a component part of being. You can, of course, define being against the idea of nothing, zero, etc., but this is about as far as you can go.

    Definition requires difference. If you start at the very highest level of generality, you have no differences to use in definition and your project is doomed. Logically, it would make more sense to start down at the very smallest differences that can be discriminated. If you wanted to define the visible colors, you would work your way around something analogous to a digital color wheel, tweaking the various shades in small increments until you've laid out a map of all the discernible colors.
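    As a toy illustration of that bottom-up strategy (the step count and representation are my own assumptions, not anything from the thread), the snippet below walks around a digital hue wheel in small, uniform increments, so that each shade is individuated only by its difference from its neighbors:

```python
# Toy sketch: defining the visible colors bottom-up, by stepping
# around a hue wheel in small discriminable increments.
import colorsys

STEPS = 12  # illustrative number of discriminable hue increments

def hue_wheel(steps=STEPS):
    """Return (hue_in_degrees, rgb_triple) pairs spaced evenly around the wheel."""
    wheel = []
    for i in range(steps):
        hue = i / steps  # fraction of a full turn
        rgb = colorsys.hsv_to_rgb(hue, 1.0, 1.0)  # full saturation and value
        wheel.append((round(hue * 360), rgb))
    return wheel

wheel = hue_wheel()

# Each entry is defined relative to its neighbors: the uniform minimal
# difference between adjacent hues is what individuates each shade.
gaps = [wheel[i + 1][0] - wheel[i][0] for i in range(len(wheel) - 1)]
assert all(gap == 30 for gap in gaps)  # 360 degrees / 12 steps = 30-degree gaps
```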

    Now, a whole is not always defined by its parts. We have emergence as a concept. However, a thing's parts are always, necessarily, part of the definition of the whole.

    Pragmatically, you can start wherever you can make meaningful distinctions, since some differences are more relevant than others, and concepts from higher-level definitions work for understanding lower-level ones due to self-similarity and fractal recurrence.

    If you look at theories of parts and wholes in metaphysics, generally it is proposed that things are just the sum of their traits, and so traits are the logical unit of analysis. The primary opposing theories hold that objects possess an essential haecceity, a substratum of "thisness." This substratum of bare being/identity makes a thing different from just its traits, and so neatly solves many problems of identity that come up when you posit that a thing is just the tropes/universals it possesses/instantiates. However, the substratum is unanalyzable, an ontic primitive, and so it can't be where your analysis starts, and it is arguably a vacuous concept entirely.

    Arguably, something along the lines of Aristotelian substance is a third option, but I'm not convinced that this isn't just a blending of the other two theories, one that still leaves the problem of haecceity unresolved. Aristotle claimed the identities of numerically distinct entities with the same substance are self-explanatory. I am not sure they are.
  • This Forum & Physicalism


    Enformy (analogy to thermodynamics):
    In the Enformationism theory, Enformy is a hypothetical, holistic, metaphysical, natural trend, opposite to that of Entropy & Randomness, to produce Complexity & Progress. It is the mysterious tendency for aimless energy to occasionally create the stable, but temporary, patterns we call Matter, Life, and Mind.
    BothAnd Blog, post 28
    Note -- In thermodynamics, what I call "Enformy" (philosophical concept) is known as "Negentropy" (physical term).

    I always find it interesting to think of what happens if you flip the process.

    Thermodynamics is the ground for time. Without the thermodynamic arrow of time pointing towards entropy, we don't have a clear direction for time. The laws of physics work as well backwards as forwards. When we talk of relativity and time slowing down or speeding up, we're really just talking about the relative durations of certain oscillations under the varying effects of gravity, at least as far as the measurement of this effect is concerned.

    So, if we flip the direction of entropy, we have a universe tending towards order. Life and other complex self-organizing systems emerge and begin increasing local entropy. Life forms slowly devolve into less and less complex organisms, sucking up entropy and breaking down complexity. Genomes act as one-way membranes purging information about the environment. Ecosystems are broken down bit by bit. Vertebrates are replaced by single-celled bacteria. Eventually, life is driven to extinction by the unending trend towards order.

    The takeaway for me is that complexity only exists in the gap between order and chaos, on the fringes. It's very dialectical that way. Being and nothing stand in contradiction, so we have becoming, the continuous transition of being into the nothingness of the past. According to Penrose, when you reach either end of the entropy scale, the heat death of the universe or the pre-Big Bang singularity state, the formal mathematical descriptions become increasingly identical. So then, we also have order and chaos standing in contradiction, undefinable in their absolute states, and so we get complexity within becoming as a secondary synthesis.
  • How do we solve a problem like Putin? Five leading writers on Russia have their say.
    Depends on who "we" are. If we're Russians, the answer is something along the lines of the mass protests that chased Mubarak out of office in Egypt. These are not without major risks, but neither is living under a dictatorship with a collapsing economy.

    For the rest of the world, the answer is something more similar to Cold War containment doctrine. Essentially, a long term strategy to isolate them and thus impede their ability to wage war. The current sanctions regime is a good example if it actually holds long term.

    A drastic step up in investment for European nuclear, wind, tidal, and solar power would be the real key. Nuclear energy represents the current best bet for green energy and energy independence from dictators, and yet public opinion, largely shaped by myths and flaws in the nuclear technology of 60 years ago, remains a major obstacle.

    The risk here in terms of grand strategy is that an isolated Russia essentially becomes a vassal of / resource market for China. China has its own significant issues with aggression, lack of political freedoms, etc. China has generally been far more rational than Russia, and less aggressive, but, because it has a dictatorship system without a clear successor to Xi, this could change. And whereas the Russian military is demonstrating that it is not particularly competent, the threat posed by the Chinese military is less clear.

    Of course, a major barrier for any of this is various liberal democracies voting for their own pseudo-strong men who have their own aggressive policies they want to implement.
  • This Forum & Physicalism


    philosophies just cannot be summed up into aphorisms

    This might explain the success of physicalism. It gets a full treatment in science classes. And while these classes don't get into ontology, the abstract systems of physicalism are generally presented as being "what there is."

    Since philosophy isn't generally taught at the pre-collegiate level, and even then normally isn't a prerequisite for most majors, no alternative is laid out. Science and physicalism also get conflated, the ontology for the epistemology, an interpretation of the results for the results themselves.
  • Awareness & Consciousness
    Depends. Sometimes interchangeably, sometimes not.

    Consciousness is often defined more basically. Something is conscious if there is "something it is like to be" that thing. That is, there is a first-person perspective corresponding to it.

    Awareness can better denote metacognition. When you zone out while driving, experiments suggest that sights and sounds still pass through consciousness, but they aren't kept in memory very long, and little executive function is dedicated to them. Metacognition is being aware of your own thoughts and perceptions, and awareness might be the better term here. Cats might be conscious of the noise of a mouse, but are they aware of the fact that they, a subjective viewpoint, are experiencing hearing a mouse?

    However, in normal language this definition is not clear cut. It's arbitrary.
  • Does reality require an observer?

    An optical illusion might be the better analogy.

    Thinking you observe an objective reality is a persistent, seemingly inherited habit of thought. It takes careful observation and experimentation to see the illusion.

    https://www.science.org/doi/10.1126/sciadv.aaw9832

    It's a habit that makes sense from an evolutionary perspective. Sensory data has to seem different enough from imagination, prediction, and other mental processes to make you take the immediate actions you need to take to survive and reproduce. So it makes sense that the types of information most immediately salient to threats, or to opportunities to acquire necessary resources, should acquire a unique flavor that causes us to prioritize them.

    Cooperative species like humans also need to stay in sync enough to cooperate, and an inclination towards positing an objective world helps here too.

    This appears endemic. After all, realism didn't appear when we started getting evidence of the brain/behavior/experience link; it shows up in the earliest records of culture. The recognition of subjectivity as such seems to be the later development, something hidden by a cognitive blind spot.
  • Rasmussen’s Paradox that Nothing Exists

    Maybe I wasn't being clear. I was thinking of entities that are necessarily unthinkable, not contingently so. For example, a theoretical particle that exerts no observable influence on any part of the universe, versus a rock floating through space that no one ever looks at, but could see if they did look at it.

    In the first case, the point is simply that empiricism sets the limits of what can be said to be at what can be observed. This is not to imply that the observation cannot be indirect, or reliant on multiple levels of abstraction and inference, as is the case for quarks and leptons. This holds for the ultimate sense of being "unobservable," that is, being necessarily unobservable as opposed to contingently so, not in the sense of "not having been observed to date" or "possibly never being observed in the future."

    The unthinkable (as in unthinkable for all minds, past, present, and future; necessarily, as opposed to contingently, unthinkable) obviously can't be observed, and has the added lack of reality of being unable to be imagined or deduced, since it can't, by definition, occur as an object of thought. It seems empiricism and idealism must declare such things to be meaningless or lacking in any ontic status.

    I don't see what this is supposed to show. One might argue that if thinking and being are the same then we should be able, a priori, to deduce all that is.

    It's hard for me to imagine the contents of mind existing outside of experience. If everything is thinking, I'm not sure what reality would look like, unless perceiving is covered under "thinking?"

    I'm not sure the claim that all things should be deducible holds for most idealist ontologies, which do posit that all reality is mental, but perhaps not that all things are thinking. Modern idealist ontologies generally suppose that mental objects behave in the manner we observe in the "physical" sciences. These behaviors/properties cannot be deduced; they require inductive inference to derive.

    The argument I normally see for these ontologies is, ironically, that physicalists introduce too much epistemological baggage and speculation by positing that a set of mental abstractions (i.e., the explanatory model of the physical world we experience in thought), which is necessarily a second-order type of mental object, is actually what should hold primary ontic status, over and above the more concrete and accessible world of subjectivity, of which mental abstractions are just a part.

    I say ironic because in the modern context idealists are generally stereotyped as the more speculative, less analytical types, but the newer forms are generally arguing from Occam's Razor, the formal logical consistency of the ontology, and parsimony.

    I'm not ready to switch camps or anything, but it is notable that idealism has just one ontological primitive, experience, and computational/informational ontology also has just one, whilst physicalism now has an absolute zoo of them: hordes of particles of virtual and non-virtual varieties, various forces and fields, all of which are only accessible using multiple layers of abstraction. The two aforementioned ontologies also have the benefit of looking at the Hard Problem and the role of the observer in physics and saying "well yeah, that's what you'd expect."
  • Rasmussen’s Paradox that Nothing Exists



    This is sort of all aside the point, because my comment was specifically about the reference to things that can never be thought of, not things that we didn't think of until X point in time.

    Whenever we discover something new, something previously unknown, we have an example of something that is but was until then not thought and not perceived.

    I get what you are getting at here but this:

    It does not come into existence when it is perceived, it already was, we simply become aware of it. In fact, at the astronomic level it may no longer exist. What we perceive is what was but no longer is.

    does not follow as a necessity. It's simply a good assumption that needs to be confirmed. We think things we perceive for the first time already existed because empirical analysis tends to allow us to find evidence of their prior existence.

    So, while we didn't have observations of bacteria for much of our recorded history, we had a huge historical record to look back on as evidence that bacteria existed before we knew about them. The same holds for electrons, gravity, etc. In cases where this evidence is not as clear cut, we have the general pattern of having found such evidence in the past to support our inference that the newly discovered entity didn't just appear when we perceived it.

    On the other hand, we generally don't have good data to suggest that things tend to appear ex nihilo, at least not at the scale of everyday objects (quantum foam arguably being a notable counter-example, and quantum mechanics being an example where repeated experimental evidence suggests that observation does cause a state to exist that did not exist prior to observation).

    That said, all these inferences are the results of prior experience and the shared experiences of others. They aren't the products of deduction. It is a guideline based on past experience itself, the results of observation.

    If a second, pink moon appeared tomorrow and began orbiting the Earth, it would not make sense to assume it had always been there despite it being observed for the first time just recently.
  • Non-Physical Reality


    I think you are mixing up your concepts here. Things that are observed are not in superposition; they have distinct values. Superposition is pre-observation, including in Objective Collapse. Only in MWI do observed particles remain in superposition.

    Second, Objective Collapse does predict different outcomes from standard quantum mechanics. That is what makes it exciting; different forms should be testable in the fairly near future. GRW has already been tested; it just happened to be falsified by the results. You may be thinking of Pilot Wave theory, which does not differ in its predictions.

    Objective Collapse also has nothing to do with conscious observers.

    The problem of conscious observers seems unfalsifiable and unprovable. To know the results of any experiment requires someone to see the readouts of the test. However, this problem isn't unique to QM; it's true of all empiricism.

    On a related note, there is an argument to be made here that physicalism is tying itself into knots throwing ad hoc explanations of the apparent absence of an objective reality independent of observation (i.e., one observer's observations change what another will observe) at the wall, hoping something sticks. However, if idealist ontologies have a leg up in this arena, they still seem to have a problem explaining why the extrinsic mental representations of conscious organisms take the forms they do, so they have their own major problems even if they can deal with QM and the Hard Problem more clearly.

    Since recent experiments have left even less doubt about this "quantum weirdness," it seems now that the world is non-local and not objective. This isn't a problem for scientific inquiry, but it does show the relevance of Hempel's Dilemma.
  • Rasmussen’s Paradox that Nothing Exists


    Progress toward the Absolute, according to Hegel was completed by Hegel.

    I don't think this is a correct interpretation. The Phenomenology and the Greater and Lesser Logics serve in defining Absolute Knowing, and indeed it's arguable that the main goal of the Phenomenology is to bring the reader to the standpoint of Absolute Knowing, the place where philosophy transitions from the "love of wisdom" to actual "wisdom," the elevation of speculation to science. This concept of Absolute Knowing is not identical with the Absolute. Hegel has a number of "Absolutes" that exist within his "circle of circles" of systems (e.g., the Concept, the Absolute Concept).

    I don't think, although some commentators argue this, that it represents an omniscient knowledge. Nor does it represent some sort of special enlightenment Hegel had (normally a position of critics).

    For one, the Preface to PhS is explicitly a call for a shift in the prevailing perspectives of the day, but the Absolute represents a process occurring across being and would be inclusive of all conscious beings. Towards the end of the Preface, Hegel reflects explicitly on the intuition that his work won't be immediately grasped by many, and so the process of the realization of the Absolute won't be completed.

    The problem with the omniscience version of Absolute Knowing is that it appears to be contradicted throughout Hegel.

    The problem with the narrow view, of knowledge in a broad sense essentially being completed with Hegel, is that it totally ignores his "truth is the whole" epistemology, which was obviously incomplete at the time of his writing. It also clashes with his commentary on the sciences of his day.


    A good working definition I've found is:

    Hegel's doctrine of thought, philosophic thought, is given in the category of absolute knowledge, which is arrived at through the procedure of the Phenomenology of Spirit. The conception is thus based directly upon our actual knowing experience, and claims to give us an account of thought as it essentially is. Thought, as here defined, is genuinely objective, transcending the relativity of individual experiences and being the determination of things as they are in themselves. But this is not to say that reality is identical with abstract cognition.

    For thought finds its capacity to express the real in the fact that its universals are always the syntheses of differences, and not the blank universals of purely formal logic. Actual living thought includes within itself the data of so-called intuitive perception, of feeling, of volition, of cognition, and it is adequately conceived of only as this unifying principle of experience; it is the living unity of mind, the one reason which appears in every mental activity.

    Therefore, when Hegel teaches that thought is conterminous with the real, he is simply stating the doctrine that experience and reality are one.

    I like Hackett's treatment more but it's not so neatly summarized.



    Anyhow, that's all tangential. I think the main point would be, what reality would a thing have that can't be thought?

    Thinking has content. It is not just the movement of thought thinking itself. What there is, being, is not limited by what has been thought. If there are limits to human thought, that is, if we are not omniscient, then the limits of thought are not the limits of being.


    Physicalism is necessarily an ontology where an abstraction (physical reality) is accepted as more basic than perception. However, for any knower, subjective experience is obviously going to appear more ontologically primitive than the abstraction of the physical world.

    Now we know appearances can be deceiving, and we have plenty of good reasons to believe in physicalism, but this does pose a problem for physicalism in that it has to reduce what by all appearances is something more ontologically primitive (subjective experience) and fit it satisfactorily into an abstraction that is itself necessarily a facet of subjective experience. Hence, we have the "Hard Problem," where it appears to be impossible to derive the experiences of the subject from the abstraction the subject experiences (the model of the physical world).

    If those sentences seem circular, it's because the attempted reduction is circular.

    Upon reflection, the Hard Problem shouldn't be at all surprising, because it's essentially demanding that an abstraction somehow account for sensation despite the fact that thinking through an abstraction is itself a sensation (and just one type of sensation).

    Now if something can't be thought (and thus also can't be perceived), it's hard to see what sort of being it can have. It can't exist as part of subjective experience, obviously, but it also can't exist as a physical entity that has any sort of support for its existence, as empirical evidence for its existence would be impossible since it cannot be thought of.

    To be sure, there might be things that humans can't think of that aliens or later forms of life can. There might also be things we can't observe currently that new technologies will let us observe in the future. I don't know, though, if there can be things that exist which can never be observed or thought of by anyone. The existence of such things would, for all observers, forever be identical with their non-existence. But if two things are definitionally identical, as they must be for all observers in this case, then every trait held by state X (the thing's being) must also be held by state Y (the thing's not being) for all targets of the proposition. This seems to violate Leibniz's law of identity. The thing could only be as an object within a third-person abstraction of "absolute/noumenal being," but such abstractions by definition only contain things that can be thought.
  • Rasmussen’s Paradox that Nothing Exists


    It follows from this "must" that if something cannot be explained it must not exist. It might be argued that even though there are things that cannot be explained now they must still have an explanation that in time can be provided. But this assumes that there are no limits to human knowledge. Such metaphysical privileging should not be accepted on faith.

    Hegel's theory is not only about the movement in time, but in place. It is Eurocentric. In addition, our thinking is not simply in terms of forms of thought, but in terms of specific concepts that change. Hegel knew nothing of relativity or quantum mechanics, both of which shape our thinking in ways that they could not have shaped his understanding of reality.

    I'm not sure this is really a fair assessment. Hegel was writing before Darwin's theory of evolution, but his theories are certainly applicable to biology as a whole (see below). Hegel likely wouldn't have had too much of a problem with QM or relativity. His vision of progress toward the Absolute as historical in human history doesn't have to shift that much to incorporate contemporary theories of life, particularly ones centered around biosemiotics (Hegel is a precursor of semiotics to some degree), information, and life as a self-organizing, far-from-equilibrium system.

    Hegel was a German philosopher who was a major figure in the philosophical movement known as German idealism. In this study I will argue that Hegel’s philosophy has similarity to the self-organization theories of Prigogine and Kauffman, and is therefore an idea in advance of its times.

    The development of thought and thing is at the core of Hegel’s work. In The Phenomenology of Mind, he tackles the development of recognition and being, subject and object, and self and other, from simple to complex forms. In The Science of Logic, Hegel deals with the progress of categories from abstract to concrete, and from pure being to absolute idea. In The Philosophy of Nature, his interest is in how nature evolves through the mechanism of self-organization. Hegel was writing before Darwin proposed the theory of evolution, and his dialectic is aimed at analyzing and describing development in the logical sense. The common feature of these works is their analysis of the fundamental structures by which order is generated.


    In Hegel’s view, nature develops logically. Nature itself is a system of self-organization through the random motion of the contingent.

    Hegel would like to say that the basis of life is the non-equilibrium self-referential structure. In more modern terminology, we could interpret this as meaning that the first organism emerged from interaction between high polymers.

    Kaneko proposes a model of complex systems biology, which I will argue, Hegel was proposing in his metaphysics 200 years ago. Kaneko conceptualizes life as a living system that develops when interaction between the elements in a system is sufficiently strong. Living creatures exhibit flexibility and plasticity through fluctuations in these elements. Complex systems biology uses a dynamical systems approach to explain how living things acquire diversity, stability and spontaneity.

    https://journals.isss.org/index.php/proceedings59th/article/view/2658#:~:text=In%20Hegel's%20view%2C%20nature%20develops,%2Dequilibrium%20self%2Dreferential%20structure.

    I've seen Hegel get a fair amount of play in cognitive/neuroscience and biology journals recently, and in philosophy of science across fields. My results might be biased because I sought these types of takes out, but I remember reading Incomplete Nature, The Vital Question, and Synch, and sections of What is Real? and Chaos, and thinking, "this sounds very Hegelian." And sure enough, when I looked, people had already published the papers I was thinking of writing, with the benefit of all the context, knowledge, and prestige of actually having done a PhD in the relevant area.

    The dialectic is potentially a more holistic model through which to view Kuhn's work on paradigm shifts and scientific revolutions. Shifts tend to arise when contradictions are identified, and either the paradigm sublates these or is overturned, but elements of it always survive.

    It also might be a better model than natural selection for analyzing developments in human institutions, since extinction isn't a concern in the same way, and neither competition nor cooperation works the same way dynamically. Fukuyama's End of History is a good example.
  • Rasmussen’s Paradox that Nothing Exists


    In my systems science/hierarchy theory view, the whole is produced by what it produces. The whole shapes its parts - it contributes the downward-acting constraints. But the parts then construct the whole - they contribute the upward-building material being, the suitably shaped "atomic" components.

    So it is a bootstrapping or cybernetic causal model. And if it sounds unlikely, it is at least less unlikely than creatio ex nihilo.

    I've come to a similar conclusion, but you've stated in two paragraphs what had always taken me two pages, so I very much appreciate it.



    This is an excellent point. My struggle with this line is that realism vis-à-vis the independent existence of external objects, and the idea of there being such a thing as accessible being-as-itself (i.e., being not altered by the faculties of human thought, Kant's noumenal), is the default of human intuition, and may itself be the result of a cognitive blind spot.

    If you look at early human societies, or cultures that existed into the 19th-20th centuries with very low levels of technological development and little to no exposure to modern science, generally what you find is realism. It is a realism with a lot of supernatural elements, but still a fairly familiar world when compared to idealism.

    Certainly, this realism has been challenged by advances in philosophy and the sciences alike (e.g., Kant's transcendental idealism or Quine's points on epistemology in philosophy; most notably quantum weirdness, particularly the delayed Wigner's Friend experiments, in the sciences). Arguably these challenges have already been met after some readjustments. My fear is that my strong predilection toward realism is actually the result of a cognitive blind spot, not evidence that such realism is actually warranted.



    In fact energy isn’t the ground level of physicalist ontology anymore. The modelling has moved on to information-entropy as the dichotomy that best captures the wholeness of reality’s foundations. So a structuralist account is replacing a materialist account.

    I am a big fan of information theories, but this might be overselling their adoption in physics. I will admit that I'm not super up to date on recent papers, but it seems like through the late 2010s there were still a lot of people writing off information-based ontologies using either Bell's "information about what?" argument or calling them essentially crypto-logical-positivism (i.e., a way to slip in "only observations exist"). I think both these critiques misunderstand the theories at a fundamental level, but they still seem fairly prevalent.



    In a phenomenological-hermeneutical jargon, these norms constitute a horizon, a perspective in which we can make anything intelligible to ourselves.

    This part of the quote reminds me of Quine's web of belief. For example, that you can get around relativity and keep absolute time and space if you're willing to accept shrinking and growing measurement tools and objects as real facets of the world.
  • The Unequivocal Triumph Of Neuroscience - On Consciousness

    Wouldn't viruses be non-living things that store genetic history? (Supposing they don't fall under the definition of living things).

    It is an amazing fact that life stores information in such a way, but it's also a tautology that only life does this, right?

    I can see myriad reasons to keep synthetic entities that live in computers out of biology. You need a whole different set of skills to study them. They don't (currently) behave the same way. A computer virus is at best analogous to a prion, not a virus.

    That said, I don't think they are as wholly dependent on intentional human action as people like to think. To be sure, they are obligate parasites/mutualists, but the same can be said of all the living things in the human biome that don't exist in other species.

    Countries have officially stated that an attack that takes out IT infrastructure will be considered an act of war, up to the level of nuclear retaliation in extreme cases. The Stuxnet attack destroyed physical infrastructure in the same manner that a bombing run would have. In Belarus, anti-Putin/Lukashenko partisans have firebombed rail switching stations to stop the movement of men and materiel to Ukraine, but the more effective action has been the hacking of rail signals, which forced the trains to move only during the day, and slowly, to avoid collisions.

    Millions of humans would be at risk of starvation or dying from lack of medicines without the supply chains organized via the internet. The rapid removal of the internet would result in a widespread economic depression that would dwarf the Great Depression. Realistically, there isn't any stopping the connections through which a digital entity can reproduce across the world.

    People seek psychotherapy and even commit suicide over addictions to social media or video games. They can't cut these things, whatever they are, off from replicating. Huge efforts are spent to track down and destroy "revenge porn," and media associated with child sexual abuse. Humans who copy these bits are subject to harsh prison sentences, violence, and ostracism, but the bits keep replicating despite our best efforts.

    A paper I read trumpeting some of the benefits of looking at digital information as more similar to living things pointed out the lack of imagination in modern IT crime-fighting methods. To be sure, these people deserve punishment, but you don't try to get rid of an infection by targeting it at the host level. Algorithms that detect and destroy such media, replicating themselves across servers and hunting their prey as they go, are going to be way more effective.

    Nor are programs now restricted in physical space. The internet allows them to traverse the world, quite literally at the speed of light. With the advent of smart consumer products, internet connections will become ubiquitous, something found in cars, thermostats, refrigerators, dish washers, etc.

    Currently, most digital entities lack an important adaptive component that would make them more similar to life. Machine learning is making this a more prevalent factor though. Self-replication is already old hat for computer viruses. Once they gain the ability to learn from the environment and change their structure in response, I expect we'll see a lot more life-like behavior from self-replicators living in the "wild" of the internet. Likely a sort of reverse domestication.

    The line between the living and the digital is also blurring at the physical level. You now have Unicode text files and JPG images being written to DNA. A bunch of companies promising DNA hard drives have sprung up. DNA gives you robust long term storage at extremely low sizes (as low as a petabyte per cubic ml, although current efforts yield just 165 terabytes at that size).

    It's hard to say when we'll get the really cool sci-fi stuff, but it's certainly an area progressing exponentially. 90% of the world's data was created in about the last two years. We rapidly went from one hour of video going up to YouTube for every hour to 500 hours.

    So, interestingly enough, our data is now a major contributor to global warming, and reducing the risks of global warming is probably going to require new data techniques that will make data entities even more life like (AI controlled thermostats to reduce carbon use, self driving subscription car services to reduce vehicle size, DNA storage for data, etc.)

    It'll be pretty wild. Hopefully the stuff doesn't get the ability to physically reproduce independently anytime soon; that sounds like a sci-fi apocalypse waiting to happen. I can almost hear the creepy transhumanist bioterrorist ranting about how "man was always meant just to be the womb for the machine" in the Hollywood version of mammals' replacement by a newly ascendant form of life.
  • The Unequivocal Triumph Of Neuroscience - On Consciousness


    I can't speak for The Information since I've only read a short bit of it, but his other book Chaos, is pretty good. He keeps the narrative hopping along pretty well, despite it being an easy topic to get bogged down in.

    It's light on the mathematics and jumps around in journalistic snippets a good deal, but weaves these stories together into coverage of the different areas of chaos theory.

    I almost got The Information, but I ended up swapping it out for the Ascent of Information by Scharf (also haven't started that yet).
  • The Unequivocal Triumph Of Neuroscience - On Consciousness

    It would certainly be philosophy. I think people have come around more on the idea that philosophy is still essential for science. Paradigm shifts in science are almost always shifts in philosophy, in how we think about the same data. Day to day, "regular" science within a paradigm is simply accepting a prevailing set of philosophical assumptions and either setting them aside for now to solve a more tractable problem, or attempting to justify said assumptions.

    Tests of Bell's Theorem and non-locality have been called "experimental metaphysics." Tests to verify Objective Collapse vs Pilot Wave vs Holographic Universe vs Many Worlds are essentially experimental ontology.

    The two are closely related, but there is some friction. This is apparent when biosemiotics is at its worst, in articles where the same system has all of its parts put into every role in the semiotic triad, with rebuttals flying around based solely on whether or not Peirce is being interpreted correctly.

    Such appeals are anathema to science; the great minds of science are often wrong. Whether a theorist is being accurately interpreted is more of a question for philosophy journals; the point of concern in the sciences should be the predictive and explanatory power of research. The worst cases of this phenomenon can be seen in old Marxist journals, where the words of the great prophet seem to take on the weight of the Koran.

    But very notably, this is not anything unique to biosemiotics, that's just the topic here. Physics had an even worse problem with "Copenhagen" being the received dogma, the only way to interpret quantum mechanics, and some elevated Bohr to a sort of prophet status too.

    Young physicists risked having their careers destroyed if they published in quantum foundations, the study of the ontological interpretation of QM. People were literally hounded out of their jobs for questioning the orthodoxy of the day. It's perhaps the legacy of this craziness that results in us having such a wild mishmash of QM theories today. No one wants to go too hard in taking down a theory, lest they be accused of continuing the sins of the past.


    Yeah, physicists can't agree on any interpretation of quantum phenomena. That's why I mentioned the informational approaches as just one group of theories. Field theory is incredibly successful, but has the major issue of one of its core theoretical predictions being so wildly off the mark that field theorists themselves have called it "the worst prediction in the history of science."

    https://en.m.wikipedia.org/wiki/Cosmological_constant_problem

    This is the sort of problem all theories have though. Either they are incredibly counter intuitive, or they predict things that don't happen, usually both.
  • The Unequivocal Triumph Of Neuroscience - On Consciousness


    "Transmitting meaning" to knowers requires that there be a knowing subject. Rational subjects can draw conclusions based on inference. I don't see how that is relevant.

    I don't have time to go into all the responses right away, but I think the point is particularly relevant. What the passage seemed to be suggesting is that DNA, letters, mathematical symbols, etc. are unique in their ability to store meaning, particularly meaning that can somehow represent violations of physical laws.

    The point here is that such meaning can be derived as the result of signals from non-living systems. Indeed, sometimes the random noise is the signal. For example, when I used to work on combat training sims for National Guard units deploying to Afghanistan, I might spend a day driving around high-clearance 4x4 trails on the base with our comms equipment listening not for the music we left playing on a channel, but for static. The random noise was the signal; it held information on the boundaries of where the equipment worked. That's all I cared about at the time, but the boundaries of the equipment's range were also irregular. The density of the forest, hills, etc. affected the signal, so the noise also contained information on the landscape.

    The point is that meaning isn't contingent on something inherent to the symbol being used. The quoted part above rightly mentions the "rational subject" drawing inferences, whereas the idea I don't necessarily agree with is the symbols themselves holding any special role here. The symbols only gain the special properties attributed to them due to their role in computation.

    More to the point, I don't think symbols recording things inaccurately or in ways that violate physical laws is anything special at all. Such violations are the natural result of any computation that uses compressed information that has lost fidelity and is subject to error. It's also natural to computation that uses simplification to reduce energy use.

    Organisms necessarily take in a tiny fraction of the information they have access to. Recording just the entire phase space of one mole of hydrogen in one liter of gas would take an incredible amount of storage, not to mention you'd need sensory organs able to discriminate between different microstates to record such information.

    Organisms have sensory systems that bring in an extremely small amount of information about the environment, with what information can even be discriminated having been shaped by natural selection. The default in complex organisms with nervous systems is to then subject this small amount of information to a bunch of computational analysis and also compression for storage. Low levels of discrimination result in a lot of information in the environment being treated as synonymous. (This happens at the level of bacteria too; the cell membrane is designed to treat much of the variance in the environment as identical.)

    For example, if you saw this text in one shade of red on a background of a different, but similar enough shade of red, one that the human vision system would be incapable of discriminating from the first shade, it'd be meaningless, despite the underlying code representing the text being almost identical aside from a small slice of HTML code for the background color.

    Organisms have to be very selective about what information they bring into sensory systems because information is energy. This means they have to rely on computation. Computational systems don't necessarily follow the laws of nature. They can compress and abstract information, formal relations, etc., to make more simplistic models of the world. When they do this, they are going to produce inaccurate models of the world that violate physics.

    Newtonian physics is an example of a system mostly developed by the study of non-biological objects. It's the rational observers who derive the model, not anything intrinsically meaningful about the objects. It ends up violating the laws of physics, but it took a long time to recognize this because human sensory systems don't bring in much data from the very small scales at which Newtonian physics breaks down.

    A machine learning algorithm is also able to generate inaccurate pictures of physics from data.

    To sum up: the whole phenomenon of this disconnect between symbols and reality doesn't suggest a black box cut to me at all. It's a logical consequence of data compression and how computation works that computations can be inaccurate. Natural selection won't necessarily eliminate inaccurate computation; the models employed by organisms to navigate the environment just need to be "good enough." Natural selection will never push life into an optimal computational structure for representing the world (i.e., computation will always be inaccurate) due to the fact that the minimum algorithmic entropy (Kolmogorov complexity) for representing X is not computable due to logical contradiction (e.g., halting problems with respect to computers).
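    As a toy sketch of the trade-off described above (the "signal," the quantizer, and all the numbers here are my own hypothetical illustrations, not anything from the works discussed), lossy compression of observations always buys cheaper storage at the price of a guaranteed mismatch between the model and the environment:

```python
# Toy illustration: compressing samples of a smooth "physical" signal by
# quantizing them to a few discriminable levels shrinks storage cost but
# guarantees the reconstructed model of the environment is inaccurate.
from math import sin

samples = [sin(0.1 * i) for i in range(100)]   # the "environment"

def quantize(xs, levels):
    """Lossy compression: snap each sample to the nearest of `levels`
    evenly spaced values in [-1, 1]."""
    step = 2.0 / (levels - 1)
    return [round((x + 1.0) / step) * step - 1.0 for x in xs]

coarse = quantize(samples, 5)    # heavy compression: 5 distinguishable states
fine   = quantize(samples, 257)  # light compression: 257 states

def max_error(xs, ys):
    """Worst-case disagreement between the world and the model of it."""
    return max(abs(a - b) for a, b in zip(xs, ys))

# The more heavily compressed model is always the less accurate one:
print(max_error(samples, coarse) > max_error(samples, fine))  # True
```

    The coarser the discrimination, the more distinct states of the world get treated as synonymous, which is the point about sensory systems above.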
  • Ukraine Crisis

    You keep setting up these fake contradictions for yourself. "Biological weapons are not particularly good as military weapons," and "biological weapons can be highly effective at killing civilians," are not mutually contradictory.

    Weaponized anthrax would work great at causing mass death used the right way, e.g. feeding it into the ventilation system of a crowded building, crop dusting a crowded sports stadium, etc.

    It does not work particularly well when the target is dispersed military units who are outdoors, particularly if they are expecting an attack. Your crop duster will easily be shot down by MANPADS, or your shell will disperse far less effectively outdoors. You might kill some people, but a conventional shell likely would have done the trick too.

    I suppose biological weapons could have a very effective military use, but the scenario there would have to be something like sneaking a spy into the enemy organization and having them somehow get access to a barracks air conditioning system, or something of that nature.
  • The Unequivocal Triumph Of Neuroscience - On Consciousness


    Ha, sorry, that should read:

    "That is, they lack an essential thisness of identity unique to them."

    They fail the criterion of Leibniz's Law of indiscernibility, which is:

    If, for every property F, object x has F if and only if object y has F, then x is identical to y. Or in the notation of symbolic logic:

    ∀F(Fx ↔ Fy) → x=y.

    It's similar to an easier-to-understand problem in metaphysics. Bundle theories, the metaphysical theories of objects where an object is totally defined by the tropes or universals it possesses (tropes in the case of nominalists, universals in the case of realists), have a hard time with multiple instances of completely identical objects. They are numerically distinct, yet identical in all their traits. Think of two green balls that are exactly alike. If they are alike in every way (shape, color, chemical composition, etc.), how can they be two different things?

    You can claim that they do have different properties, properties like Ball A being north of Ball B, or Ball B being below Ball A. The problem here is that these properties are all derived properties; they are contingent on a relationship. Such properties do not define a thing. If they did, then your identity would change as you move, such that the "you" inside your home is not the "you" once you leave, due to a shift in derived properties (note: this whole issue is only a problem for bundle theories that posit that an object is just a collection of traits). So, bundle theorists seem to be stuck saying Ball A and Ball B are actually the same ball, appearing in two places. This is a blow to bundle theory, but some will still maintain that always having a different spatio-temporal location is enough to maintain a discernible identity as far as Leibniz's Law is concerned.
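    As a toy sketch (the balls and their property lists are hypothetical illustrations, and a frozenset of labels is obviously a crude stand-in for a trait bundle), the bundle-theory picture and Leibniz's Law can be written out in a few lines:

```python
# Toy model of a bundle theory: an object is nothing over and above its
# set of properties, so Leibniz's Law reduces to set equality.

def indiscernible(x: frozenset, y: frozenset) -> bool:
    """Leibniz's Law: for every property F, Fx if and only if Fy."""
    return x == y

ball_a = frozenset({"green", "round", "rubber", "radius=5cm"})
ball_b = frozenset({"green", "round", "rubber", "radius=5cm"})

# Under a pure bundle theory the two "numerically distinct" balls
# come out as one and the same object:
print(indiscernible(ball_a, ball_b))  # True

# Adding a relational (derived) property like location restores
# discernibility, but at the cost of making identity shift with position:
ball_a_located = ball_a | {"north-of-ball-b"}
print(indiscernible(ball_a_located, ball_b))  # False
```

    The second comparison is exactly the move the bundle theorist makes with spatio-temporal location, with the cost noted above: identity becomes contingent on relations.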

    Now in particle physics, we have the same problem; particles are completely identical. Worse still, while classical objects don't share the same location, that is not always the case on quantum scales. Thus, it ends up appearing that particles have no identity at all outside the type of particle they are. Or, if they are just excitations in a field, they are all the same excitation.

    Lately though, there has been some debate as to how indiscernible particles really are. In some cases, they may not be fully indiscernible; the jury is out.

    For the most part though, we are told not to assume that an electron we trapped in a box will remain the same electron when we open the box. Or, another proposed way to look at it is to say there is only one electron. The electron is not affected by time, and so it can be everywhere at once.

    For a bit more detail:

    French & Redhead’s proof is based on the assumption that when we consider a set of n particles of the same type, any property of the ith particle can be represented by an operator of the form Oi = I(1) ⊗ I(2) ⊗ … ⊗ O(i) ⊗ … ⊗ I(n), where O is a Hermitian operator acting on the single-particle Hilbert space ℋ. Now it is easy to prove that the expectation values of two such operators Oi and Oj calculated for symmetric and antisymmetric states are identical. Similarly, it can be proved that the probabilities of revealing any value of observables of the above type conditional upon any measurement outcome previously revealed are the same for all n particles.


    The original paper.
    An easier write-up.
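    The quoted result is easy to check numerically for the simplest case: two particles with a two-dimensional single-particle Hilbert space. This is a minimal pure-Python sketch (the particular Hermitian matrix O is an arbitrary choice of mine), showing that O ⊗ I and I ⊗ O have identical expectation values in both the symmetric and antisymmetric states:

```python
# Check French & Redhead's observation for n = 2 particles: an observable
# "of particle 1" (O kron I) and "of particle 2" (I kron O) have the same
# expectation value in symmetric and antisymmetric two-particle states.
from math import sqrt

def kron(A, B):
    """Kronecker product of two matrices (lists of lists)."""
    return [[a * b for a in row_a for b in row_b]
            for row_a in A for row_b in B]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def expectation(M, v):
    """<v|M|v> for a normalized state vector v."""
    Mv = matvec(M, v)
    return sum(v[i].conjugate() * Mv[i] for i in range(len(v)))

I2 = [[1, 0], [0, 1]]
O  = [[0.3, 0.7], [0.7, -1.2]]  # an arbitrary Hermitian single-particle observable

O1 = kron(O, I2)   # observable of the "first" particle
O2 = kron(I2, O)   # observable of the "second" particle

s = 1 / sqrt(2)
sym  = [0, s,  s, 0]   # (|01> + |10>) / sqrt(2)
anti = [0, s, -s, 0]   # (|01> - |10>) / sqrt(2)

for state in (sym, anti):
    v = [complex(x) for x in state]
    print(abs(expectation(O1, v) - expectation(O2, v)) < 1e-12)  # True, True
```

    Nothing measurable distinguishes "particle 1" from "particle 2," which is the formal content of the indiscernibility claim.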

    Now, keep this lack of haecceity in mind and think of how different particles might be seen to function very much like the way letters function in a text (a "T" is always a T; the specific T is meaningless, only its role in a word matters). You get a view of reality more similar to how we tend to think of language, than the view we get of particles as tiny balls bouncing around (which is itself just an abstraction).
  • The Unequivocal Triumph Of Neuroscience - On Consciousness

    It's a hard concept to wrap your mind around. These theories look at information as the ontological basis for reality. Information is more real than electrons; electrons are just an abstraction we've created to understand how the equations we use in physics make sense. We think of them as little balls with charge, but that's not actually what they are in any current physical theory.

    A common point of confusion is the difference between information defined as what is transmitted in a channel (web traffic, Morse code, etc.) and information as essentially ontic. The same concepts are in play, but in information ontology we're looking at the maximum amount of information that can be extracted by any observation, for all systems.

    The Shannon entropy of a signal in a channel can be much less than the total Boltzmann entropy of the channel; in fact, it almost always will be. There is a lot of confusion around Shannon and Boltzmann entropy, because the two formulas are essentially the same equation, differing in the base of the logarithm (log₂ versus the natural log, with Boltzmann's constant as a prefactor in the thermodynamic case). They aren't the same thing though, as Shannon entropy doesn't follow some of the other laws of thermodynamics.
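
    As a tiny illustration of that point (my own example, not from any source above): the two formulas, H = −Σ p log₂ p and −Σ p ln p, differ only by the constant factor ln 2, i.e. by a choice of units (bits versus nats), while everything substantive about the distribution is the same.

```python
import math

# A probability distribution over a 4-symbol alphabet
p = [0.5, 0.25, 0.125, 0.125]

# Shannon entropy in bits (log base 2) vs. nats (natural log):
# the two differ only by the constant conversion factor ln(2)
H_bits = -sum(pi * math.log2(pi) for pi in p)
H_nats = -sum(pi * math.log(pi) for pi in p)

print(H_bits)                 # 1.75 bits
print(H_nats / math.log(2))   # same value; the base is just a unit choice
```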

    That leads to even more confusion around information ontology. People are used to thinking of the Shannon Entropy as a portion of a physical channel associated with a signal, something that is less than the Boltzmann entropy.

    With information ontology though, we're not talking about that. We're talking about the information content of things (the surface of space time, fundamental particles, etc.). This article on the information content of the observable universe might help:

    https://aip.scitation.org/doi/full/10.1063/5.0064475

    These radical theories are based on the principle that information is physical, the information is registered by physical systems, and all physical systems can register information.25 Accordingly, there is a given amount of information stored in the universe, regardless whether it is observed or not. The proposed existence of this information imposes some fundamental questions about it: “Why is there information stored in the universe and where is it?” and “How much information is stored in the universe?” Let us deal with these questions in detail.

    To answer the first question, let us imagine an observer tracking and analyzing a random elementary particle. Let us assume that this particle is a free electron moving in the vacuum of space, but the observer has no prior knowledge of the particle and its properties. Upon tracking the particle and commencing the studies, the observer will determine, via meticulous measurements, that the particle has a mass of 9.109 × 10–31 kg, charge of −1.602 × 10–19 C, and a spin of 1/2. If the examined particle was already known or theoretically predicted, then the observer would be able to match its properties to an electron, in this case, and to confirm that what was observed/detected was indeed an electron. The key aspect here is the fact that by undertaking the observations and performing the measurements, the observer did not create any information. The three degrees of freedom that describe the electron, any electron anywhere in the universe, or any elementary particle, were already embedded somewhere, most likely in the particle itself. This is equivalent to saying that particles and elementary particles store information about themselves, or by extrapolation, there is an information content stored in the matter of the universe. Due to the mass-energy-information equivalence principle,22 we postulate that information can only be stored in particles that are stable and have a non-zero rest mass, while interaction/force carrier bosons can only transfer information via waveform. Hence, in this work, we are only examining the information content stored in the matter particles that make up the observable universe, but it is important to mention that information could also be stored in other forms, including on the surface of the space–time fabric itself, according to the holographic principle.13

    Now, in all quantum theories I know of, particles lack haecceity. That is, they lack an essential thisness of identity unique to them; this is sort of the opposite of substratum theories in metaphysics. That alone is a tough concept, but this is a fairly good, accessible article on it: https://nautil.us/quantum-mechanics-is-putting-human-identity-on-trial-3977/

    Without haecceity, all a particle is, is the information it carries. That is the first essential thing for information ontology. The second is the aforementioned holographic principle, discovered in black hole research.

    The information of a system is represented by its area, not its volume. Experimental results and the mathematics of how information works have convinced some people that information is more ontologically basic than fundamental particles or quantum fields. This is a decent intro: https://www.scientificamerican.com/article/information-in-the-holographic-univ/
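
    For concreteness, the area scaling shows up in the Bekenstein–Hawking formula S = k_B·A·c³/(4Għ): a black hole's entropy is proportional to its horizon area in Planck units. A quick back-of-the-envelope calculation (my own, with rounded constants) for a solar-mass black hole:

```python
import math

# Physical constants (SI, rounded CODATA values)
G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c     = 2.998e8     # speed of light, m/s
hbar  = 1.055e-34   # reduced Planck constant, J*s
M_sun = 1.989e30    # solar mass, kg

# Bekenstein-Hawking entropy: proportional to horizon AREA, not volume
r_s  = 2 * G * M_sun / c**2        # Schwarzschild radius (~3 km)
A    = 4 * math.pi * r_s**2        # horizon area
l_p2 = hbar * G / c**3             # Planck length squared
S_over_kB = A / (4 * l_p2)         # entropy in units of k_B
bits = S_over_kB / math.log(2)     # information content in bits

print(f"{bits:.2e} bits for a solar-mass black hole")  # on the order of 10^77
```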

    So, when you say "the information going through the optic nerve is just electrical impulses," you are correct (if we simplify how sight works considerably). The pattern of action potentials in the optic nerve is a signal, mostly carried by electrical currents. What information ontology is saying is that, when you look very closely, at the most basic level, you will not find electrons. What you will find is information representing electrons; information is the basement of ontological entities, and it doesn't go any deeper.
  • The Unequivocal Triumph Of Neuroscience - On Consciousness


    HOST: So let's start at the very beginning. In a nutshell, what is the holographic principle?

    HEADRICK: Well, as you said, the holographic principle is the idea that the universe around us, which we are used to thinking of as being three dimensional — we have three dimensions of space — is actually at a more fundamental level two dimensional and that everything we see that's going on around us in three dimensions is actually happening in a two-dimensional space.

    HOST: Great. So let's break it down even further. This two-dimensional plane, what's it made of? It's made of what you call information?

    HEADRICK: Right. So similarly to the bits and bytes that live on a compact disc, which encode, for example, a piece of music — on this plane, that's where the bits that fundamentally make up our universe live. That's where they're encoded and what they're encoding is what we see going on around us in three dimensions.

    HOST: And when you say information, can you give me an example of a piece of information or unit of information?

    HEADRICK: The concept of information is very general. When we're talking about computers, we think of bits and bytes and megabytes and so on. An example in physics of information would be, for example, the positions and velocities of physical objects.

    HOST: And so you're saying that this information on a two-dimensional plane encodes for our three-dimensional universe?

    HEADRICK: Exactly. Like in the compact disc example, it encodes some piece of music. In this case, it encodes what's going on in our universe.

    HOST: You're now working on a big project with scientists around the world funded by the Simons Foundation to use the holographic principle to reconcile general relativity with quantum mechanics.

    HEADRICK: The problem of combining quantum mechanics and Einstein's theory of relativity is one of the hardest problems in physics. So quantum mechanics is a theory that is usually used to describe things happening at very small scales, like atoms and nuclei, and so on. Einstein's theory of relativity is used to describe gravity and the universe on large scales.

    As theoretical physicists, we're not satisfied to have two different theories. We need one, unified theory which encompasses both, and that's a very hard problem that theoretical physicists have been working on for the better part of the last hundred years. It turns out that this idea of the holographic principle or the universe is a hologram, although at first, it might seem like a completely random idea, it actually helps us to solve some of the thorniest puzzles that arise when you try to combine quantum mechanics and general relativity. That's why we're excited about and that's why we continue to study it.

    https://www.brandeis.edu/now/2018/november/thetake-podcast-hologram.html

    The problem is that, like Many Worlds, the idea seems nuts initially, as if the physicists are putting us on. In fact, the holographic principle and many worlds (which might coexist) are more a matter of taking seriously the logical conclusions of extremely accurate, predictive equations, rather than adding ad hoc components to a theory to make it "make sense" with our pre-scientific, pre-philosophical intuitions (e.g., adding wave collapse to avoid multiple worlds, assuming the reality of space as it intuitively seems to us, etc.).

    When you've described this electrical and mechanical process, there isn't anything left for "information" to do.

    The corollary of this argument, the one that information ontology rests on, is the question: "once you've recorded all the information about an object, what else is there?" If the information about the object is the only thing you can show to exist, then the next step is to cut out the unnecessary metaphysics and posit that physical things are information.

    I said it's too much to get into because, without taking time to understand both the holographic principle and the formalism and experiments that led to "it from bit" being posited in the first place, you're not going to understand why obviously very bright people embraced such incredibly counterintuitive ideas.
  • Can Theists Reject Dualism?
    I believe they can. In Behmenist views of God, God creates the world as a necessary condition for Its existence. A thing cannot be without something that it is not (see: Heraclitus' unity of opposites; Hegel's contradiction of being vs. nothing resolving into becoming).

    God must create the world, and must create differences within the world by which definition and meaning can exist.

    This whole problem of meaning without difference is generally not as acute for the realist-Platonist, who sees forms of meaning as eternal, but you do see Platonist systems that embrace forms coming into being as well (see: the Gnostic "birth" of the Aeons from the Monad and Barbelo, the emanations from Ain Soph in Kabbalah, the progression of the Forms from "the Good").

    In Boehme, Hegel, and others, God evolves and is defined in the immanent world through the processes of reality. For Hegel, the Absolute, as wholly knowing and experiencing Itself, comes into being through progressive cycles of reification.

    This would be a God of one substance with the immanent world of being, but it isn't quite pantheism in the traditional sense. Everything is part of God, but God is not contained in each thing. This fully realized God also has an eternal aspect, in that a being with perfect memory and perfect foresight essentially sits outside time, as it can relive any period as it happened or foresee any future. In Kabbalah, the entire Torah is sometimes seen as the name of God; some go further and hold that all of reality is simply the name of God, God's definition.

    More traditional pantheism also does not need to be dualist.

    Various forms of idealism don't need to be dualist either. In these systems perception is not of real objects, but is controlled by God.

Count Timothy von Icarus
