Comments

  • What is computation? Does computation = causation


    So does something change in the computer when it is observed, or is computation just in the mind of the observer? If the latter, why is it not the same for all universals, e.g., six rocks are not "six" until observed, a triangle isn't a triangle until it is observed, etc.? I.e., nominalism.



    I don't think I agree. It seems difficult to have information be mind independent but not computation. I won't comment on the status of such things in theoretical "mindless universes," but in the real universe meaning, at least at the level of reference to something external to the system, absolutely seems to exist sans observers, e.g., ribosomes are presumably not conscious but can read code that refers to something other than itself, and they in turn follow the algorithm laid out in the code to manufacture a protein.

    DNA computers organized to produce solutions to Hamiltonian path problems don't behave physically differently from DNA in cells at a basic level, so it's hard to see what the difference would be.
  • Time and Boundaries


    You might be interested in information theoretic, holographic principle-based workarounds for this problem if you're not already aware of them. Since information is only exchanged across any system's (however defined) 2D surface, we can model systems purely relationally. One interpretation of this is that information content is relative between systems, with these relationships formalized using the concepts of symmetry and group theory. Example: for many enzyme reactions, a chemical's being composed of certain isotopes or not is indiscernible to both systems and thus irrelevant to describing the interaction. This was best expressed in a brilliant dissertation that made it into Springer Frontiers and got rave reviews, before the author seemingly disappeared, which is a shame.

    Vedral's book sort of goes along with this, in his explanation of information only existing relationally between parts of the universe, but he seems to reverse course later in the book, using the old "amount of bits stored by each particle" calculation to make some points about quantum information.

    I think the arbitrary nature of system boundaries is akin to other problems in the sciences and even the humanities. For example, in semiotic analysis/communications, a physical entity, say a group of neurons, might act as object, symbol, and interpretant during the process, depending on the level of analysis that is used. But at a certain point, the ability of any one component to convey aspects of the total message breaks down. E.g., a single logic gate can't hold the number "8" by itself. Certain relationships only exist at higher levels of emergence, like your example of shared electrons.

    Causation, in such models, would likely be interpreted in terms of computation or information exchange, and I'd argue that current theories of computation and communications would actually make it extremely difficult to differentiate these two models at the formal level.

    IMO, something like the concept of levels of abstraction in computer science is needed for this sort of problem, but I can't fathom how to formalize it in a manner that isn't arbitrary.

    Subjective is fine. Entropy is subjective (see the Gibbs Paradox) but not arbitrary. Arbitrariness, however, does seem like a problem.
  • Time and Boundaries


    A world line is an object's 3D path rendered with a time dimension, nothing more. A world line can also be used to describe the history of a path for an observer. We talk about time in statistical mechanics all the time. Even in a model of quantum foundations like consistent histories, where there is no one true state of affairs at time T, a classical history emerges from decoherence/collapse. Physicists don't talk about time generically in SR/GR because you need to specify which type of time you are referring to. This doesn't disprove the reality of an arrow of time or local becoming, except inasmuch as philosophers have used the model to construct paradoxes, or pseudoparadoxes depending on who you ask, that call them into question.

    The funny thing is that the alleged paradoxes and the arguments that allegedly rebut them haven't really moved since the 1940s; they just get restated. Someone who wants to refute Davies can cite Gödel or Robb who were actually replying to people in their time... and so maybe time is illusory or circular...

    The things you mentioned don't have anything to do with history being a "cognitive illusion." The apparent "arrow of time" is one of the big questions in physics, not something that has been solved and written off as illusory by any means. Some physicists speculate that time is somehow "illusory," although the nature of this illusion is generally fairly nuanced and not grounded in cognitive science. When they do so, they tend to be doing more philosophy than physics, although the use of specialized terms certainly obscures this fact.

    That time, and thus history, can't flow and that things do not "move" "forwards" and "backwards" in time is better established. These are bad analogies that lead to apparent paradox. So "forward flowing of history" is probably best avoided.
  • Time and Boundaries
    I am genuinely curious about this widespread world of physics where cause is not referenced. I read a lot of physics and causes are mentioned constantly. Things like do-calculus were invented for the natural sciences. Bayesian inference is generally couched in causal language. The Routledge Guide to Philosophy of Physics, which is an excellent reference guide BTW, mentions "cause" 787 times and "causal" 586 times. Some of these references are indeed arguments against cause, but not most. In general, arguments against causation are nuanced, and not eliminativist at any rate.

    "Cause isn't in mathematical equations," certainly isn't taken as gospel in the philosophy of causation (I'm currently in the middle of "Causation: A Users Guide). Why can mathematics not represent causes, but it can represent state changes and processes with a defined start and end point?

    Where I've seen arguments against cause related to physics, it's been in popular science books in the context of arguments for a block universe. The block universe is hardly something all physicists accept, and if authors are putting their best arguments for such a view into their popular books, their motivations seem to lie more in philosophy than in physics. To be sure, this is partly because debates on the nature of causation generally aren't considered a topic for physics articles, and popular science books are a good place to get into more speculative discussions.

    But I certainly don't see the "cause is antiquated" view writ large across the natural sciences as a whole, or even just physics. Instruction on elements of physics being time symmetric is not an argument that physics itself is time symmetric; it demonstrably is not.

    I would be less skeptical of the block universe if the motivation behind some key arguments for it didn't seem to come from philosophers' anxiety over how their propositions could have truth values given some form of presentism. Davies, whom I generally like, goes for one of these. It's frustrating because these are presented with an air of certitude (he says something like "one must be a solipsist to disagree") when in fact there is by no means only one way to view SR vis-à-vis the reality of local becoming. These examples amount to attacks on the Newtonian time the audience is expected to be familiar with, and then propose the block universe as the only solution (Putnam does something similar). The issue can also be resolved by seeing time as degenerate in SR, with time bifurcating into coordinate time and proper time. This distinction gets muddled in many retellings of the twin paradox though.
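
    For reference, the standard way that bifurcation is expressed in SR (textbook material, not anything specific to Davies or Putnam): along a timelike worldline, the proper time interval dτ relates to the coordinate intervals by

        c^2 \, d\tau^2 \;=\; c^2 \, dt^2 \;-\; dx^2 \;-\; dy^2 \;-\; dz^2

    so coordinate time t is frame-dependent while the proper time accumulated along a worldline is invariant, which is exactly the distinction that loose retellings of the twin paradox blur.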

    The view on time I like best in modern physics is that events in the past exist, and exist(ed) just at the local time they occurred, while "now" is defined locally by the simultaneity of local interacting processes. I see no reason to jettison the overwhelming empirical evidence for time's passage when there exist fully coherent models that don't require eternalism.

    Cause is trickier because people mean many things by cause. Just like time now has to be split into many different types of precisely defined time (and even these might not be enough; some physicists think Minkowski spacetime is doomed as a flawed model), we probably need some sort of precisely formulated definition of causality. In the philosophy of physics, the transfer of conserved quantities is the leading definition of causation from what I've seen, but there are information theoretic definitions too.
  • Time and Boundaries


    With respect to "why does this skydiver fall down?" I feel like "there is a universal law such that... and as you can see, this is just one such example of said law" explains more than "gravity causes the acceleration." That maybe gets lost in some formulations though.

    When I was a kid I thought Newton just got hit with an apple one day and realized that things fall down and thought, "did these jokers really not pick that up earlier?"
  • What is computation? Does computation = causation


    some mind has to observe the computational process in order for computation to occur.
    I think most computation is unobserved though. Is it enough to see the final output of a computational process?

    Suppose I run a nightly data job for a dashboard report. It's automated, so on any given night no one observes the job occurring, since it happens on a server in some regional data center through a virtual machine.

    Are these just physical changes until someone checks the report in the morning, and then they become computation? Do the physical changes retroactively become computation? Or are they computation because I observed setting them up, or maybe because the aggregate CPU usage for the data center was observed by an employee during the night shift, and my job was a small component of this?
  • Time and Boundaries


    I'll be honest, I couldn't totally follow this. I would caution against any model where time "flows." Time is the dimension in which change occurs. Without time, change is meaningless.

    I think Aristotle's response to Zeno's Arrow is instructive here. Zeno asks us to imagine an arrow shot from a bow in flight. Imagine it frozen for an instant. Is it moving? No. At any one frozen instant the arrow is never moving. So where are change and motion in the world?

    Aristotle's response is that this is simply a fallacy of composition. Time is the dimension in which the arrow changes its position relative to other objects. Looking for motion in frozen instants is obviously doomed, but it's because the example defines its topic wrong.

    I point out this old example because it is astounding how many arguments against the reality of time's passage are essentially Zeno's Arrow dressed up with references to the natural sciences and mathematics that don't play any direct role in the issue at hand. The biggest red herring is appealing to Cantor re: the denseness problem, which isn't actually a problem and is simply Achilles and the Tortoise dressed up in imposing mathematics.

    Time cannot flow, because time moving in any direction results in a paradox. If time is flowing, you need a second time dimension through which the flow of time can occur, and then a third dimension for that second time's passage, and so on (see the "A series and B series" arguments for more on this).

    Some philosophers have bitten the bullet and accepted either the non-existence of time, change, and motion based on this problem, or infinitely regressing time dimensions, but there is actually no need to do this. I would recommend R.T.W. Arthur's "The Reality of Time Flow: Local Becoming in Modern Physics" on this point.



    One can say that footprints are caused by feet, or that they are caused by gravity, or both. Or one could talk about the relative hardness and resilience of feet and wet sand... But physicists talk more about interaction and the limits of interaction being the light cone. An interaction changes two things at once - an atom absorbs a photon and its energy is increased. One does not wish to say that the photon caused the increase in energy more so than the atom caused the absorption of the photon - it is a single event - a single interaction, and the observation thereof is another interaction.

    Do they? It seems like I come across references to "causes" regularly in physics. Seeming violations of SR/GR are often presumed to have some sort of explanation precisely because they "violate causality."

    You see such references all the time, e.g., "what is causing galaxies to deviate from the predictions of our models?" Such causes get posited as new elements of a model, and in many subfields uncovering the nature of these causes becomes a major, or the major, topic of research, e.g., dark matter and dark energy.

    The arguments against causation in physics I can recall have all tended to be in the context of arguments for the block universe view.

    And then there is the matter of origins: we extrapolate the expanding observable universe backwards in time and come to a singularity, that we call the Big Bang - the beginning of space, time, and energy. And because of the physicists' demand that cause must precede effect in time, there can be no cause of the beginning. The story has to stop at the limits of the equations. To speak of a cause of time and space in this sense is to reject the physicists' meaning such as it is, and resort to Prime Mover type talk.

    I don't know if this is necessarily the case. Black hole cosmology, while speculative, posits a cause of the Big Bang, and of many Big Bangs for that matter. Discussion of the Past Hypothesis in particular seems to center around cause.

    I'm somewhat skeptical of block universe models, not least because very sloppy thought experiments that misunderstand proper time in SR seem to be extremely influential, judging by how a number of physicists have decided to argue for the idea in popular science texts.

    Becoming being a local phenomenon is not a refutation of becoming though. Areas of physics are time reversible; physics as a whole is not. You will never see ripples converge on a puddle and a rock shoot out of it, nor will you see radiation converge on a source of radiation to be absorbed, billiard balls leap from their pockets and rearrange themselves into a triangle, etc.
  • What is computation? Does computation = causation

    Is there anything in a mindless universe? Or anything we can say about one? By definition, no one will ever observe such a thing.

    Given a mindless universe, could universals/abstract objects exist? I would tend to think not, but that's pretty far afield.

    But you're not saying only minded things compute, right?
  • What is computation? Does computation = causation

    And transforming numbers from one form to another, like the transformation of all information, requires work. This work of transforming information from one form to another is called "computation". Does that sound reasonable?

    It does. And this is the main problem I have with current abstract conceptions of computation: this work is largely ignored. To be sure, it shows up in the classification of computational complexity and in formalisms to some degree, but these are more the exceptions.

    What makes computers special is that they are not bound by physical, causal reality. It is as if, in them, the informational component of reality broke free of the physical component.

    I'm not sure about this. In theory, a computer can compute anything a Turing Machine can; in actuality, computers need their inputs in a very precise format.

    Both digital computers and brains only function in this dynamic fashion within a very narrow band of environmental settings. The brain is particularly fragile.

    A human mathematician will not be able to compute algorithms thrown her way if we do something like project the inputs onto a screen with an orange background, using a shade of orange font that is, for her, indistinguishable from the background. All the information is there, but not the computation. The same is true for infrared light, audio signals outside the range of the human ear, etc.

    Likewise, a digital computer needs its information to come in through an even narrower band of acceptable signals. Algorithms must be properly coded for the software in use, signals must come in through a very specific physical channel, etc. A digital computer takes in very little information from the environment without specialized attachments, cameras, microphones, etc. An unplugged digital computer acts not unlike a rock.

    So I think the unique thing about either is that, given they exist in the narrow band of environments where they will function properly, and given information reaches them in formats they can use effectively, they can do all these wonderful things. How is this? My guess is that it comes down to the ability to discern between small differences. This is also what instruments do for humans and computers: allow for greater discernibility.

    With a rock, the way the system responds to most inputs is largely identical. Information exists relationally, defined by the amount of difference one system can discern about another. Complexity and computational dynamism seem tied to how well a system can discern between differences in some other system. Zap most physical objects with the signals coming out of an Ethernet cable and the result will be almost identical regardless of what information was coming out of the cable. Not so for our computer. Give humans a bunch of CDs with different information encoded on them and they will be unable to distinguish any difference unless they use specialized instruments.

    The key, or at least part of it, is to be able to undergo different state changes based on a much wider range of discernibility for at least some subset of possible media used as inputs. A rock can have tons of state changes, just heat it up enough, but it can't respond differently to most inputs.
  • What is computation? Does computation = causation


    I think the miscommunication here is that you are thinking of conscious computation, thinking about adding figures together.

    I was referring to how neurons carry out computations by sending electrical and chemical signals that result in state changes.

    Seeing green for example, doesn't occur just because a light wave hits the eye. People with damage to the occipital lobe often lose the ability to experience vision, even if their eyes are completely fine. They neither see nor dream/visualize. Most of the information received at the eye is discarded early in processing, and processing is what creates the world of vision that we experience.

    In some sense, they do still see, via the phenomenon of blindsight, but they have no conscious experience of color.
  • What is computation? Does computation = causation
    Here is a demonstration of the problem I should have led with.

    Suppose we have a document 150 pages long. Each page contains either just blank spaces or the same symbol repeated over and over. We have pages for every letter of the alphabet, uppercase and lowercase, plus punctuation marks and mathematical symbols.

    We also have an algorithm that shuffles these symbols together, working through all possible iterations of the pages. Given 2,000 characters per page, roughly 150 distinct symbols to choose from, and no limits on our algorithm's output, this will produce on the order of 150^2,000 possible pages. Each of the pages is then assembled into all possible 150-page books (simply because books are easier to visualize) made by this process.
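
    A toy version of the same setup (entirely my own sketch, with a tiny made-up alphabet and page size, since the real 2,000-character version is hopelessly infeasible to enumerate) shows where the count comes from:

        from itertools import product

        # Toy stand-in for the symbol-shuffling algorithm: a 4-symbol
        # alphabet and 3-character "pages" instead of ~150 symbols and
        # 2,000 characters per page.
        alphabet = ["a", "b", " ", "."]
        chars_per_page = 3
        pages_per_book = 2          # instead of 150

        # Every possible page is every string of length chars_per_page.
        pages = ["".join(p) for p in product(alphabet, repeat=chars_per_page)]
        print(len(pages))           # len(alphabet) ** chars_per_page = 64

        # Every possible book is every possible sequence of pages.
        print(len(pages) ** pages_per_book)   # 4,096 books, even at this tiny scale

    Swap in the real numbers and the page count alone dwarfs any physical quantity.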

    This output will include the pages of every novel ever written by a human being, plus many yet to be written. Aside from that, it will produce many near exact replicas of existing works, for example, War and Peace with an alternate ending where Napoleon wins the war. It will include papers that would revolutionize the sciences, a paper explaining a cure for most cancers, correct predictions for the next 5 US Presidential races, etc. The books will also contain an accurate prediction of your future somewhere in their contents. George R. R. Martin's The Winds of Winter will even be somewhere in there (provided it is ever finished).

    It will also produce a ton of nonsense. The number of 150 page books produced will outnumber estimates of particles in the visible universe by many orders of magnitude.

    If algorithms are just names for specifying abstract objects, then you can create all this with basic programming skills on a desktop computer. The algorithm would just be a highly compressed version of all the items listed above.

    But since the output includes mathematical notation, the output also includes all sorts of algorithms. This would include algorithms and proofs specifying every abstract object ever defined by man, plus myriad others. It would also include an algorithm for an even larger random symbol shuffling algorithm, which in turn, if computed, would produce an even larger symbol shuffling algorithm, and so on, like reverse Russian nesting dolls.

    If algorithms are just names, a relatively bare bones symbol shuffling algorithm is almost godlike in its ability to name almost everything.

    Two points this brings out to me:

    1. Negativity is very important in information. We don't just care about what something is, we care about what it is not. The Kolmogorov complexity of an object, i.e., the length of the shortest program that can produce said object, is crucially the shortest program that defines that object and just that object (the standard definition is sketched just after this list). Otherwise, a random bit generator would be the shortest description of all classically encodable objects.

    2. Second, we have to recall that information, and thus computation, is necessarily relational. A paper that tells you how to cure cancer, generated by a random symbol shuffler, is useless. It would indeed be remarkable to find a coherent page from such a process, because there are many more ways to generate incoherent pages than coherent ones (maybe; more on that later). But likewise, there are many more ways to write about incorrect ways to cure cancer than there are actually effective methods, and so such a page is less likely to be useful than one published by a renowned quack.
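
    For what it's worth, the standard definition referenced in point 1 is usually written like this, where U is a fixed universal Turing machine, p ranges over programs, and |p| is program length:

        K_U(x) = \min \{\, |p| : U(p) = x \,\}

    The "and just that object" condition is built in: the program has to halt with exactly x as its output, so a random bit generator doesn't count as a short description of everything.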

    Leaving aside the physical components of the hypothetical computer and output system here, all the outputs of such an algorithm can tell you about is the randomization process being used to mix the symbols. A great example of substrate independence. This is why I think information has to be defined in terms of underlying probabilities.

    The information content of the output can't be measured based on the "meanings" of the symbols. To see why, consider that this seemingly infinite library would contain books explaining step-by-step ways to decode seemingly random strings of text and symbols (the majority of the output) into coherent messages. Following these methods, incoherent pages might become coherent, while coherent ones become nonsense. Exact replicas of messages on some other page might be decoded from a different page. A string might have very many coherent ways it can be decoded. The only way to make sense of this is through the underlying probability distribution.

    ---

    Another thing I always think about when I ponder this example is: "how many characters would need to be on each page in such an algorithm before every discernible human thought has been encoded in the output?" Obviously human language can be recursive, which allows for a larger number of discernible messages, but at a certain point levels of recursion would become indiscernible.

    Obviously it's not a very small number, but I'd imagine it's also a far cry from 2,000.
  • What is computation? Does computation = causation

    I am now seeing that was not a good example. The quantities you perceive are irrelevant. I referenced cognition because the most popular models of how the brain works are computational. I only meant to point out that, in this view, seeing anything is the result of computation. The computational component of seeing things in the world is most easily traced back to the system that generates the observer's perspective being computational.

    Obviously not everyone thinks computational neuroscience is a good way to model the brain, let alone consciousness, but I figured it's well known enough to be a good example.

    If you want to think of rocks computing, you have to think more abstractly. Computers are such that a given state C1 is going to produce an output C2. Rocks change states all the time, for instance, they get hotter and colder throughout the day. You can take the changing states of the rock to be functioning like logic gates.

    In theory, you could compute anything a digital computer can by setting up enough rocks in relation to one another such that heat transfer between them will change their states in such a way that they mimic the behavior of logic gates in microprocessors vis-à-vis their state changes. Rather than electrical current, you'd be using heat. Of course, to make this system compute what you want it to compute, you'll have to be selective in the composition of your rocks as well.
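
    As a toy illustration of the idea (my own sketch, with made-up temperatures and thresholds, not a claim about real thermodynamics): treat "hot" as logical 1 and "cold" as logical 0, and let heat flowing from two input rocks into a third stand in for a gate.

        # Hypothetical rocks-as-logic-gates sketch. "Hot" counts as 1,
        # "cold" as 0; the output rock crudely settles near the mean of
        # its two neighbours, and a threshold reads the result off as a bit.
        HOT, COLD = 80.0, 20.0       # made-up temperatures
        THRESHOLD = 60.0             # output reads as 1 above this

        def rock_and_gate(a_hot: bool, b_hot: bool) -> bool:
            t_a = HOT if a_hot else COLD
            t_b = HOT if b_hot else COLD
            t_out = (t_a + t_b) / 2  # crude heat-transfer model
            return t_out > THRESHOLD

        for a in (False, True):
            for b in (False, True):
                print(a, b, rock_and_gate(a, b))  # only (True, True) is True

    That is an AND gate; with enough such gates (and a far less crude heat model) you get everything a microprocessor does, just absurdly inefficiently.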

    It's probably easier to think of how you can spell out any phrase with small rocks. Just line them up in the shape of the letters. Your rocks are now storing information.

    But of course, they were already storing information. The locations of the rocks when you found them tell you something about prior events. For another example, footprints store information about the path you took to get to these rocks.

    Information is isomorphic. You could spell out a message with the rocks, then take a Polaroid of said message. Then you could scan the Polaroid and send it to a friend as an email. Your message, which is represented by some of the information that defined each system, remains in each transition. Information is substrate independent. Computation, the manipulation of information, is the same way.

    This brings up the question of why computers and brains seem so different in their ability to compute so many different things so readily in comparison to rocks or systems of pipes with pressure valves. I would like to bracket that conversation though.

    I will start a new thread on that because I think discernibility between different inputs is the key concept there, but it isn't relevant to "what computation is."



    If pancomputationalism seems nonsensical, the best way to see where the idea is coming from is to try to define what a computer is in physical terms and how it differs from other systems.



    Do you really wish to promote the possibilities that exist in relation to a model to the status of objective reality, given the fact that possibilities aren't scientifically testable or observable?

    I'm not sure what this is supposed to mean; possibilities already seem fundamental to understanding physics. Possibilities are essential to understanding entropy, the heat capacities of metals, etc. The number of potential states a system can be in given certain macro constraints is at the core of thermodynamics and statistical mechanics. Quantitative theories of information, on which a large part of our understanding of genetics rests, are also based on possibilities.

    For any one specific message, the distribution of signals one receives is always just the very signals that one actually did receive. Every observation of every variable occurs with probability = 1. However, a message can only be informative in how it differs from the possibilities of what could have been received.
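
    That is just the standard Shannon picture: the information carried by receiving a particular signal x depends on how probable it was before it arrived, not on the fact that it arrived (which, after the fact, always has probability 1). In the usual notation, the surprisal of an outcome and the entropy of the source are

        I(x) = -\log_2 p(x), \qquad H(X) = -\sum_x p(x)\, \log_2 p(x)

    so a message drawn from a distribution with only one possible outcome carries zero information, no matter how long it is.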

    But as Bertrand Russell pointed out, the notion of causality is objectively redundant. E.g., what does the notion of causality add to a description of the Earth orbiting the Sun?

    It's "objectively redundant," because he is begging the question, assuming what he sets out to prove in his premise. He assumes a full description of a system doesn't involve explaining causation. The fact that "if you've said everything that can be said in terms of describing a system from time T1 to time T2, you've said everything there is to say," is trivial. The argument against cause here comes fully from the implicit premise that cause is properly absent from a description of the physical world.

    Certainly an explanation of "why does the Earth orbit the Sun" adds something here, no? Russell denied the existence of time's passage and, in some more flippant remarks on Zeno's arrow, appears to deny that change and motion exist. I don't want to get into unpacking the bad assumptions that get him there, but obviously in such a view cause can't amount to much, because what is cause without change?

    I don't find it to be an attractive position though.
  • What is computation? Does computation = causation
    I suppose the arbitrary system boundary problem and semiotic circle problem both are akin to the concept of levels of abstraction in computer science. The problem is that there is no clearly defined level of abstraction like there is in CS.

    I don't know if this is simply a lack of knowledge about the way the world works, or a more fundamental problem where an observer within a system cannot clearly delineate its levels of abstraction on principle. I will have to think about that one.
  • What is computation? Does computation = causation


    The easiest way to conceptualize how rocks act as computers is to think of them modeling something simple, like a single logic gate.

    In terms of grouping rocks together, it's probably easier to conceptualize how the cognition of "there are two rocks over there," and "there are 12 rocks over there," requires some sort of computational process to produce the thought "there are 14 rocks in total."



    Wouldn't physics generally be answering the question of "if nature acts in such-and-such a fashion how will nature respond?" In general, scientific models are supposed to be about "the way the world is," not games. I don't think such interpretations were ever particularly popular with practicing scientists, hence why the Copenhagen interpretation of QM, which is very close to logical positivism, had to be enforced from above by strict censorship and pressure campaigns.

    I wouldn't agree that mathematics necessarily has anything to do with goals.

    Lambda calculus doesn't come with the thought experiment baggage of Turing Machines but is able to do all the same things vis-à-vis computation. I think it would be a mistake to confuse the framing Turing gives the machine with something essential to it. In any case, classical computing wouldn't be equivalent to causation in the physical world; something like ZX calculus would be the model.





    I would say it the other way: if you think that computation and causation are equivalent, then you think that mathematics and physics are equivalent.

    Certainly that's a hypothesis that's been raised from a number of angles (Tegmark, Wheeler, etc.). I don't think that's a necessary implication though. Not all forms of mathematics appear to be instantiated in the physical world. Mathematics is the study of relationships. The physical world observably instantiates some such relationships.

    Indeed, most forms of the hypothesis that physics is somehow equivalent to mathematics are explicitly finitist. Infinities and infinitesimals are said not to exist, but clearly they are part of mathematics, so the two aren't fully equivalent.

    Saying computation is causation is simply saying that one thing entailing another in the physical world follows the same logic as computation in mathematics. One doesn't reduce to the other; they are just different ways of looking at the same thing, i.e., a necessary stepwise relationship where states proceed from one another in an ordered fashion.

    In an algorithm you have initial conditions, your inputs. The algorithm then progresses in a logically prescribed manner through various states until it reaches the output of the process. In physical systems, you have initial conditions which progress in a logically prescribed manner until the process ends.
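
    One way to picture the parallel (a schematic sketch of my own, not anything from the formal literature): both descriptions amount to "apply a fixed transition rule to a state, over and over."

        # Schematic parallel between an algorithm and a deterministic process:
        # a fixed rule applied to a state, step by step, from initial
        # conditions to an entailed final state.
        def transition(state: int) -> int:
            # Stand-in rule: read it as "one step of the algorithm" or as
            # "the laws of physics applied over one time step."
            return (3 * state + 1) % 17

        state = 5                    # initial conditions / inputs
        for step in range(10):
            state = transition(state)
        print(state)                 # the output entailed by the initial state

    The bracketing of what counts as "the system," "the input," and "the final state" is arbitrary in both readings, which is the point made just below.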

    Of course, "systems" and "processes" are arbitrarily defined in physics. Any one process can be an input for another process, one system is merely a part of another system, etc. However, this mirrors mathematics, where inputs are also arbitrarily selected.

    These brackets might be artificial, but my argument is that the stepwise progression of computation is not. Equivalences of two different functions are not a shared identity. Rather, through a process, one can become identical to the other. Such becoming, the continual passing away of one state into another, is the hallmark of our world, and I think it's been a serious mistake to dismiss it as illusory; that mistake owes to a seriously calcified view of mathematical objects tracing back to Pythagoras.
  • Bernard Gert’s answer to the question “But what makes it moral?”


    This is where I see the "no true Scotsman," 20/20 hindsight problem coming in. It's easy to say now that all sorts of prior norms were irrational. However, if that was the case, if history is filled with generation after generation of human beings embracing irrational norms, why is it that we think we now have the ability to determine such norms? Where did this newfound rationality come from?


    I'd imagine plenty of current behaviors, e.g., our treatment of psychiatric drugs with massive systemic side effects whose mechanism of action is extremely poorly understood, or the industrial production of conscious animals for consumption will someday fit into the human sacrifice bucket of things future philosophers will say rational people wouldn't agree to. Which of course leaves the question: "then why did people follow those norms?"

    This is a problem for "harm" based moralities too. To be sure, we can posit an idealized world where agents agree to follow moral principles before they enter the world, perhaps from behind some "veil of ignorance." And in such a world things would be much better, provided people actually follow the rules. But of course, collective action problems and externalities exist because the logic of some systems is that one agent can benefit from cheating on a norm, while the norm is unlikely to collapse from just a handful of agents cheating, making the cheaters net beneficiaries of cheating.

    More to the point, in the real world, people carry out terrorist attacks. One country invades another. The whole point of the military, its duty, is specifically to cause the appropriate amount of harm to any invader to get them to leave.
  • Bernard Gert’s answer to the question “But what makes it moral?”


    Gert's “normative” notion of morality requires that these rules/ideals be acceptable by all rational agents. He identified 10 rules (and 4 ideals, if I remember correctly) that satisfy this normative constraint (they do not seem to include e.g. rules against cannibalism or prostitution but they seem to exclude rules about human sacrifice or slavery).

    Do you mean include rules about human sacrifice and slavery?

    If you really thought human sacrifice meant the difference between famine and a good harvest, isn't human sacrifice rational? There it is merely an information constraint that changes the nature of such a behavior.

    We might abhor slavery, but military conscription, a form of temporary bondage, is seen as essential to virtually all states.
  • What is computation? Does computation = causation


    There is no such thing as causation that goes wrong. It does what it does, it is infallible by its nature. Whereas at every step, the possibility of error hovers over computation.

    True, but is there such a thing as computation that goes wrong in the abstract sense? Can the square root of 75 ever not be approximately 8.66 (5√3)? The very fact that we can tell definitively when computation has gone wrong is telling us something. If we think causation follows a certain logic, e.g., "causes precede their effects," we are putting logic posterior to cause. But just because we can have flawed reasoning and be fooled by invalid arguments, it does not follow that logical entailment "can go wrong."

    When computation goes wrong in the "real world," it's generally the case that we want a physical system to act in a certain way such that it computes X but we have actually not set it up such that the system actually does this.

    Causal processes don't inherently require continuous energy input. Strike the cue ball, and the billiards will take care of themselves. Whereas in a computational process, to proceed requires energy at every step. Cut the power, occlude the cerebral artery, and the computation comes to a screeching halt.

    I was coming from the surprisingly mainstream understanding in physics that all physical systems compute. It is actually incredibly difficult to define "computer" in such a way that just our digital and mechanical computers, or things like brains, are computers, but the Earth's atmosphere or a quasar is not, without appealing to subjective semantic meaning or arbitrary criteria not grounded in the physics of those systems. The same problem shows up even more clearly with "information." Example: a dry riverbed is the soil encoding information about the passage of water in the past.

    The SEP article referenced in the OP has a good coverage of this problem; to date no definition of computation based in physics has successfully avoided the possibility of pancomputationalism. After all, pipes filled with steam and precise pressure valves can be set up to do anything a microprocessor can. There are innumerable ways to instantiate our digital computers, some ways are just not efficient.

    In this sense, all computation does require energy. Energy is still being exchanged between the balls on a billiard table just like a mechanical computer will keep having its gears turn and produce an output, even without more energy entering the system.

    I do have a thought experiment that I think helps explain why digital computers or brains seem so different from say, rocks, but I will put that in another thread because that conversation is more: "what is special about the things we naively want to call computers."

    ---

    As to the other points, look at it this way. If you accept that there are laws of physics that all physical entities obey, without variance, then any given set of physical interactions is entailed by those laws. That is, if I give you a set of initial conditions, you can evolve the system forward with perfect accuracy because later states of the system are entailed by previous ones.

    All a digital computer does is follow these physical laws. We set it up in such a way that given X inputs it produces Y outputs. Hardware failure isn't a problem for this view in that, if hardware fails, that was entailed by prior physical states of the system.

    If the state of a computer C2 follows from a prior state C1, what do we call the process by which C1 becomes C2? Computation. Abstractly, this is also what we call the process of turning something like 10 ÷ 2 into 5.

    What do we call the phenomenon whereby a physical system in state S1 becomes S2 due to physical interactions defined by the laws of physics and their entailments? Causation.

    The mistake I mean to point out is that we generally take 10÷2 to be the same thing as 5. Even adamant mathematical Platonists seem to be nominalists about computation. An algorithm that specifies a given object, say a number, "is just a name for that number." My point is that this obviously is not the case in reality. Put very simply, dividing a pile of rocks into two piles of five requires something. To be sure, our grouping of physical items into discrete systems is arbitrary, but this doesn't change the fact that even if you reduce computation down to its barest bones, pure binary, 1 or 0, i.e., the minimal discernible difference, even simple arithmetic MUST proceed stepwise.
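
    To make the "must proceed stepwise" point concrete, here is a trivial sketch of my own (not anything from the SEP article or anyone's formal account): even 10 ÷ 2 only becomes 5 through an ordered sequence of intermediate states, e.g. repeated subtraction.

        def divide_stepwise(dividend: int, divisor: int) -> int:
            # Integer division by repeated subtraction, making every
            # intermediate state explicit instead of treating 10 / 2 as
            # "just a name" for 5.
            remaining, count = dividend, 0
            while remaining >= divisor:
                remaining -= divisor      # one discrete state change
                count += 1
                print(f"state: remaining={remaining}, count={count}")
            return count

        print(divide_stepwise(10, 2))     # five intermediate states, then 5

    However the physical substrate is chosen, something has to pass through those states, or some equivalent set of them, in order.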

    Communication would seem to require encoding, transmission, and decoding. A causal process sandwiched between two computational ones?

    Sure, but doesn't computation require all of that? Computer memory is just a way for a past state of a system to communicate with a future state. When you use a pencil for arithmetic, you are writing down transformations to refer to later.

    We might be the recipient of a message transmitted onto a screen, but in an important sense our eyes send signals, communications, to the visual cortex for computational processing. A group of neurons firing in a given pattern can act as part of a signal/message to another part of the brain, but also be involved in computation themselves.

    This is what I call the semiotic circle problem. In describing something as simple as seeing a short text message, it seems like components of a system, in this case a human brain, must act as interpretant, sign, and object to other components, depending on what level of analysis one is using. What's more, obviously at the level of a handful of neurons, the ability to trace the message breaks down, as no one neuron or logic gate contains a full description of any element of a message.

    Even in systems modeled as Markov chains, prior system states can be seen as sending messages to future ones. The two concepts are discernible, but often not very. I will look for the paper I saw that spells this out formally.
  • Bernard Gert’s answer to the question “But what makes it moral?”


    Yeah, I only meant to contrast here. The problem with the "objective" frame is that human reasoning is deeply embedded in culture.

    Think about therapy aimed at "curing" homosexuality. Presumably rational people embraced that earlier. Generations of rational scholars and philosophers embraced slavery and serfdom because that's the system they were used to.

    "How else do we get the crops in, avoid famines, and stave off foreign invasion?" even has a rationalist, pragmatist ring to it, and is arguably true for the earliest states in the chaotic Bronze Age period.

    I would argue that there clearly is not an objective morality accessible to all with reason. Appeals to this ahistorical objectivity are thus "ought" claims.

    Hegel acknowledges this problem. Roman law legalized slavery and made wives and children property of the male head of household. This represents an internal contradiction that will be resolved historically. It is a contradiction because the state and law itself exists to promote human freedom; that is their raison d'etre. They are intersubjective reason as historical process.

    Preferences have to be suitable for grounding morality. Humans must be essentially rational and the world must be rational for an understandable morality to exist. Gert's system also presupposes this, but it leaves out two crucial facts. First, that morality evolves as a historical process. Second, that rationality is instantiated at higher levels of emergence than the individual, in the state and in civil society. Example: game theory and emergent processes in economics.

    Morality cannot exist sans culture or sans the preferences of a given people in a given era. It never has.
  • The Hard Problem of Consciousness & the Fundamental Abstraction
    Computational models of neuroscience appear to be the standard model today. I would argue that computation itself precludes reduction. Computation involves information, which is, at its root, discernibility. Information can be defined between parts of a system and the whole system, but also emerges from differences between wholes.

    For example, if a detective shows up at a murder scene and touches a coffee mug and feels that it is hot, she knows that someone has made the coffee recently. This information comes from the fact that the heat of the mug is not in equilibrium with the environment.

    Complete information about the mug, knowledge of the exact positions and momenta of all the molecules making up the "mug system," tells you nothing about this variance. It only emerges when you contrast the momenta of molecules in the mug with those outside. Further, you have to understand that the universe started in a low entropy state and is advancing to a higher one to understand that this fluctuation can't be due to chance.

    If you had full information about the entire crime scene, you would still need to break the system down into arbitrary subsystems to understand the variance.

    If computational models of consciousness are accurate, a full mapping, down to the atomic level, of a brain would still not let you accurately predict someone's behavior. For that, you need information on their surroundings. Minds don't exist in a void. They appear to be very fragile and vanish in most environments. Minds exist in a small range of possible environments as an interaction with them.

    A phenomenon is emergent just in case full information about it at the micro level would not be enough to understand its current state and the origins of that state; you also need information about a larger containing system. The higher the level of emergence, the larger the system needed to define the phenomenon. In the case of understanding consciousness, which is molded by culture and language, you need a very large system indeed.

    For example, a full information brain scan of a person having a conversation in Japanese would not give you the information required to speak Japanese.
  • What is computation? Does computation = causation
    I am a fan of Deutsch but I have never understood how his theory of quantum computation works with MWI. I get that MWI is observationally indistinguishable from other interpretations of QM, and so his explanation is fine in that sense. However, conceptually, the idea that copies of a quantum computer in other universes can help increase our information about the results of an algorithm in our universe seems off. This implies that information is crossing between universes and that one has a causal relationship with the other. This seems to conflict with a core tenet of MWI, that observers, because they are also entangled, only observe one part of the wave function.

    But perhaps it is the other possibilities that matter in the same way that possible states drive thermodynamics. I have to think about that more.

    I believe I have heard Deutsch use the common explanation of particles "storing information." I know I have heard this from Vlatko Vedral, Max Tegmark, Ben Schumacher and Paul Davies. This appears to be somewhat mainstream and I think it fundamentally misunderstands the logic and mathematics of information.

    A particle can only carry information inasmuch as it varies from other particles and measurements of the "void." If this were not the case, it wouldn't hold information, e.g., an electron can't store/instantiate information in a universe where every measurement shows charge identical to an electron's, at least not in terms of EM charge.

    You can transfer information via the quantum afterglow of photons without transferring energy. The void appears to be seething with observables. The general push in ontic quantum computation models unfortunately seems to have fallen back into problems with prior models by just replacing the old fundamentals with "information."

    The much less common assertion that virtual particles and QCD condensates don't have information is even more obviously off. If they didn't produce observable differences, information, then how could we know about them, and how could their existence spawn books and papers on them? I only see this position in older papers though.
  • Consciousness is a Precondition of Being


    Right. The dominant schema used for this issue has been to posit two distinct modes of being, the subjective and objective. The subjective is said to emerge from the objective.

    In this view, objective being must precede or be simultaneous with subjective being, as there can be no entity without objective being that has subjective being.

    The main problem I see with this schema is that there is a strong tendency to describe the objective world in terms of what it would "look like" for a subjective observer that, contradictorily, lacks objective being. This is the "view from nowhere," "view from everywhere," or "God's eye view."

    The problem with it is practical, not necessarily philosophical. For example, it took so long for physicists to propose an adequate solution to Maxwell's Demon because they kept uncritically positing a demon that can observe and store information in memory without possessing any physical/objective memory storage medium. This problem shows up everywhere when we talk about "fundamental differences/information" instead of relative indiscernibility based on which system is interacting with which other. Example: enzymes generally cannot distinguish between a chemical composed of certain isotopes and one that is not. For their interactions, these differences do not exist.

    IMO, there is something missing in this schema. It takes abstractions that exist as part of mental life to be more fundamental than the rest of mental life. However, these abstractions are just parts of mental life, formed from subjective observation and reasoning. A full explanation needs to also explain how the reasoning subject constructs the model and the bridge between the model of the objective that is an element of subjective life and the external world simpliciter. In general, I think this requires subsuming the subjective and objective into a larger whole, not one subsuming the other, as in physicalism and many forms of idealism.

    However, assuming the primacy of one or the other is certainly pragmatically useful (see most models in the natural sciences, phenomenology, some aspects of psychology, etc.).
  • Consciousness is a Precondition of Being


    Like I said, this is thinking of it psychologically. My 11-month-old son experiences sensation; he does not have any concept of being as such. Such a concept necessarily implies an understanding of non-being, the idea that one can meaningfully specify "that which does not exist." Otherwise "being" applies equally to all things and is contentless.

    My friends' toddler children also seem to lack any sense of being as a concept. A similar thing seems to crop up when stroke victims describe their experiences. When I recall deep sleep dreams, they are generally in a strange way linguistic and repetitive, but also contentless.

    Sensation is prior to other parts of consciousness because presumably infants in the womb, dogs, toads, etc. experience sensation. Being doesn't come into it in that a dog's sensation probably lacks any distinction between what it experiences and remembers experiencing and things' existence or non-existence "of themselves."

    The whole concept of appearances versus reality requires that one have been fooled by their senses before. Otherwise, wouldn't the naive point of view be "what you see is what there is"? Sort of how babies lack object permanence. How does a baby in the womb have a concept of what is and what is not? But they appear to have sensation.
  • Will the lack of AI Alignment will be the end of humanity?
    AI doesn't need to fire off nuclear weapons to be extremely disruptive. It simply has to be able to do a variety of middle class jobs or allow those jobs to be done with significantly fewer people.

    AI has the ability to dramatically shift the share of all income that comes from capital (i.e., earnings from legal ownership of an asset). Already, the labor share of income in modern economies has been declining steadily for half a century, the same period during which median real wage growth flatlined.

    With AI, one family could own thousands of self-driving trucks. Each one replaces a middle class job. Their collective income becomes returns on capital instead of wages. If one coder can do the work of 8 with AI, if AI can replace many hospital admin jobs and make diagnosis quicker, if AI can do the legal research of a team of attorneys, etc., then more and more jobs can be replaced and the income derived from that work directed to fewer and fewer people.

    Human ability tends to be on a roughly normal distribution. Income tends to be on a distribution that is weighted toward the top, but still somewhat normal. Wealth, total net worth, is generally on a power law distribution due to cumulative returns on capital. If you have $2 million, you can earn $80,000 in risk-free interest at today's rates.

    In the US already, the bottom 50% own 3% of all wealth. The top 10% own 90+% of stocks and bonds, with the top 1% outpacing the bottom 9% in that slice of the distribution.

    AI can radically destabilize politics and economies simply by throwing formerly middle and upper-middle class families into the growing masses of people whose labor is largely irrelevant. The Left has not caught up to this reality. They still want to use the old Marxist language of elites oppressing the masses to extract their labor. The future is more one where elites find the masses irrelevant to their production. The masses will only be important as consumers.

    Further, you don't need AIs launching nukes to be scared of how they affect warfare. AI will allow very small professional forces to wipe the floor with much larger armies that lack automation.

    Look up autonomous spotting drones and autonomous artillery. This stuff is already here. A swarm of drones using highly efficient emergent search algorithms can sweep over a front line or be fired into an area by rocket. Data from across the battle space helps direct them: air-dropped seismic detectors, satellites, high-altitude balloons, etc. The drones find targets and pass the locations back into a fire control queue. Here an AI helps prioritize fire missions and selects appropriate munitions for each target. From the queue, signals go out that in turn have an autonomous system aim itself and fire a smart munition directly onto the target.

    Active protection and interception will become essential to the survival of any ground forces, and these all will rely heavily on AI. E.g., a drone spots an ATGM launch, which in turn guides an IFV-mounted laser system to turn and fire in time to destroy it.

    Insurgents, a bunch of guys with rifles and basic anti-tank weapons, will be chewed apart by drones swarming overhead with thermal imaging, radar, seismic data, etc. and utilizing autonomous targeting systems.

    The main point is, you won't need large masses of people to wage a war. 40,000 well trained professionals with a lot of autonomous systems may be able to defeat a 20th century style mass mobilized army of 4 million.

    The rate and accuracy of fire missions possible with tech currently in development alone will dramatically change warfare. Ground engagements will mostly be about whose spotter network can gain air superiority. Soldiers will probably rarely shoot their service rifles by 2050. Why shoot and reveal your position when you can mark a target with your optic/fire control and have your IFV fire off an autonomous 60mm mortar on target or you can have a UGV blast it with 20mm airburst?


    You don't need AGI to make warfare radically different, just automation that increases fire mission speed and accuracy fivefold, which seems quite possible.
  • Consciousness is a Precondition of Being
    I think there might be some overthinking in this thread. In English it is common to talk about anything in the external world as "objects," or "systems." Increasingly though, these boundaries are seen as arbitrary. "Entities" is a bit more ambiguous. "Beings," is almost always referring to conscious agents. A being is a system/object but not all objects are beings in everyday English; being is closer to "person."

    The headline "scientists discover beings from outside the solar system," implies alien life not meteors passing through our neighborhood. The common usage of the distinction is simply based on "does it have first person subjective experiences."

    As for sleeping people, consciousness doesn't disappear with sleep. People have REM and deep sleep dreams, even if they can't remember them later. Someone having night terrors and trying to put out a non-existent fire in their bed is obviously conscious in some sense even though they are asleep in important other ways. Someone with sleep paralysis acts like an unmoving object even though they are awake and panicking.

    Because the "Hard Problem" is indeed hard, I don't think there is actually a useful criterion for telling beings and objects apart with this everyday terminology.

    I would also argue against the "terminator" hypothesis that persons (or beings) cease to exist at the moment of death. George Washington is still President Washington. We can meaningfully talk about dead Christians or dead Muslims in a terrorist attack. We can talk about the Austrian dead on the Isonzo even though the people are dead and Austrian Empire is no more.

    Certain elements of identity survive a person's biological life. This makes sense even from a purely physical view. Most of the matter that encodes information about one's identity exists in the brains of other people, not the self. Identity is created by the interaction of self and environment, and is encoded more in the latter than in the former. So death leaves most of the physical elements instantiating identity quite intact, and this is why propositions about the attitudes held by dead people can have truth values.
  • Bernard Gert’s answer to the question “But what makes it moral?”


    I prefer Hegel's definition in the Philosophy of Right. For Hegel, morality is the abstract understanding of "the good," held by rational subjects. Morality is not particular, nor should we try to make it drive particular actions by invoking universal moral laws about how all people should act given X.

    He uses the term "ethical life" to describe how one lives morally in a specific role at a specific time. I like this differentiation because it is able to take account of differences in customs and situations. Right action depends on where and when you are and who you are. What is required of a fire fighter is different from what is required of a mechanic during a fire. Duties and responsibilities are key elements in morality and they are particular to an individual.

    Part of the goal of ethical life is happiness. This must be the case as a rational subject wouldn't choose that life otherwise.

    Rationality drives morality in Hegel, but his theory is also able to account for the fact that what is considered moral by most, presumably rational, people changes dramatically over time. Morality expresses itself as a dynamic historical process, progressing as internal contradictions in a society are resolved. Focusing purely on a universal morality, as opposed to this "ethical life," leads to falling into the is/ought trap.

    The ethics of any time are emergent; they don't come from the "rational individual." The society is the substance, the individuals are its accidents. Because human beings are rational, society progresses towards human freedom, but individuals still act within society.


    For a simple example, "all rational people" might not agree on any number of customs where one individual has to do something to show respect in some symbolic way to another. But it might be moral in some situations to avoid needlessly offending someone. Moreover, depending on your specific role, your response to the same situation should be different. A police officer and a priest shouldn't necessarily respond to some situations the same way.

    No doubt, some of the customs we take for granted today will one day be seen as cancel-worthy by rational people.
  • Consciousness is a Precondition of Being


    Why “consciousness” is given such primacy is puzzling at times, especially when you take a serious look at how we live as human beings in our daily lives.
    Why would that matter? Would consciousness being more essential make more sense if we lived a different way?

    I can see the argument for consciousness being primary. If you think of it psychologically, consciousness, as sensation, is prior to the abstraction of being and of the recognition of the external world as external.

    "Being" presupposes non-being, it's an incoherent concept otherwise, but consciousness as simply sensation precedes any such distinctions.


    So while sleeping or comatose, a person is just a "thing", and not a "being", like a sofa or toilet?

    Try treating them as either and they'll quickly disabuse you of the idea that they aren't conscious.



    There is, for example, a real realm of possibility, but none of its inventory actually exists.

    Right, and possibility is plenty efficient. The presence of unrealized possible states is what defines the entropy of a system, thermodynamics, the entire idea of phase space. It's essential for calculating the heat capacity of metals, etc.
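    To make that concrete with a toy calculation: on the Boltzmann picture, entropy literally counts the possible (mostly unrealized) microstates compatible with a macrostate, S = k_B ln W. A minimal sketch with purely illustrative numbers:

    ```python
    # Toy illustration: entropy as a count of possible (mostly unrealized)
    # microstates, via the Boltzmann relation S = k_B * ln(W).
    import math

    k_B = 1.380649e-23  # Boltzmann constant, J/K

    def boltzmann_entropy(num_microstates):
        return k_B * math.log(num_microstates)

    # A system of N two-state particles has W = 2**N possible microstates,
    # even though only one of them is ever actually realized at a time.
    for N in (10, 100, 1000):
        print(N, boltzmann_entropy(2 ** N))
    ```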
  • What is computation? Does computation = causation
    As to the conservation of information:

    If I have perfect information about a billiard table and can predict an upcoming shot, it does not follow that state S1 before the shot and state S2 after the shot are the same thing or indistinguishable for me. If earlier states of the table, or the universe, are such that perfect information about S1 would allow you to perfectly predict S2, it still seems that recording all the information in the universe at one instant (S1) is not the same thing as recording all the states of the universe (S1 to S max).

    A lot of arguments against the passage of time and existence of change rely on/are motivated by the unwillingness to see computation as anything but something we experience due to being limited beings.

    I have a thought experiment that makes this clearer I will try to dig up.

    Additionally, even if you don't buy that argument, while the universe is in a low entropy state it seems like it should have a lower Kolmogorov complexity, because, given fewer possible microstates, the description of which microstate the universe is actually in does not need to be as long.
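    A crude way to picture the description-length point, using compressed size as a computable stand-in for Kolmogorov complexity (which is itself uncomputable); the example is illustrative only:

    ```python
    # An ordered ("low entropy") state compresses to a far shorter description
    # than a typical disordered one. Compressed size is only a rough proxy for
    # Kolmogorov complexity, which cannot be computed exactly.
    import random
    import zlib

    n = 100_000
    ordered = bytes(n)                                           # all zeros
    disordered = bytes(random.getrandbits(8) for _ in range(n))  # typical random state

    print(len(zlib.compress(ordered)))     # a few hundred bytes
    print(len(zlib.compress(disordered)))  # close to n; nearly incompressible
    ```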
  • Ukraine Crisis


    Western countries didn't give Ukraine tens of billions in aid before Russia invaded because they did not think Russia would actually invade; how Ukraine fared in said invasion isn't the main question there.

    You are conflating the question: "will Russia invade," with "will Russia prevail in annexing Ukraine or a large portion of it, and/or destroying the government and replacing the leadership with one it chooses."

    Those are two different questions. There is a very simple explanation for why governments wouldn't spend in excess of 100% of their annual defense budgets on hardware for another nation before a war actually started.

    This is leaving aside the fact that public announcements of equipment deliveries are based on political/diplomatic considerations and may have little to do with when the decisions were actually made internally. If Ukrainian crews are using the Challenger 2 and Leopards in the early spring, you'll know that the plans were in action months ago.

    If I had to guess, the string of announcements on tanks and IFVs is an attempt to get Russia to rethink a spring offensive, although I don't think anyone thinks it is likely to be successful.
  • Ukraine Crisis


    Why would countries have sent Ukraine weaponry back in 2008? It had a pro-Russian government through 2014, and there was a path towards soft "annexation" à la Belarus for Russia.

    Western countries didn't give Ukraine much by way of military aid pre-invasion over fears that it would provoke Russia. I think it's quite fair to say the West was caught flat-footed at the outset of the war, without good plans for what it should do in the event of a full scale invasion. The Germans and French vocally disagreed with the US about the threat of war, and even top-level US diplomats seemed skeptical about an actual war right up until the invasion.

    Hence, things taking time. The other problem was that the West wasn't sure:

    A. How likely aid was to provoke Russian escalation.
    B. How likely Ukraine was to collapse. You don't want to give them a ton of equipment only to see the Russians take control of it (see: the collapse of the ANA and US aid becoming Taliban material).

    Ukraine's long term survival didn't seem obvious until later in the spring of last year.

    Anyhow, as far as the comparison goes, while it is clear now that the USSR and the UK could likely have eventually defeated Germany, it was far from clear in 1941-43. US policy makers certainly didn't think it was clear, and even after the US entered the war it planned on having to invade Europe with a much larger force, leaving Japan until later. But aid flows were still slow. Point being, even in a case where they obviously wanted to get aid onto the battlefield ASAP, it took over a year to get things in gear.
  • What is your ontology?
    [book cover image]

    Oh, and I'm definitely not an eternalist. I was almost won over by the sheer weight of popular science writers who frame eternalism as a "sure thing," but the above book convinced me a lot of the arguments in favor of eternalism are based on misunderstandings.

    Generally, these fall into two big groups. First, misunderstandings of relativity and assumptions that it precludes local becoming, based on misunderstanding what proper time is in Minkowski space-time. Second, the influence of Russell's work on the subject, which itself is heavily influenced by his understanding of Cantor's work on continuous lines. More importantly, he misses Aristotle's perfect refutation of Zeno's Arrow (i.e., an arrow is never moving in any frozen instant of its flight, and so movement is an illusion), which is a fallacy of composition. Time is the dimension across which change occurs; it cannot exist "in a moment" but is emergent from change. Thus, visualizing space-time as a "block" is fraught, because time isn't a dimension you can move back and forth in, nor something that can "flow"; it is simply the dimension across which local change occurs.

    I am normally pretty open to multiple theories and understand at least why they are popular, but the argument that change is illusory strikes me as ridiculous and I don't see how it became mainstream.

    Anyhow, the book is excellent even if you don't accept its argument. It's a full primer on the philosophy of time and has an excellent summation of the differences between Newton's and Leibniz's philosophies of physics and how they influence views today. It made me realize I have a lot more affinity with Leibniz/Aristotle than Newton/Plato.

    These Springer Frontiers books are generally fantastic. They're also academic publications, so horrendously expensive if you don't have a school membership. But, that's what LibGen is for, although I wish you could tip the authors.
  • What is your ontology?
    I don't have a developed ontology. I feel like I'm doomed to remain in this state. I recall when I first got more interested in the topic, and read the Routledge guide to metaphysics, thinking the counter arguments against each view, on each topic, seemed pretty damning, leaving little untouched.

    I am willing to say the world operates logically, without contradiction. Knowledge is impossible otherwise and nature appears to follow rules that can be represented by mathematics.

    If I had to pick an ontology that seems most enticing, it would be those inspired by Boehme and Hegel, where existence comes about through logical necessity, as the result of the resolution of contradictions (dialectical unfolding). I'm interested in attempts to formalize this (e.g., Lawvere's work on dialectics and the unity of opposites as adjoint modalities, and how this might be paired with categorical quantum foundations), but they're a bit over my head in many cases.

    I have an intuition that the idea of discrete objects that are the sum of their parts is deeply flawed, an illusion foisted on us by how evolution shaped our sensory systems and cognitive capabilities. The constituent parts of things seem to be impossible to describe without the whole (i.e., with fundamental "particles," the field is essential and an object cannot be understood as discrete bits). This would suggest an ontology somewhat similar to mathematical formalism: a thing "is what it does," it is defined by its relations. These relationships can and do change over time, leading to new rules of interaction (this implies some sort of "hard" emergence, but since everything is relational, I don't think this runs into the problems hard emergence faces when it considers wholes as composed of discrete "bits").

    I'm also highly skeptical of identity as anything other than a pragmatic definition, and embrace a circular, fallibilist, "the truth is the whole," epistemology.

    As to things existing sans consciousness, I do believe things exist that are not conscious, and that an external world accessible by multiple agents exists. However, I'm not sure if things existing "of themselves," is a coherent notion, e.g. that a universe of just two identical glass spheres floating in space can exist. If existence is defined relationally, then it is unclear which relationships can meaningfully exist sans agents.

    I'm a physicalist vis-à-vis minds, i.e., brains interacting with bodies and the environment give rise to consciousness, but I'm more agnostic about the ultimate origins of consciousness. Frankly, I don't think any current explanations come close to explaining why certain patterns of information flow/physical interactions should give rise to first person experience while most do not, and most solutions either simply appeal to mysterious "complexity" or attempt to dodge the question in various ways. Given the proliferation of different theories, I think it's fair to say that the study of the origins of consciousness is a field in crisis, not one on the cusp of wrapping things up. But I have no reason to think consciousness is not a natural phenomenon, so I go back and forth on this.
  • Ukraine Crisis


    You might be interested to look at historical analyses of US Lend Lease aid to the UK, and later the USSR. The aid was critical, more so to the UK, eventually supplying a large proportion of all UK material and a substantial proportion of food as well.

    It's hard to argue that the US was "drip feeding" the Brits or Soviets. All internal documentation suggests that the Western allies were extremely concerned that the Soviets would capitulate, even into early 1943, when the threat of Soviet collapse had long passed. However, in both cases, aid was fairly anemic over the first year. Setting up supply chains for the amount of material needed for a major war is not easy.

    It's even harder today given the amount of training required to operate and maintain modern MBTs and IFVs. The quickest way to get vehicles out would be to set up maintenance facilities in Poland and have them staffed by NATO personnel. This takes time too though, getting building leases, giving assignment orders, moving heavy machinery. You also need a plan for moving equipment within Ukraine without getting it blown up.

    It makes sense to stockpile some stuff in Poland and not to turn it over yet, as the material can be neatly laid out in warehouses without fear of Russian missile attacks. The deployment of Patriot missiles to Ukraine, itself not easy, was probably a prerequisite for delivering high-value shipments.

    Point being, people underestimate how long this stuff takes. Even if the decision to go ahead with giving out Challenger 2s was mostly made almost a year ago, it still might take until now to get them out. For one, it makes sense to train people abroad where missile strikes can't hit the equipment.

    If the prior Lend Lease is any indication, peak flows won't start until late 2023 or later.

    I would not be surprised if the M1 is deployed, but perhaps it won't be. It's an absolutely atrocious fuel hog and hard to maintain. The US uses specialized refueling vehicles and crews to make the huge demand work. The thing is even heavier than the giant Merkava, and less than ideal for Ukrainian mud.
  • Why is the Hard Problem of Consciousness so hard?


    Exactly. Depending on how one conceives of God, God could be physical. For example, God as the self-perceiving, omniscient universe experiencing itself, something that comes into effect immanently (e.g. a hive brain organism that encompasses all the mass energy in the universe into itself after having started as one of many intelligent species), is totally conceivable in physical terms.

    The other thing to consider is that the "conservative" position in modern physics has generally been to embrace eternal, timeless laws of physics that exist outside of reality and are unchanged by anything physical. This conception itself comes from Newton's and Leibniz's religious intuitions, but is now perhaps more associated with militant atheism than religion. The thing is, this supposes the existence of eternal Platonic laws, something that seems at odds with physicalism.

    I'm also not sure that idealism necessarily opens the door to the supernatural any more than physicalism. There are plenty of naturalist flavors of idealism. Idealism simply entails that mentation is fundamental. The natural sciences can still be said to describe all that can be known about that mentation.
  • Does Quantum Mechanics require complex numbers?
    Funny, I've seen the argument that even real numbers may need to be excluded from QM.

    The first paper I recall with this argument was Paul Davies', "Universe From Bit." The argument runs like this:

    1. Real numbers with infinite decimals require infinite bits to encode.
    2. Our universe appears to be made up of a finite number of bits.
    3. If the universe "computes itself," as many have suggested, then beam splitter experiments with a very large number of photons would have to encode numbers so small (with so many digits) that doing so would appear to require more information than exists in the entire universe (a rough back-of-the-envelope version of this is sketched after the list).
    4. If the "universe computes the universe," (Landauer) then the reals aren't going to be real. And with very large experiments, you might even be able to test this proposition.
    5. The general acceptance of inviolable Platonic "laws of physics" that are invariant and do not depend on physical reality is the result of the religious inclinations of early pioneers in physics. That is, the modern idea of physical laws, now embraced by many atheists and now the "conservative" position in physics, is the result of the religious instincts of individuals such as Newton.
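
    Here is the back-of-the-envelope version of point (3). The ~10^122-bit figure for the information capacity of the observable universe is a commonly quoted holographic-bound estimate and is used purely as an illustrative cutoff:

    ```python
    # Sketch of point (3): writing down the exact probability that all N photons
    # take the same output port of a 50/50 beam splitter (p = 2**-N) takes about
    # N bits, since -log2(2**-N) = N. The ~1e122-bit figure is a commonly quoted
    # holographic-bound estimate, used here only for illustration.
    UNIVERSE_BITS = 1e122

    def bits_to_specify(N):
        """Bits needed to write p = 2**-N exactly in binary."""
        return float(N)  # -log2(2**-N) = N

    for N in (1e3, 1e30, 1e123):
        needed = bits_to_specify(N)
        verdict = "exceeds" if needed > UNIVERSE_BITS else "fits within"
        print(f"N = {N:.0e}: ~{needed:.0e} bits ({verdict} the bound)")
    ```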

    Notably, theories of quantum gravity also tend to look to a finitist view of the universe.

    However, and I may be totally over my head here, it seems like the objections to the reality of the Reals had to do with their infinite/infinitesimal nature. So, would it be possible to have a physics that requires complex numbers, i, but not the Reals?
  • Ukraine Crisis

    Another strawman. They aren't fighting a war to improve their press freedom index, they are fighting because Russia, a state that perpetrated a massive genocide against them in the 1930s, and another large scale repression after WWII, invaded them and has been raping and pillaging in areas they take.
  • Ukraine Crisis

    If either Russia or Ukraine's public estimates of each other's losses were reflective of reality, we'd see a lot fewer functional units able to engage in operations.


    Before the war, Ukraine was at 68 on the Press Freedom Index, Russia at 51, "problematic" (3/5) versus "very serious," (5/5).

    Ukraine fares relatively worse on corruption indexes, though Russia still does worse there too. Ukraine actually has fairly low inequality, which I always found surprising, though it's also significantly poorer than Russia.
  • Ukraine Crisis
    So, it seems like some of the videos of people complaining about poor treatment, lack of supplies, living outside, etc. are genuine. The Russian MoD has tacitly affirmed this by responding to some of the worst cases, where quite old or legally blind people were mobilized. But the latest batch, which seem much angrier and name commanders as responsible, appear like they are likely staged.

    The same guy appears in several. Mixed in with the poorly equipped conscripts are guys in balaclavas who seem better equipped, and many have Wagner patches. It seems like an attempt to foment dissent, or at least simulate it for public consumption.

    Prior videos were downplayed and accused of being fake in the pro-Russian milblogger space, but these are being boosted by Wagner and Kadyrovite aligned accounts.

    Obviously there is more explicit infighting, with Kadyrov calling out Lapin, and now the response:

    "Kadyrov said that I should be demoted to the rank of private, stripped of all awards and sent with a machine gun in my hands to the front line to wash away my shame with blood. And all this for the fact that I allegedly holed up in Lugansk, 150 kilometers from my units. Well, I took an example from Kadyrov, who sits thousands of kilometers away from his units on a luxurious sofa, sitting on this sofa he has repeatedly taken Kyiv and even prepared for an attack on Poland*.* I didn’t have such a luxurious sofa in Luhansk and I gave orders not on Tik-Tok broadcasts, but via special communications."

    It's a bit worrying because the open infighting and attempts to win public support suggest that leaders now feel they need some sort of wider support outside their status in Putin's regime. I think it is an underplayed risk that Putin might be toppled, or simply die or be disabled by health issues, and that even more reckless and hardline leaders could take control, or that there could be an actual fight for power given there is no clear successor, especially as Putin's popularity falls.
  • Ukraine Crisis
    https://twitter.com/wartranslated/status/1577752565839806472

    There are others of these: conscripts complaining of being given no training, no equipment, no food, old and sometimes non-functional weapons, and no armor, and then being sent to live outside on the border.

    Speedrunning the Russian Revolution, no joke. "Let's just force a bunch of men to leave their jobs and families, give them weapons, and then send them to live unsupplied in squalor. Then order them to fight for us. What could go wrong!?"

    IDK, maybe it's all an advanced psyop, I'm sure that's the explanation being given either way. Or, given the Wagner patches, maybe it's a hit job in an internal fight within Russia.
  • Ukraine Crisis

    IDK, it seems impossible to sell that as a war aim now. What is Putin going to say? "We had to do this to stop Ukraine from joining NATO. So this is a success. Ukraine is now only on fast track application status for NATO, something they lacked before, and two of our wealthy neighbors joined NATO, but sometimes you need to break a few eggs to make an omelet, right? Oh, and we annexed new areas into Russia but then lost them. And I guess 60-80,000 Russians died and we totally drained down our military stockpiles and got economically isolated from the world and lost our main trading partners. But all in all it worked out because Ukraine isn't in NATO, even though they are closer to joining now than before, and now have a well supplied military of a million men with service experience."

    I mean, I wish Putin would just resign and flee with his billions, but I can see why, from his perspective, he can't withdraw, because the entire thing is a humiliating disaster.
  • Where Do The Profits Go?

    This falsely assumes the interests of the firm are contained in a proper reading of a balance sheet
    You have a point. That's why I mentioned economic profits, which include implicit costs, not just what goes on the balance sheet. A statement of net position obviously doesn't tell you everything you need to make business decisions, but it does tell you things you need to know to keep a business from running long-term losses and thus having to close.
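    A toy worked example of that distinction, with made-up numbers:

    ```python
    # Made-up numbers, only to illustrate the accounting- vs economic-profit
    # distinction above: economic profit also subtracts implicit costs (the
    # owner's forgone salary, forgone return on invested capital) that never
    # appear on a balance sheet or income statement.
    revenue        = 500_000
    explicit_costs = 380_000          # wages, rent, materials, etc.
    forgone_salary = 90_000           # what the owner could earn elsewhere
    forgone_return = 0.05 * 200_000   # 5% opportunity cost on invested capital

    accounting_profit = revenue - explicit_costs
    economic_profit = accounting_profit - forgone_salary - forgone_return

    print(accounting_profit)  # 120000 -> looks healthy on paper
    print(economic_profit)    # 20000.0 -> much thinner once implicit costs count
    ```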

    As for Lehman Brothers, yes, stockholders absolutely lost more financially in direct relation to the bankruptcy. Bondholders and other creditors get paid first in a liquidation, which, if I recall correctly, did not cover all liabilities, so shareholders got wiped out.

    In the larger context, employees might have lost more because they ended up losing their jobs right as the economy went south for an extended period. But had Lehman gone bankrupt during a bubble, their financial losses would have been less, since finding similar work would have been much easier.

    This doesn't get into the fact that there are non-financial costs related to losing your job like stress, or social costs if you really liked your job and coworkers. It doesn't get at the larger point that, for a large firm like Lehman Brothers, the major shareholders are all extremely wealthy, and can thus absorb losses better. However, in general owners lose more when a company goes belly up because an asset they have invested in becomes worthless, while workers lose only a stream of income, which might be replaced fairly easily depending on the industry.

    But the Lehman example is why I said labor should likely have more of a say in larger firms. It's quite a different situation from an experienced lawyer or plumber starting their own business, having to invest in all the tools and overhead, and hiring half a dozen or so assistants. Mandating that the owners of firms turn over control of their firm as soon as they hire any help would create a powerful incentive for businesses not to expand, which isn't what you want.

Count Timothy von Icarus
