There is no such thing as causation that goes wrong. It does what it does; it is infallible by its nature. Whereas at every step, the possibility of error hovers over computation.
True, but is there such a thing as computation that goes wrong in the abstract sense? Can the square root of 75 ever not be roughly 8.66, with its unending string of trailing decimals? The very fact that we can tell definitively when computation has gone wrong is telling us something. If we think causation follows a certain logic, e.g., "causes precede their effects," we are putting logic posterior to cause. But just because we can have flawed reasoning and be fooled by invalid arguments, it does not follow that logical entailment "can go wrong."
When computation goes wrong in the "real world," it's generally because we want a physical system to behave in a way that computes X, but we have not actually set the system up to do so.
Causal processes don't inherently require continuous energy input. Strike the cue ball, and the billiards will take care of themselves. Whereas a computational process requires energy at every step to proceed. Cut the power, occlude the cerebral artery, and the computation comes to a screeching halt.
I was coming from the surprisingly mainstream understanding in physics that all physical systems compute. It is actually incredibly difficult to define "computer" in such a way that only our digital and mechanical computers, or things like brains, count as computers, while the Earth's atmosphere or a quasar does not, without appealing to subjective semantic meaning or to arbitrary criteria not grounded in the physics of those systems. The same problem shows up even more clearly with "information." Example: a dry riverbed is soil encoding information about the past passage of water.
The SEP article referenced in the OP gives good coverage of this problem; to date, no definition of computation grounded in physics has successfully avoided the possibility of pancomputationalism. After all, pipes filled with steam and precise pressure valves can be set up to do anything a microprocessor can. There are innumerable ways to instantiate our digital computers; some are just not efficient.
In this sense, all computation does require energy. Energy is still being exchanged between the balls on a billiard table, just as a mechanical computer's gears will keep turning and producing an output even without more energy entering the system.
I do have a thought experiment that I think helps explain why digital computers or brains seem so different from, say, rocks, but I will put that in another thread, because that conversation is more about "what is special about the things we naively want to call computers."
---
As to the other points, look at it this way. If you accept that there are laws of physics that all physical entities obey, without variance, then any given set of physical interactions is entailed by those laws. That is, if I give you a set of initial conditions, you can evolve the system forward with perfect accuracy because later states of the system are entailed by previous ones.
All a digital computer does is follow these physical laws. We set it up in such a way that, given X inputs, it produces Y outputs. Hardware failure isn't a problem for this view: if hardware fails, that failure was itself entailed by prior physical states of the system.
If the state of a computer C2 follows from a prior state C1, what do we call the process by which C1 becomes C2? Computation. Abstractly, this is also what we call the process of turning something like 10 ÷ 2 into 5.
What do we call the phenomenon whereby a physical system in state S1 becomes S2 due to physical interactions defined by the laws of physics and their entailments? Causation.
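Here is a toy sketch in Python of what I mean (purely my own illustration, not real physics): a single deterministic update rule stands in for "the laws of physics," and the very same state transition can be read as causation or as computation depending on which vocabulary you prefer.

```python
# A toy "physics": each cell's next value is the XOR of its two neighbours.
# Nothing here models actual physics; it just makes entailment literal.

def step(state):
    n = len(state)
    return tuple(state[(i - 1) % n] ^ state[(i + 1) % n] for i in range(n))

def evolve(initial, ticks):
    history = [initial]
    for _ in range(ticks):
        # Call this line "causation": S1 becomes S2 under the rule.
        # Or call it "computation": C1 is mapped to C2. Same transition.
        history.append(step(history[-1]))
    return history

for s in evolve((1, 0, 0, 1, 0, 1), ticks=4):
    print(s)
```

Given the initial condition, every later state is fixed; nothing about the run changes whether you describe it in causal or computational terms.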
The mistake I mean to point out is that we generally take 10 ÷ 2 to be the same thing as 5. Even adamant mathematical Platonists seem to be nominalists about computation. An algorithm that specifies a given object, say a number, "is just a name for that number." My point is that this obviously is not the case in reality. Put very simply, dividing a pile of ten rocks into two piles of five requires something. To be sure, our grouping of physical items into discrete systems is arbitrary, but this doesn't change the fact that even if you reduce computation down to its barest bones, pure binary, 1 or 0, i.e., the minimal discernible difference, even simple arithmetic MUST proceed stepwise.
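To make the stepwise point concrete, here is a small illustrative example (again my own sketch): dividing 10 by 2 via repeated subtraction, so every intermediate state between "10 ÷ 2" and "5" is laid out explicitly.

```python
# Division by repeated subtraction: the quotient is not simply "another name"
# for 5 here; a sequence of intermediate states has to be passed through.

def divide_by_subtraction(dividend, divisor):
    remaining, quotient, trace = dividend, 0, []
    while remaining >= divisor:
        remaining -= divisor   # one discrete step: move divisor-many rocks to the other pile
        quotient += 1
        trace.append((remaining, quotient))
    return quotient, trace

q, steps = divide_by_subtraction(10, 2)
print(q)       # 5
print(steps)   # [(8, 1), (6, 2), (4, 3), (2, 4), (0, 5)] -- the "something" between 10÷2 and 5
```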
Communication would seem to require encoding, transmission, and decoding. A causal process sandwiched between two computational ones?
Sure, but doesn't computation require all of that? Computer memory is just a way for a past state of a system to communicate with a future state. When you use a pencil for arithmetic, you are writing down transformations to refer to later.
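A minimal sketch of that framing, with function names I made up just for illustration: memory is the "channel" through which a past state messages a future one, with computation on either side of the causal middle.

```python
# encode/transmit/decode are names invented for this sketch; "channel" is just a
# list standing in for RAM, a pencil mark, or any other persistent medium.

def encode(text):
    return text.encode("utf-8")          # computation: symbols -> bits

def transmit(payload, channel):
    channel.append(payload)              # the causal middle: bits persist in a medium
    return channel

def decode(channel):
    return channel[-1].decode("utf-8")   # computation again: bits -> symbols

channel = []
transmit(encode("partial result at time t1"), channel)
print(decode(channel))                   # the "future" state reads what the "past" state wrote
```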
We might be the recipient of a message transmitted onto a screen, but in an important sense our eyes send signals, communications, to the visual cortex for computational processing. A group of neurons firing in a given pattern can act as part of a signal/message to another part of the brain, but also be involved in computation themselves.
This is what I call the semiotic circle problem. In describing something as simple as seeing a short text message, it seems like components of a system, in this case a human brain, must act as interpretant, sign, and object to other components, depending on what level of analysis one is using. What's more, obviously at the level of a handful of neurons, the ability to trace the message breaks down, as no one neuron or logic gate contains a full description of any element of a message.
Even in systems modeled as Markov chains, prior system states can be seen as sending messages to future ones. The two concepts are distinguishable, but often only barely. I will look for the paper I saw that spells this out formally.
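Until I find that paper, here is a bare-bones Markov-chain sketch (my own toy, not the formal treatment): the only "message" a future state ever receives is the current state plus the transition probabilities.

```python
import random

# Hypothetical two-state chain; the probabilities are made up for illustration.
TRANSITIONS = {
    "A": [("A", 0.9), ("B", 0.1)],
    "B": [("A", 0.5), ("B", 0.5)],
}

def next_state(current):
    r, cumulative = random.random(), 0.0
    for state, p in TRANSITIONS[current]:
        cumulative += p
        if r < cumulative:
            return state
    return state  # fallback for floating-point edge cases

state = "A"
for _ in range(10):
    # At each step, the prior state "tells" the next one everything it will ever know.
    state = next_state(state)
    print(state, end=" ")
```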