I'll have to check it out. Seems like similar conclusions to Hoffman's as far as the noumenal world is concerned, at least in that the reality of the 'view from nowhere' would be an objectless world, quite indescribable to us. One of the more interesting theories Hoffman had was that elements of how we perceive three-dimensional space might be an artifact of an error-correcting code used in perception. This might explain how we have observations supporting the Holographic Principle but don't experience a two-dimensional world, or at least it explains it more plausibly than some of the other accounts I've heard.
I do think his arguments on how evolution shaped perception do a good job explaining why we insist on thinking about very small things as just pared-down versions of the sorts of "medium sized objects" we have around us. We have evidence that some of our "deepest" measures of physical reality are subjective at heart (e.g., entropy re: the Gibbs Paradox), and this subjectivity flows from thinking about small objects as just being shrunk-down large objects. For example, if haecceity is simply a property of the statistical tendency of very large collections of small things not to become completely indistinguishable from one another, then a great deal of how we measure potential microstates (the use of extensive formulas) is arbitrary, the result of how we evolved to deal with very large collections of "stuff," not of how that stuff actually appears to work at the scales we are investigating. So we also get non-extensive forms of entropy (Tsallis entropy, etc.) and end up with situations where entropy is different if we know we have mixed two different gases than it is if we do not know this fact (i.e., subjective). But this seems like a problem when we are defining time using this same measure.
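To make the Gibbs Paradox point concrete, here is a minimal sketch of the standard textbook version, assuming two ideal gases of n moles each at the same temperature and pressure, separated by a partition that is then removed:

\[
\Delta S_{\text{mix}} = 2nR\ln 2 \;\; \text{(gases counted as different)}, \qquad \Delta S_{\text{mix}} = 0 \;\; \text{(gases counted as identical)}.
\]

The only thing that changes between the two cases is the bookkeeping: whether permutations of "identical" particles count as distinct microstates (the N! division that restores extensivity). Which answer you get depends on a counting convention, not on anything the gas itself does, which is the subjectivity being pointed at above.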
The main analogy used in The Case Against Reality is that of a computer desktop screen. On a desktop, you see icons representing a trashcan you can drag files into. You rearrange files in folders. All this is completely unlike how the changes are actually processed within the computer, as adjustments of microscopic logic gates. We experience an email dragged into a trash can. We can also pull the email back out of the can. In reality, all that goes on is a very large number of changes in a few different types of logic gates relative to others. The idea is that our senses similarly provide us with a useful "desktop" interface. The "real world" might be as far from our sensory models as the electrical activity of a microprocessor is from dropping one file into another. In which case, a lot of our deep scientific problems might stem from the projection of the logic of our "desktop interfaces" onto the world.
For Hoffman it is ludicrous to talk of "neurons giving rise to minds" as such, because "neurons" only exist in the minds of human beings. Our idea of neurons might have some relation to the noumenal, but it is by no means very direct.
Having read Becker's What is Real? on quantum foundations just before The Case Against Reality, I found this to be more reasonable than I might have before. After all, we have this huge issue of an arbitrary classical/quantum divide hanging over our sciences that seems less and less supportable. We also have all sorts of holes that have emerged in the conventional model of relativistic space-time, such that space-time looks to be in similar shape to Newtonian space and time circa the late 19th century (i.e., ready for replacement).
We know our laws are just gross approximations. Newton's Laws don't actually describe how physics works; they describe an idealized system with only two bodies, where those bodies do not have composite parts and are in isolation. The issue is that we mistake predictive power and usefulness for veracity, which is falling into the very same trap as assuming our senses must report the world "as it is" because they are useful. But selection for usefulness is a different criterion than selection for truth.
Indeed, in Deacon's Steps to A Science of Biosemiotics it's pointed out that life has to filter out truth, as actual representations of the truth would entail bringing so much entropy into the organism that it would dissolve.
My own personal view is that the "view from nowhere" or "God's eye view" is a sort of demon haunting the sciences. It makes no sense in the context of our physics. Things only have and exchange information in relation to other things. We need descriptions that have this baked into the cake. A thing is what it is for something else. Fichte was totally right to write off the "thing in itself" as nonsense.
We also need to recognize that, within a single phenomenon we want to examine, a thing might fit into any of the three points of a semiotic triangle. Vision might have the object seen as the referent, the patterns of photoreceptor activation sent down the optic nerve as symbol, and the brain as interpretant. But within the context of the entire phenomenon it may be that a pattern of neuronal activation in one region of the brain is the referent, the symbol is another pattern of activation within neurons connecting the referent area to another processing area, and the interpretant is this new processing area, which may itself be a symbol or referent in another set of parallel relationships. These relationships, and the definition of systems themselves, are necessarily arbitrary subjective abstractions. That's fine; it turns out entropy is too. It will still pay to have a way to formalize these subjective relationships somehow.
Information, when defined as the "difference that makes a difference," changes in context. Objects that may be synonyms in one type of relationship might be distinct in another.
I feel like this is sort of inevitable. You have to assume the noumenal exists from the outset. Otherwise you can always just see science as a description of how mental objects interact. You can't fully exorcise radical skepticism, but plenty of good works have been done to talk people down off the ledge of skepticism (the Phenomenology of Spirit being the example I always look to).
Kastrup's The Idea of the World, while advancing a less than convincing idealist ontology, has a very succinct overview of the seemingly intractable problems facing realism and particularly physicalism. But the interesting thing to me is that the same arguments he uses can be easily flipped around to show how, assuming that physicalism is true, we would still have these same intractable issues anyhow.
Science will always have a problem explaining the characteristics of subjective experience because you're asking a set of abstractions that exist as a subset of the world of experience to explain all of experience. It's asking something ontologically derivative to explain its ontological primitive. However, this would seem to be a problem even if physicalism were the case, so in a way it's a helpful explanation of physicalism's apparent failings and only damaging to certain formulations of naive realism.
This seems like a real benefit. Parts of the Hard Problem can be chalked up to the same sorts of epistemological issues that keep solipsism and radical skepticism viable. I find this far more convincing than arguments that deal with the issue by telling you that experience isn't actually real, which always seemed like a giant bait and switch to me.