Ha, so you were more correct about Hoffman than I. I had read about 90% of The Case Against Reality by this point, before getting distracted by Wilczek's The Lightness of Being.
In the first 90% of the book, Hoffman makes an argument against physicalism from a physicalist perspective, and he keeps pointing out his commitment to the existence of the noumenal. I got distracted because, like many popular science books, it turned into a literature review of famous studies, showing how each supported his propositions. Good supporting evidence to lay out, but it can get a bit dull, especially after the opening arguments were so interesting.
In the last small bit of the book he radically switches gears and proposes a totally idealist ontology. It's a mathematical model that has finite conscious agents as its ontological primitive. Each agent possesses a measurable space of different possible experiences and of decisions it can make, which can be described probabilistically. This allows theories based on the model to be tested empirically. Decisions by any agent change the future options for decisions it will have, and agents interact by changing each other's options and experiences, so these units are Markov kernels in the model.
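Schematically (my own notation here, not anything lifted from the book): an agent comes with a measurable space of possible experiences $(X, \mathcal{X})$ and of possible actions $(G, \mathcal{G})$, and its decision step is a Markov kernel,

$$
D : X \times \mathcal{G} \to [0,1], \qquad D(x,\,\cdot\,) \ \text{a probability measure on } (G, \mathcal{G}) \ \text{for each experience } x \in X,
$$

and interaction is just more kernels: one agent's chosen action fixes the probabilities over another agent's next experience, so the whole network of agents evolves like a Markov chain.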
A key point to recall here is that, while of course we observe unconscious things like rocks, i.e. things that are not conscious agents, in this view everything we observe is unconscious. When we observe another person, we observe our icon of them. The icon is not sentient; it is our representation.
The ontology recalls "It from Bit" in some ways, too, with the idea that complex conscious agents with a wide menu of possible experiences and decisions would be composed of simpler agents. For example, the odd behavior and experiences of people with split brains would be explained by a more unitary agent having been separated into something closer to two agents in many ways. At the ontological bottom of the model would be agents with just a binary selection of actions and binary information inputs. This allows for the possibility of a neat tie-in to information-theoretic versions of physics and the participatory universe concept.
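To make the "binary agents at the bottom" idea concrete for myself, here's a toy sketch (entirely my own construction, not anything from the book, and the kernel numbers are arbitrary) of two one-bit agents coupled through Markov kernels, where each agent's action becomes the other's perceptual input:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(dist):
    """Draw an index from a discrete probability distribution."""
    return rng.choice(len(dist), p=dist)

class BinaryAgent:
    """Toy one-bit agent: binary experiences and binary actions,
    linked by row-stochastic (Markov) kernel matrices."""

    def __init__(self, perceive, decide):
        # perceive[a] = P(experience | other agent's action a)
        # decide[x]   = P(action | own experience x)
        self.perceive = np.asarray(perceive, dtype=float)
        self.decide = np.asarray(decide, dtype=float)

    def step(self, incoming_action):
        """Perceive the other agent's action, then choose an action."""
        experience = sample(self.perceive[incoming_action])
        action = sample(self.decide[experience])
        return experience, action

# Two coupled one-bit agents; each one's action is the other's input.
a = BinaryAgent(perceive=[[0.9, 0.1], [0.2, 0.8]],
                decide=[[0.7, 0.3], [0.4, 0.6]])
b = BinaryAgent(perceive=[[0.6, 0.4], [0.1, 0.9]],
                decide=[[0.5, 0.5], [0.8, 0.2]])

action_a, action_b = 0, 0  # arbitrary starting actions
for t in range(5):
    exp_a, action_a = a.step(action_b)
    exp_b, action_b = b.step(action_a)
    print(f"t={t}: A experiences {exp_a}, acts {action_a}; "
          f"B experiences {exp_b}, acts {action_b}")
```

Composite agents in his picture would just be much larger spaces built by wiring lots of these one-bit units together, which is where the "It from Bit" flavor comes in.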
His claim is that such a model can make empirical predictions, and could serve as a basis for working up to our laws of physics, and describing evolutionary biology. That's a big task, but I see how it seems at least possible in theory.
In the last bit, he gets into the concept of an infinite conscious agent. This infinite agent would have little in common with the anthropomorphic deities of many religions: it could actually be described mathematically, but it would not be omniscient, omnipresent, etc. Rather, it has an infinite potential number of experiences and actions. He poses the possibility of a scientific theology of mathematical theories about such infinite agents. This, to me, sounds like the Absolute, and he does mention Hegel by name in the chapter.
Neat stuff. A bit jarring to have it at the end, although I get why he did it that way. If he started with it, many people would drop the book, and there is value in his critique of current models even if you think his model is nonsense. It's a little disappointing, though, because I'd rather have more material on the theory at the end, but I also see how creating such material is incredibly difficult. It's likely a task akin to the heroic (and somewhat successful) attempts to rebuild the laws of physics without any reference to numbers, totally in terms of relationships. If anything, his idea requires an even larger rework of how we think about things, but the payoff could be considerable if it allows us to make new breakthroughs in the sciences.
So, you were right and I was wrong. I do still like his initial framing, which keeps the noumena, more in some ways; it seems far more accessible to the public at large.
Side note:
Like Kastrup, he denies that AI could be sentient. I don't know why this bothers me so much. But it seems like, if we get AI that can pass a Turing Test, which we may well see this century (GPT-4 is coming soon), this standpoint is going to open a huge philosophical can of worms. I don't see why an idealist ontology that can allow for new life being created through sexual reproduction should necessarily have such issues with life being created synthetically. Even if we never get to fully digital AI, hybots, i.e. AI that uses both neural tissue and silicon chips, could get us there. We already have basic hybots, for example a small robot that moves using rat neurons. But then you'll end up with the question of how much of the entity needs to be composed of neurons versus silicon for it to be sentient, which seems like it will lead to arbitrary cutoffs that don't make sense.
All in all, I am not totally convinced by his solution to the problem he diagnoses, but I like it way more than the Kastrup model in The Idea of the World. I need to take a break from these sorts of books; they're going to end up converting me.