It sounds like the part of my model that still hasn’t gotten through to you is my differentiation between appetites and desires or intentions, which is analogous to the difference between sensation and perception or belief. — Pfhorrest
No, I get that bit.
these are not claims about any particulars of human psychology or neurology, these are just different concepts. — Pfhorrest
Yes they are: each of those processes takes place in a brain. Sensation>perception>belief and appetite>desire>intention are directional, staged processes which have no medium other than neurons through which to act. So if we can find no neural equivalent (or if we find a neural equivalent which, once labelled as such, reveals additional steps) then your picture cannot actually be the case. The alternative is to say that you can have a conceptual scheme regardless of the physical reality of its subject - in which case any conceptual scheme would work. If I disagreed with you and said "no, it goes intention>desire>appetite", how would you argue against that without invoking empirical evidence for what actually happens?
For the sake of perhaps communicating where I think you're going wrong, however, let's take your model as our basis. Beliefs about reality go reality>sensations>perceptions>beliefs. Intentions (ways things ought to be) go reality>internal states>interoception (what you're calling appetites)>desires>intentions.
When we make assumptions about the objective truth of our beliefs about the world, we assume they are objective because we assume we share reality, the bit at the beginning of the chain. It's a reasonable assumption. Get enough people together and the errors in any individual's chain should revert to the mean, giving a good account of that which is shared (reality).
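To put that in code for a moment, here's a toy Python sketch (all numbers and names invented) of why the pooling works: each observer's sensation is the shared value plus individual error, so the pooled estimate reverts to what is shared.

```python
import random

random.seed(0)

SHARED_REALITY = 20.0  # the assumed common cause (hypothetical value)

def sensation(reality, noise=2.0):
    """One observer's chain: reality -> sensation, with individual error."""
    return reality + random.gauss(0, noise)

# Any one observer may be well off; pooled, the errors revert to the mean.
observers = [sensation(SHARED_REALITY) for _ in range(10_000)]
pooled = sum(observers) / len(observers)
print(f"single observer: {observers[0]:.2f}, pooled estimate: {pooled:.2f}")
# the pooled estimate lands close to 20.0 - a good account of what is shared
```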
What you're claiming to be the same thing is not the same thing at all. With your model of intention, each step is not caused solely by the previous one.
We can model descriptive data points because (and only because) we assume a cause. Our modelling process is exactly to speculate as to the cause of our sensations (and thereby predict the results of our responses). Without a cause, the modelling makes no sense at all.
So with sensations of pain and hunger we might model how they were caused; even with our desires we could model how they were caused; but none of this gets us anything prescriptive.
The assumed 'reality' is not...
...merely the whatever-it-is that lies in the direction that our ever-growing accumulation of sensations is headed. — Pfhorrest
It is
the cause of our ever-growing accumulation of sensations.
We also have an ever-growing accumulation of desires, hedonic sensations, etc. We can model the cause of those too. But nowhere in that model would there be anything that we 'ought' to do.
by “appetites” I mean the “sensations” of pain, hunger, etc. These do not directly tell us (or constitute us thinking) that particular states of affairs ought to be the case — Pfhorrest
Appetites do not tell us that particular states of affairs ought to be the case indirectly either. They tell us only about the state of our endocrine system. We interpret that state as an attraction or a repulsion.
Moral blame is about the behaviour of others, so what matters is the point of inter-subjectivity. With both sense data and interoception data, the point of inter-subjectivity is the assumed cause (reality).
Intention requires inputs from outside of the chain you specify. It's not sufficient for us to have appetites derived from reality; first we must desire some valence of those appetites. An internal model which assumes some target valence for internal sense data may be either learned (such as feeling full after a meal) or hard-wired (such as osmoregulation). The target valence comes from a predictive model about the origin of the sense data (i.e. something goes wrong if that valence is not maintained). What that something is could be biological or cultural.
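A toy control loop, in Python with invented names and units, might make the point clearer: the sensed value is derived from the body (the appetite chain), but the setpoint is a free parameter that the sensed data alone cannot supply.

```python
def osmoregulation_step(sensed_concentration, target_concentration, gain=0.1):
    """One step of a homeostatic loop (hypothetical units).

    sensed_concentration comes from interoception (the appetite chain);
    target_concentration does NOT - it is supplied by the predictive model
    (learned or hard-wired), not by the sensed data itself.
    """
    error = target_concentration - sensed_concentration
    return gain * error  # corrective drive (e.g. thirst)

state = 310.0
TARGET = 300.0  # the setpoint: nothing in `state` alone tells you this number
for _ in range(50):
    state += osmoregulation_step(state, TARGET)
print(f"settled near target: {state:.1f}")
```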
Then these desires must be weighed against competing ones to produce intentions. The weighing most often takes place in the ventromedial prefrontal cortex - i.e. it's what we might call a rational process; there's some actual calculation going on. But it also takes input from models of interocepted states - you'll make a different calculation in a different hormonal environment. So intention depends not only on desires (which are already somewhat culturally mediated), but on your varying endocrine states.
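Sketching that weighing as code (invented weights; obviously not a claim about what the vmPFC literally computes): the same competing desires yield different intentions once the calculation is modulated by hormonal state.

```python
def choose_intention(desires, hormonal_modulation):
    """Weigh competing desires; the calculation shifts with endocrine state.

    desires: {name: base_strength}; hormonal_modulation: {name: multiplier}.
    All numbers are hypothetical.
    """
    weighted = {name: strength * hormonal_modulation.get(name, 1.0)
                for name, strength in desires.items()}
    return max(weighted, key=weighted.get)

desires = {"eat": 0.6, "flee": 0.5}
print(choose_intention(desires, {"flee": 1.0}))  # calm state -> 'eat'
print(choose_intention(desires, {"flee": 2.0}))  # adrenal state -> 'flee'
```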
None of this is to say that beliefs about reality are not also influenced by these systems, but they (unlike intentions) have a short-term checking system to tie them back to the assumed source. If we think we see a tiger (because perhaps we're scared and so our judgement of shadows is skewed toward an explanation for that fear), we will, within seconds, focus on audiovisual input that could confirm such a belief. If, however, we have an intention to make the world some way in order to reduce/increase the valence of some appetite to its desired level, we cannot check that. We could check whether the intention does indeed reduce/increase the valence of the appetite. But we cannot check whether the target valence is the 'right' valence (there's nothing to check it against), nor can we check whether the weighing of competing targets is 'right' (again, there's nothing to check against). This is because the targets (as opposed to the causes) are not derived directly from an external source.
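The asymmetry, as one last toy Python sketch (names and numbers mine): a belief yields a prediction you can score against fresh input from the assumed shared source; a target valence yields nothing you can score it against.

```python
def check_belief(prediction, fresh_sensation, tolerance=0.5):
    """Beliefs are checkable: compare the prediction against new input
    from the assumed shared source (hypothetical numeric stand-ins)."""
    return abs(prediction - fresh_sensation) <= tolerance

def check_target_valence(target):
    """We can check whether acting on an intention moves the appetite's
    valence toward `target`, but not whether `target` itself is 'right':
    there is no external referent to compare it against."""
    raise ValueError("nothing to check it against")

print(check_belief(prediction=4.2, fresh_sensation=4.0))  # True: belief survives
# check_target_valence(0.8) would only raise - no referent exists
```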
Essentially (in spite of my extremely long-winded explanation) your error is simply that you say "because we do X with Y we can do it with Z" without any supporting argument. Just because we can make falsificationist-type inferences about causes does not automatically mean we can do the same with intentions; they are two different processes (as I've just explained). It's like saying that because putting petrol in a car makes it go, it must be OK to do the same to a horse because they're both forms of transport.