The two AIs DO have a causal role, just not a conscious one - since they aren't conscious. The critical issue is that there's no basis for holding them accountable. (more on this later).
Without getting lost in the weeds on the causal aspect, could you elaborate on why you believe the AIs cannot be held accountable, and why you think the human brain and nervous system are different from a cybernetic neural network? To me this sounds like anthropic bias.
You don't need agency to HAVE compassion. You need agency to act on this compassion.
I disagree. Instincts are actions everyone would agree are automatic, requiring no agency to be instantiated. I am simply saying all action looks like instinct when you have enough information.
You're analyzing an instance of an optical illusion - and illusions are notable only because they are exceptional. I'm talking about sensory input IN GENERAL. You don't skeptically analyze every object you encounter in the course of everyday life simply because of the possibility that you are misperceiving it.
It really is irrelevant whether our reality is composed mostly of optical illusions, although there is an interesting conversation to be had about how the brain constructs what you perceive: you are not really perceiving "out there," you are perceiving your brain's model. Your point is irrelevant because my argument is not based on how common illusions are, but on whether a particular thing actually is an illusion. And it seems obvious that the phenomena of self, will, and consciousness are all just mental constructs, not some spooky thing that floats to the left of your prefrontal cortex.
My position is that "free will" is a concept associated with responsibility and accountability.
Sure, that's what all compatibilists argue. I just think it is an unnecessary maneuver. A rabid dog has no "agency" or "free will," but you would shoot it if it were attacking your baby. Invoking free will in order to have accountability is an artifact that is no longer needed.
It makes perfect sense to hold someone accountable for their actions: the actions one takes are a consequence of one's beliefs, genetic dispositions, environmentally introduced dispositions, one's desires and aversions, and the presence or absence of empathy, jealousy, anger, passion, love, and hatred. These factors are processed by the computer that is our mind to make a choice. If the consequences of that choice cause harm to someone else, how SHOULD others respond? Should they just excuse it because he had no choice (this seems to be the implication of your position)? No. We know he could have chosen differently had he been less reckless, or considered others, or any number of things. By being held accountable, that person becomes less likely to repeat the mistake - because he will have learned something. In effect, his programming will be changed, because consequences provide a feedback loop that changes him.
You are arguing against a straw man. I never made such an argument; I never once advocated undermining any notion of responding to someone who may be harmful to others.
Suppose the AIs in your example could experience pain, pleasure, regret, empathy, love, and hate; that they had desires they worked to fulfill for the positive feelings they would experience, and aversions they avoided because of the negative feelings they would experience. Also suppose they could relate their choices to the consequences, including the emotions those choices invoked, and that they could reprogram themselves so that future choices would produce more positive and fewer negative outcomes. That would be closer to the "free willed" choices of humans. Whether or not we call it "free will" is irrelevant - my point is that accountability and responsibility comprise a feedback loop that we should acknowledge exists, and be glad of it. You weaken or break the loop when you deny accountability.
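Whether or not we use the label, the feedback loop itself is easy to make concrete. Here is a minimal sketch in Python, assuming a toy agent whose "dispositions" are just numeric preferences over actions; every name in it (FeedbackAgent, choose, update) is hypothetical and invented for illustration, not a claim about how any real AI is built.

```python
import random

class FeedbackAgent:
    """Toy agent whose 'programming' is a table of action preferences."""

    def __init__(self, actions, learning_rate=0.1):
        self.preferences = {a: 0.0 for a in actions}  # initial dispositions
        self.learning_rate = learning_rate

    def choose(self):
        # Pick the currently most-preferred action (ties broken at random),
        # analogous to the mind processing its dispositions into a choice.
        best = max(self.preferences.values())
        candidates = [a for a, p in self.preferences.items() if p == best]
        return random.choice(candidates)

    def update(self, action, consequence):
        # Accountability as feedback: the felt consequence (positive or
        # negative) nudges the disposition that produced the action, so the
        # same inputs are less likely to yield the same harmful choice again.
        self.preferences[action] += self.learning_rate * consequence

agent = FeedbackAgent(["reckless", "considerate"])
for _ in range(50):
    action = agent.choose()
    # Hypothetical environment: reckless acts are punished, considerate
    # ones rewarded.
    consequence = -1.0 if action == "reckless" else 1.0
    agent.update(action, consequence)

print(agent.preferences)  # "considerate" ends up strongly preferred
```

The point of the sketch is only that holding the agent to consequences changes its future behavior; denying it that feedback would leave the preferences frozen.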
This is just an extension of your learning argument, which I already responded to. I do not think learning something or having code or neural networks altered does anything for the notion of free will, which is why I invoked the AI example in the first place.