AI systems must be designed and deployed with a high level of transparency, providing clear information about how they operate and their decision-making processes. Users should understand how AI influences outcomes that affect them. — Benkei
"Users should", or "users can upon request"? "Users should" sounds incredibly difficult to satisfy. I've had some experience with a "users can" framework while developing scientific models that get used as part of making funding decisions for projects, though I never wrote an official code of conduct.
I've had some experience dealing with transparency and explainability. My intuition is that it's mostly approached as a box-ticking exercise. I think a minimal requirement is being able to reproduce the exact state of the machine which produced the output that must be explained. That could be because the machine is fully deterministic given its inputs and you store the inputs from users. If you've got random bits in the code, you also need to store the seeds.
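A minimal sketch of what I mean, assuming a toy scoring function with a random component (the function, field names, and numbers are all invented for illustration): log the inputs and the seed at decision time, and a later replay must reproduce the output bit for bit.

```python
import json
import random

def score_application(inputs: dict, seed: int) -> float:
    # Toy scoring rule with a random component (e.g. jittered tie-breaking).
    rng = random.Random(seed)  # seeded RNG: same seed -> same "random" bits
    base = inputs["budget"] * 0.3 + inputs["track_record"] * 0.7
    return base + rng.uniform(-0.01, 0.01)

# At decision time: log everything needed to reproduce the machine's state.
audit_record = {
    "inputs": {"budget": 4.0, "track_record": 7.5},
    "seed": 20240101,
}
original = score_application(audit_record["inputs"], audit_record["seed"])

# Later, when an explanation is demanded: replay from the stored record
# (round-tripped through JSON as if it had been read back from disk).
restored = json.loads(json.dumps(audit_record))
replayed = score_application(restored["inputs"], restored["seed"])
assert replayed == original  # bit-identical reproduction
```

If you can't make this assertion pass for every decision, you can't honestly claim to explain any individual decision after the fact.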
For algorithms with no blackbox component - stuff which isn't like neural nets - making sure that the people who develop the machine could in principle extract every mapping applied to the input data is a sufficient condition for being able to develop an explanation of why it behaved towards a user the way it did. For neural nets, the mappings are too high dimensional for even the propagation rules to be comprehensible if you rawdog them (engage with them without simplification).
If there is a small set of parameters - like model coefficients and tuning parameters - which themselves have a theoretical and practical interpretation, I believe that more than suffices for the explainability requirement on the user's end, especially if you can summarise to the user what they mean and how they went into the decision. I can provide a worked example if this is not clear - think model coefficients in a linear model and how the user's inputs map to the derived output through that model.
That last approach just isn't available to you if you've got a blackbox. My knowledge here is 3 years out of date, but I remember trying to find citable statistics of the above form for neural network output. My impression from the literature was that there was no consensus on whether this was even possible in principle, and that the bleeding edge for neural network explainability was metamodelling approaches: shoehorning in relatively explainable models by assessing the blackbox's predictions in constrained scenarios and deriving summary characteristics of the above form from them.
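A toy sketch of the metamodelling idea, assuming you can only query the blackbox, never inspect it (the sigmoid below is an invented stand-in, not any particular model): probe it over a constrained slice of the input space and fit a simple linear surrogate, whose slope is the kind of citable summary statistic you're after.

```python
import math

def blackbox(years_experience: float, n_keywords: float) -> float:
    # Stand-in for an opaque model: we can query it but not read its weights.
    return 1.0 / (1.0 + math.exp(-(0.8 * years_experience + 0.1 * n_keywords - 4.0)))

# Constrained scenario: vary one input over a range, hold the other fixed.
xs = [i * 0.5 for i in range(21)]               # years_experience in [0, 10]
ys = [blackbox(x, n_keywords=5.0) for x in xs]  # n_keywords pinned at 5

# Fit a least-squares linear surrogate to the probed predictions.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
# The slope is the summary characteristic: "over this range, one extra year
# of experience raised the score by roughly `slope`, other inputs held fixed".
```

The obvious caveat, which is why I say "shoehorning": the surrogate only describes the blackbox over the constrained slice you probed, not globally.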
I think constrained predictive experiments of that kind are how you get to conclusions like the resume bias for manly-man language (masculine-coded wording). The paper "Discriminating Systems" is great on this, if you've not read it, though it doesn't go into the maths in much detail.
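The simplest version of such an experiment is a counterfactual probe: score the same resume twice with only the gendered wording swapped. A toy sketch (the screener below is an invented stand-in, deliberately biased so there's something to detect; a real experiment would query the deployed model's API):

```python
# Wordlist is illustrative only, loosely in the spirit of "masculine-coded
# language" studies; a real experiment would use a validated lexicon.
MASCULINE_CODED = {"competitive", "dominant", "leader"}

def toy_screener(resume_text: str) -> float:
    # Stand-in for an opaque resume model that (undesirably) rewards
    # masculine-coded wording.
    words = resume_text.lower().split()
    return 0.5 + 0.1 * sum(w in MASCULINE_CODED for w in words)

original = "competitive leader who shipped three products"
counterfactual = "collaborative mentor who shipped three products"

gap = toy_screener(original) - toy_screener(counterfactual)
# A nonzero gap between minimally different inputs is evidence that the
# wording, not the substance, is driving the score.
```

Aggregate that gap over many resume pairs and you have the citable statistic.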
3. biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation, except for uses in the area of law enforcement; — Benkei
The language on that one might be a bit difficult to pin down. If you end up collecting data at scale, especially data with a demographic component, you end up with something that can be predictive of that protected information, especially if you're collecting user telemetry from a mobile app.
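A small synthetic demonstration of the proxy problem, with made-up telemetry in which usage time happens to correlate with a demographic attribute that is never stored: a one-rule "model" trained only on the telemetry recovers the attribute well above chance.

```python
import random

random.seed(0)

# Synthetic telemetry: app-usage hour correlates with an unrecorded
# demographic attribute. Only the telemetry would ever be stored.
population = []
for _ in range(1000):
    group = random.random() < 0.5               # protected attribute (unstored)
    hour = random.gauss(21 if group else 9, 3)  # usage time correlates with it
    population.append((hour, group))

# A single threshold on the telemetry already infers the attribute.
threshold = 15.0
correct = sum((hour > threshold) == group for hour, group in population)
accuracy = correct / len(population)
# accuracy far above 0.5 means the telemetry is predictive of the
# protected data, even though the attribute itself was never collected.
```

So a rule phrased as "systems that infer protected attributes from biometric data" can be satisfied on paper while the collected data remains predictive of exactly those attributes.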
All AI-generated data should be extensively tested and reviewed for accuracy before actual use or distribution. Proper oversight of AI outputs includes evaluating for potential bias, discrimination, inaccuracies, or misuse. The data generated should be auditable and traceable through every stage of its development. — Benkei
To prevent that from being a box-ticking exercise, it seems necessary to make sure there are predefined steps for the assessment of any given tool.
Human oversight is critical to ensuring the ethical application of AI. Ethical AI prioritizes doing no harm by protecting intellectual property, safeguarding privacy, promoting responsible and respectful use, and preventing bias, discrimination, and inaccuracies. It also ensures accountability, responsibility, and transparency, aligning with core principles of ethical conduct. — Benkei
That one is hard to make sufficiently precise, I imagine. I remember discussions at my old workplace along the lines of: "if we require output to be ethical and explainable and well sourced, how the hell can we use Google or public code repositories in development, even when they're necessary?"