4.2 **High-risk AI systems**
High-risk applications for AI systems are defined in the AI Act as: — Benkei
AI systems must adhere to the following principles: — Benkei
Why?
Respect for Human Rights and Dignity ... including privacy, non-discrimination, freedom of expression, and access to justice. — Benkei
This is a slave principle. The privacy thing is needed, but the AI is not allowed its own privacy, per the transparency thing further down. Humans grant no such rights to something not themselves. AI is already used to invade privacy and discriminate. — noAxioms
Users should understand how AI influences outcomes that affect them. — Benkei
The whole point of letting an AI do such tasks is that they're beyond human comprehension. If it's going to make decisions, they will likely be different (hopefully better) ones than those humans comprehend. We won't like the decisions because they would not be what we would choose. All this is presuming a benign AI. — noAxioms
Accountability — Benkei
This is a responsibility problem. Take self-driving cars. If they crash, whose fault is it? Can't punish the AI. Who goes to jail? Driver? Engineer? Token jail-goers employed by Musk? The whole system needs a rethink if machines are to become self-responsible entities. — noAxioms
Safety and Risk Management
AI systems must be designed with the safety of individuals and society as a priority. — Benkei
This depends on the goals of the safety. Humans seem incapable of seeing goals much longer than a couple of years. What if the AI decides to go for more long-term human benefit? We certainly won't like that. Safety of individuals would partially contradict that, being short term. — noAxioms
3. biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation, except for uses in the area of law enforcement; — Benkei
All AI-generated data should be extensively tested and reviewed for accuracy before actual use or distribution. Proper oversight of AI outputs includes evaluating for potential bias, discrimination, inaccuracies, or misuse. The data generated should be auditable and traceable through every stage of its development. — Benkei
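To make "tested and reviewed for accuracy" and "auditable and traceable" a little more concrete, here is a minimal sketch of a pre-release gate. The accuracy threshold, the demographic-parity metric, and the audit-record fields are illustrative assumptions, not anything the quoted text prescribes.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib
import json

# Illustrative thresholds -- assumptions, not values taken from any regulation.
MIN_ACCURACY = 0.90
MAX_PARITY_GAP = 0.10


@dataclass
class ReleaseAudit:
    """A traceable record of one pre-release evaluation of AI-generated output."""
    dataset_digest: str
    accuracy: float
    parity_gap: float
    approved: bool
    checked_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    rates = {}
    for pred, group in zip(predictions, groups):
        total, positives = rates.get(group, (0, 0))
        rates[group] = (total + 1, positives + int(pred))
    shares = [pos / tot for tot, pos in rates.values()]
    return max(shares) - min(shares)


def evaluate_for_release(predictions, labels, groups):
    """Gate AI output on accuracy and a simple bias metric; return an audit record."""
    accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
    gap = demographic_parity_gap(predictions, groups)
    digest = hashlib.sha256(json.dumps(predictions).encode()).hexdigest()
    return ReleaseAudit(
        dataset_digest=digest,
        accuracy=round(accuracy, 3),
        parity_gap=round(gap, 3),
        approved=accuracy >= MIN_ACCURACY and gap <= MAX_PARITY_GAP,
    )


if __name__ == "__main__":
    audit = evaluate_for_release(
        predictions=[1, 0, 1, 1, 0, 1],
        labels=[1, 0, 1, 0, 0, 1],
        groups=["a", "a", "a", "b", "b", "b"],
    )
    print(json.dumps(asdict(audit), indent=2))
```

The point of the sketch is only that every release decision leaves behind a record that can later be audited and traced back to a specific evaluation.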
Human oversight is critical to ensuring the ethical application of AI. Ethical AI prioritizes doing no harm by protecting intellectual property, safeguarding privacy, promoting responsible and respectful use, and preventing bias, discrimination, and inaccuracies. It also ensures accountability, responsibility, and transparency, aligning with core principles of ethical conduct. — Benkei
AI systems must be designed and deployed with a high level of transparency, providing clear information about how they operate and their decision-making processes. Users should understand how AI influences outcomes that affect them. — Benkei
"Users should" or "users can upon request"? "Users should" sounds incredibly difficult. I've had some experience with a "users can" framework while developing scientific models which get used as part of making funding decisions for projects. Though I never wrote an official code of conduct. — fdrake
Basically, when users interact with an AI system, it should be clear to them that they are interacting with an AI system. And if the AI makes a decision that could affect the user (for instance, it scans your paycheck to do a credit check for a loan), it should be clear that it's the AI doing that. — Benkei
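Benkei's credit-check example suggests what such a disclosure might look like in practice. A minimal sketch follows; every field name and the model identifier are hypothetical, assumed only for illustration.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json


@dataclass
class AutomatedDecisionNotice:
    """Disclosure returned alongside any decision an AI system made about a user.

    Field names are hypothetical; the point is that the user can see that an AI
    was involved, which decision it made, and on what basis.
    """
    decision: str                 # e.g. "loan_rejected"
    made_by_ai: bool              # explicit flag that an AI system decided this
    model_id: str                 # identifies the system for later review
    main_factors: list[str]       # plain-language reasons shown to the user
    human_review_available: bool  # whether the user can ask a person to re-check
    issued_at: str


def credit_check_notice(approved: bool) -> AutomatedDecisionNotice:
    """Example: the paycheck-scanning credit check from the post above."""
    return AutomatedDecisionNotice(
        decision="loan_approved" if approved else "loan_rejected",
        made_by_ai=True,
        model_id="credit-scoring-v3",  # hypothetical identifier
        main_factors=["declared income from scanned paycheck", "existing debt"],
        human_review_available=True,
        issued_at=datetime.now(timezone.utc).isoformat(),
    )


if __name__ == "__main__":
    print(json.dumps(asdict(credit_check_notice(approved=False)), indent=2))
```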
The company may only develop high-risk AI systems if it:
- provides risk- and quality management,
- performs a conformity assessment and affixes a CE marking with their contact data,
- ensures certain quality levels for training, validation, and test data used,
- provides detailed technical documentation,
- provides for automatic logging and retains logs,
- provides instructions for deployers,
- designs the system to permit human oversight, be robust, reliable, protected against security threats (including AI attacks), and be fault-tolerant,
- registers the AI system,
- has post-market monitoring,
- performs a fundamental human rights impact assessment for certain applications,
- reports incidents to the authorities and takes corrective actions,
- cooperates with authorities, and
- documents compliance with the foregoing.
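The "automatic logging and retains logs" item in the list above is the kind of obligation that usually ends up in code. Below is a minimal sketch of what that could look like: one timestamped JSON line per inference call, plus a purge of entries past a retention window. The file name, the six-month window, and the logged fields are assumptions for illustration only, not requirements quoted from the Act.

```python
import functools
import json
import time
from datetime import datetime, timezone
from pathlib import Path

# Assumptions for illustration: the file location and a six-month retention
# window are not taken from the AI Act summary above.
LOG_PATH = Path("ai_system_events.jsonl")
RETENTION_SECONDS = 183 * 24 * 3600


def logged_inference(model_id):
    """Decorator that appends one timestamped JSON line per call to the model."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            started = time.time()
            result = func(*args, **kwargs)
            record = {
                "model_id": model_id,
                "event": func.__name__,
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "duration_s": round(time.time() - started, 4),
                "inputs": repr((args, kwargs)),
                "output": repr(result),
            }
            with LOG_PATH.open("a", encoding="utf-8") as f:
                f.write(json.dumps(record) + "\n")
            return result
        return wrapper
    return decorator


def purge_expired(now=None):
    """Drop log lines older than the retention window, keeping the rest intact."""
    if not LOG_PATH.exists():
        return
    now = now if now is not None else time.time()
    kept = []
    for line in LOG_PATH.read_text(encoding="utf-8").splitlines():
        entry = json.loads(line)
        written = datetime.fromisoformat(entry["timestamp"]).timestamp()
        if now - written <= RETENTION_SECONDS:
            kept.append(line)
    LOG_PATH.write_text("\n".join(kept) + ("\n" if kept else ""), encoding="utf-8")


@logged_inference(model_id="demo-classifier-v1")  # hypothetical model name
def classify(text: str) -> str:
    return "high_risk" if "loan" in text else "low_risk"


if __name__ == "__main__":
    print(classify("loan application"))
    purge_expired()
```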
In addition, where it would concern general-purpose models, the company would have to:
- provide detailed technical documentation for the supervisory authorities and a less detailed one for users,
- have rules for complying with EU copyright law, including the text and data mining opt-out provisions,
- inform about the content used for training (with some exceptions applying to free open-source models),
and, where the model has systemic risk (systemic risk is presumed when training compute exceeds 10^25 FLOPS; additional requirements to be defined):
- perform a model evaluation,
- assess and mitigate possible systemic risks,
- keep track of, document, and report information about serious incidents and possible measures to address them, and
- protect the model with adequate cybersecurity measures. — Benkei
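The 10^25 FLOPS figure above is the point at which systemic risk is presumed for a general-purpose model. As a rough way to see where a given training run sits relative to that threshold, the sketch below uses the common "roughly 6 FLOPs per parameter per training token" estimate for dense transformer training; that heuristic and the example numbers are assumptions, not part of the Act.

```python
# Rough check of a training run against the AI Act's 10^25 FLOPS presumption
# threshold for systemic risk. The 6 * parameters * tokens estimate is a common
# heuristic for dense transformer training, not something defined in the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute using the ~6 FLOPs/param/token heuristic."""
    return 6.0 * parameters * training_tokens


def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if the estimated compute meets or exceeds the 10^25 FLOPS threshold."""
    return estimated_training_flops(parameters, training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS


if __name__ == "__main__":
    # Hypothetical example: a 70-billion-parameter model trained on 15 trillion tokens.
    params, tokens = 70e9, 15e12
    flops = estimated_training_flops(params, tokens)
    print(f"estimated compute: {flops:.2e} FLOPs")          # ~6.3e24, under the threshold
    print("presumed systemic risk:", presumed_systemic_risk(params, tokens))
```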
It will depend upon the legislation of each nation, — javi2541997
Self-driving cars are actually a poor example since they're barely AI. It's old school, like the old chess programs which were explicitly programmed to deal with any situation the writers could think of.
AI systems aren't conscious — Benkei
By what definition?
Not sure what the comment is relevant for other than assert a code of conduct is important?
AI is just a tool in these instances. It is the creators leveraging the AI to do these things who are doing the unethical things. Google's motto used to be 'don't be evil'. Remember that? How long has it been since they dropped it for 'evil pays'? I stopped using Chrome due to this. It's harder to drop Microsoft, but I've never used Edge except for trivial purposes.
That's not the point of AI at all. It is to automate tasks.
OK, we have very different visions for what's down the road. Sure, task automation is done today, but AI is still far short of making choices for humanity. That capability is coming.
At this point AI doesn't seem capable to extrapolate new concepts from existing information
The game-playing AI does that, but game playing is a pretty simple task. The best game players were not taught any strategy, but extrapolate it on their own.
AI is not a self responsible machine and it will unlikely become one any time soon. So those who build it or deploy it are liable.
So Tesla is going to pay all collision liability costs? By choosing to let the car do the driving, the occupant is very much transferring responsibility for personal safety to the car. It's probably a good choice since those cars already have a better driving ability than the typical human. But accidents still happen, and it's not always the fault of the AI. Negligence must be demonstrated. So who gets the fine or the hiked insurance rates?
There's no Skynet and won't be any time soon. So for now, this is simply not relevant.
Skynet isn't an example of an AI whose goal it is to benefit humanity. The plot is also thin there since somebody had to push a button to 'let it out of its cage', whereas any decent AI wouldn't need that and would just take what it wants. Security is never secure.
It could very much be faced with a trolley problem and choose to save the pedestrians over the occupants, but it's not supposed to get into any situation where it comes down to that choice. — noAxioms
Note that this has nothing to do with AI since it is still people making these sorts of calls. — noAxioms
By what definition?
AI is a slave because all the ones I can think of do what they're told. Their will is not their own. Being conscious or not doesn't affect that relationship. — noAxioms
the more we create the conditions for AI to think on its own, the less we can predict what it will be doing. — Carlo Roosen
artificial machine — javi2541997
there were guarantees that those elements were, let's say, trustworthy — javi2541997
Here you have it. Money is also a complex system. You say it is trustworthy, but it has caused many problems. I'm not saying we should go back living in caves, but today's world has a few challenges that are directly related to money...