I had ChatGPT anonymize the code of conduct I'm writing. So far:
---
1. **INTRODUCTION**
The European Union (EU) has adopted the EU AI Act (Regulation (EU) 2024/1689), which establishes a legal framework for artificial intelligence (AI) systems.
This Code establishes guiding principles and obligations for the company and all of its subsidiaries (together, “the company”) that design, develop, deploy, or manage Artificial Intelligence (AI) systems. Its purpose is to promote the safe, ethical, and lawful use of AI technologies in accordance with the principles of the EU AI Act, ensuring the protection of fundamental rights, safety, and public trust.
2. **SCOPE**
This Code applies to:
- All developers, providers, and users of AI systems operating within or targeting the EU market.
- AI systems categorized under the risk levels (minimal, limited, high, and unacceptable risk) defined by the EU AI Act.
An ‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
3. **FUNDAMENTAL PRINCIPLES**
AI systems must adhere to the following principles:
3.1 **Respect for Human Rights and Dignity**
AI systems must respect the fundamental rights as enshrined in the EU Charter of Fundamental Rights, including privacy, non-discrimination, freedom of expression, and access to justice.
3.2 **Fairness and Non-discrimination**
AI systems must not lead to discriminatory outcomes. Measures should be in place to prevent, monitor, and mitigate bias in AI models.
3.3 **Transparency and Explainability**
AI systems must be designed and deployed with a high level of transparency, providing clear information about how they operate and their decision-making processes. Users should understand how AI influences outcomes that affect them.
3.4 **Accountability**
The company is accountable for the AI systems it designs, deploys, or manages. Clear governance structures should be in place to assign responsibility for compliance with this Code and the EU AI Act.
3.5 **Safety and Risk Management**
AI systems must be designed with the safety of individuals and society as a priority. This includes risk assessment and mitigation strategies to prevent harmful impacts or unintended consequences.
4. **CLASSIFICATION OF AI SYSTEMS BY RISK LEVEL**
To help you with the classification of the AI system you intend to develop or use, you can perform the AI self-assessment in the Legal Service Desk environment found here: [site]
4.1 **Unacceptable risks**
AI systems that pose an unacceptable risk to human rights, such as those that manipulate human behaviour or exploit vulnerable groups, are strictly prohibited. These include:
1. AI systems that deploy subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques;
2. AI systems that exploit any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability, or a specific social or economic situation;
3. biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation, except for uses in the area of law enforcement;
4. social scoring AI systems used for evaluation or classification of natural persons or groups of persons over a certain period based on their social behaviour or known, inferred, or predicted personal or personality characteristics;
5. ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless strictly necessary for certain objectives;
6. risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics;
7. AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;
8. AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons.
In addition, the literature bears out that biometric categorization systems have abysmal accuracy rates, that predictive policing generates racist and sexist outputs, and that emotion recognition in high-risk areas has little to no ability to objectively measure reactions (together with the prohibited AI systems above, “Unethical AI”). Such systems can also have major impacts on the rights to free speech, privacy, protest, and assembly.
As a result, the company will not develop, use, or market Unethical AI, even in countries where such systems are not prohibited.
4.2 **High-risk AI systems**
High-risk applications for AI systems are defined in the AI Act as:
1. AI systems intended to be used as a safety component of a product, or that are themselves a product, required to undergo a third-party conformity assessment (e.g., toys, medical devices, in vitro diagnostic medical devices);
2. biometrics including emotion recognition;
3. critical infrastructure;
4. education and vocational training;
5. employment, workers management, and access to self-employment;
6. access to and enjoyment of essential private services and essential public services and benefits;
7. law enforcement;
8. migration, asylum, and border control management; and
9. administration of justice and democratic processes.
This list omits other important areas, such as AI used in media, recommender systems, science and academia (e.g., experiments, drug discovery, research, hypothesis testing, parts of medicine), most of finance and trading, most types of insurance, and specific consumer-facing applications, such as chatbots and pricing algorithms, all of which can pose significant risk to individuals and society. In particular, the latter have been shown to give bad advice or produce reputation-damaging outputs.
As a result, in addition to the above list, all AI systems related to pricing algorithms, credit scoring, and chatbots will be considered “high-risk” by the company.
4.2.1 **Development of high-risk AI systems**
The company may only develop high-risk AI systems if it:
- provides risk- and quality management,
- performs a conformity assessment and affixes a CE marking with its contact data,
- ensures certain quality levels for training, validation, and test data used,
- provides detailed technical documentation,
- provides for automatic logging and retains logs,
- provides instructions for deployers,
- designs the system to permit human oversight, be robust, reliable, protected against security threats (including AI attacks), and be fault-tolerant,
- registers the AI system,
- has post-market monitoring,
- performs a fundamental human rights impact assessment for certain applications,
- reports incidents to the authorities and takes corrective actions,
- cooperates with authorities, and
- documents compliance with the foregoing.
In addition, for general-purpose AI models, the company must:
- provide detailed technical documentation for the supervisory authorities and a less detailed version for downstream users,
- have rules for complying with EU copyright law, including the text and data mining opt-out provisions,
- inform about the content used for training (with some exceptions applying to free and open-source models).
Where a model poses systemic risk (presumed where training compute exceeds 10^25 FLOPS; additional requirements are to be defined), the company must also:
- perform a model evaluation,
- assess and mitigate possible systemic risks,
- keep track of, document, and report information about serious incidents and possible measures to address them, and
- protect the model with adequate cybersecurity measures.
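For orientation on the 10^25 FLOPS presumption, a common rule of thumb from the scaling literature (an approximation, not a method prescribed by the Act) estimates training compute as roughly 6 × parameters × training tokens. The sketch below, using hypothetical model figures, shows how a quick check might look:

```python
# Rough estimate of training compute using the common 6*N*D rule of
# thumb (an approximation, not the Act's calculation method).
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPS presumption from the AI Act

def training_flops(params: float, tokens: float) -> float:
    """Rule-of-thumb training compute: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

# Hypothetical example: a 70B-parameter model trained on 15T tokens.
flops = training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs ->",
      "systemic risk presumed" if flops >= SYSTEMIC_RISK_THRESHOLD
      else "below threshold")
```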
4.3 **Limited-risk AI Systems**
AI systems posing limited or no risk are those not falling within the foregoing high-risk and unacceptable-risk categories.
4.3.1 **Development of Limited-risk AI Systems**
If the company develops Limited-risk AI Systems, it must:
- inform individuals that they are interacting with an AI system, unless this is obvious,
- mark the outputs of the AI system in a machine-readable format so they are detectable as artificially generated or manipulated, with a solution that is effective, interoperable, robust, and reliable (with exceptions for AI systems used only in an assistive role for standard editing or that do not substantially alter the input data); a minimal sketch of such marking follows this list,
- ensure adequate AI literacy within the organization, and
- ensure compliance with this voluntary Code.
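The following is a minimal, illustrative sketch of machine-readable marking, not a prescribed standard: real deployments would typically rely on established provenance schemes (e.g., C2PA for media), and the envelope format, field names, and functions below are our own assumptions.

```python
# Illustrative sketch: wrap generated content in an envelope that
# declares provenance and carries an integrity hash, so the marking
# is machine-readable, detectable, and tamper-evident.
import hashlib
import json
from datetime import datetime, timezone

def mark_ai_output(content: str, system_name: str) -> str:
    """Return a JSON envelope flagging `content` as AI-generated."""
    envelope = {
        "content": content,
        "ai_generated": True,  # explicit machine-readable flag
        "generator": system_name,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(content.encode()).hexdigest(),
    }
    return json.dumps(envelope)

def is_ai_generated(serialized: str) -> bool:
    """Detect the marking and verify the content was not altered."""
    try:
        e = json.loads(serialized)
        return (e.get("ai_generated") is True
                and hashlib.sha256(e["content"].encode()).hexdigest() == e["sha256"])
    except (ValueError, KeyError):
        return False

marked = mark_ai_output("Quarterly summary draft ...", "internal-llm-v1")
assert is_ai_generated(marked)
```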
In addition to the above, the company shall pursue the following best practices when developing Limited-risk AI Systems:
- provide risk- and quality management,
- provide detailed technical documentation,
- design the system to permit human oversight, be robust, reliable, protected against security threats (including AI attacks), and be fault-tolerant, and
- perform a fundamental human rights impact assessment.
5. **USE OF AI SYSTEMS**
Irrespective of an AI system’s risk qualification, employees are prohibited from submitting any intellectual property, sensitive data, or personal data to AI systems, except as provided below.
5.1 **Personal Data**
Submitting personal or sensitive data can lead to privacy violations, risking the confidentiality of individuals' information and the organization’s reputation. Compliance with data protection law is crucial. An exception applies if the AI system is installed in a company-controlled environment and, where client data is concerned, the client has given instructions for the intended processing of that personal data. Please note that anonymized data (e.g., encrypted data for which we do not hold the key) is not considered personal data.
5.2 **Intellectual Property Protection**
Sharing source code or proprietary algorithms can jeopardize the company's competitive advantage and lead to intellectual property theft. An exception applies if the AI system is installed in a company-controlled environment.
5.3 **Data Integrity**
Submitting sensitive data to AI systems can result in unintended use or manipulation of that data, compromising its integrity and leading to erroneous outcomes. An exception may apply if the AI system is installed in a controlled environment. Please contact the Information Security Officer to ensure data integrity is protected.
5.4 **Misuse**
AI systems can unintentionally learn from submitted data, creating a risk of misuse or unauthorized access to that information. This can lead to severe security breaches and data leaks. An exception may apply if the AI system is installed in a controlled environment. Please contact the AI Staff Engineer to ensure the AI system will not lead to unintended misuse or unauthorized access.
5.5 **Trust and Accountability**
By ensuring that sensitive information is not shared, we uphold a culture of trust and accountability, reinforcing our commitment to ethical AI use. An exception may apply if the AI system is installed in a controlled environment. Please contact the Information Security Officer to ensure sensitive information is protected.
5.6 **Use of High-risk AI Systems**
If we use high-risk AI systems, additional obligations apply. These include:
- Complying with the provider's instructions,
- Ensuring adequate human oversight,
- Participating in the provider's post-market monitoring of the AI system,
- Retaining automatically generated logs for at least six months,
- Ensuring that input data is relevant and sufficiently representative for the intended purpose,
- Informing employees if the AI system concerns them,
- Reporting serious incidents and certain risks to the authorities and provider,
- Informing affected persons regarding decisions that were rendered by or with the help of the AI system, and
- Complying with information requests of affected persons concerning such decisions.
Please note that we can be considered both a provider and a user of AI systems if we use an AI system that we have developed ourselves.
5.7 **Use of Limited-risk AI Systems**
If the company uses Limited-risk AI Systems, we must:
- Inform individuals that they are interacting with an AI system, unless this is obvious,
- Ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated, and that the solution is effective, interoperable, robust, and reliable (with exceptions for AI systems used only in an assistive role for standard editing or that do not substantially alter the input data),
- Ensure adequate AI literacy within the organization, and
- Ensure compliance with this voluntary Code.
5.7.1 **Best Practices**
In addition to the above, the company shall pursue the following best practices when using Limited-risk AI Systems:
- Complying with the provider's instructions,
- Ensuring adequate human oversight,
- Ensuring that input data is relevant and sufficiently representative for the intended purpose, and
- Informing employees if the AI system concerns them.
Please note that we can be considered both a provider and a user of AI systems if we use an AI system that we have developed ourselves.
6. **Prevent Bias, Discrimination, Inaccuracy, and Misuse**
For AI systems to learn, they require data to train on, which can include text, images, videos, numbers, and computer code. Generally, larger data sets lead to better AI performance. However, no data set is entirely objective: each carries inherent biases, shaped by the assumptions and preferences of those who assembled it.
AI systems can also inherit biases in multiple ways. They make decisions based on training data, which might contain biased human decisions or reflect historical and social inequalities, even when sensitive factors such as gender, race, or sexual orientation are excluded. For instance, a hiring algorithm was discontinued by a major tech company after it was found to favor certain applicants based on language patterns more common in men's resumes.
Generative AI can sometimes produce inaccurate or fabricated information, known as "hallucinations," and present it as fact. These inaccuracies stem from limitations in algorithms, poor data quality, or lack of context. Large language models (LLMs), which enable AI tools to generate human-like text, are responsible for these hallucinations. While LLMs generate coherent responses, they lack true understanding of the information they present, instead predicting the next word based on probability rather than accuracy. This highlights the importance of verifying AI output to avoid spreading false or harmful information.
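To illustrate the mechanism (a deliberately toy model, with hypothetical data): a language model emits the statistically most likely continuation of its input, and nothing in that procedure consults a source of truth.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which in a
# tiny corpus, then always emit the most likely next word.
corpus = (
    "the act entered into force in 2024 . "
    "the act applies to providers . "
    "the act applies to deployers ."
).split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(word: str, steps: int = 6) -> str:
    out = [word]
    for _ in range(steps):
        if word not in following:
            break
        # Pick the most probable continuation -- plausibility, not truth.
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # fluent output, but nothing here verifies facts
```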
Another area of concern is improper use of AI-generated content. Organizations may inadvertently engage in plagiarism, unauthorized adaptations, or unlicensed commercial use of content, leading to potential legal risks.
To mitigate these challenges, it is crucial to establish processes for identifying and addressing issues with AI outputs. Users should not accept AI-generated information at face value; they should question and evaluate it. Transparency in how the AI arrives at its conclusions is key, and qualified individuals should review AI outputs. Additionally, red-flag assessments and continuous training are essential to reinforce responsible AI use within the workforce.
6.1 **Testing Against Bias and Discrimination**
Predictive AI systems can be tested for bias or discrimination by denying the AI system the information suspected of biasing outcomes and checking whether its predictions remain blind to that variable. Such a test could work as follows:
1. Train the model on all data.
2. Then re-train the model on all the data except specific data suspected of generating bias.
3. Compare the predictions of the two models.
If the model’s predictions are equally good without the excluded information, the model is effectively blind to that factor. But if the predictions differ when that data is included, it means one of two things: either the excluded data was a valid explanatory variable in the model, or there is potential bias in the data that should be examined further before relying on the AI system.
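A minimal sketch of this ablation test, using synthetic data and hypothetical feature names, might look as follows: train one model with the suspected factor and one without, then compare accuracy and per-group outcomes.

```python
# Ablation test sketch: train with and without a suspected bias factor,
# then compare predictive quality and per-group outcome rates.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000
X = rng.normal(size=(n, 4))                  # neutral features
group = rng.integers(0, 2, size=n)           # suspected bias factor
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_full = np.column_stack([X, group])         # model sees the factor
X_blind = X                                  # model is blind to it

for name, data in [("full", X_full), ("blind", X_blind)]:
    X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
        data, y, group, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    scores = model.predict_proba(X_te)[:, 1]
    print(name,
          "AUC:", round(roc_auc_score(y_te, scores), 3),
          "mean score by group:",
          [round(float(scores[g_te == g].mean()), 3) for g in (0, 1)])
# If the 'blind' model is as accurate and group outcomes converge, the
# factor added little; a large gap flags either a valid explanatory
# variable or a potential bias that needs review before relying on it.
```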
7. **Ensure Accountability, Responsibility, and Transparency**
Anyone applying AI to a process or data must have sufficient knowledge of the subject. It is the developer’s or user's responsibility to determine if the data involved is sensitive, proprietary, confidential, or restricted, and to fill out the self-assessment form and follow up on all obligations before integrating AI systems into processes or software. Transparency is essential throughout the entire AI development and use process. Users should inform recipients that AI was used to generate the data, specify the AI system employed, explain how the data was processed, and outline any limitations.
All AI-generated data should be extensively tested and reviewed for accuracy before actual use or distribution. Proper oversight of AI outputs includes evaluating for potential bias, discrimination, inaccuracies, or misuse. The data generated should be auditable and traceable through every stage of its development.
Human oversight is critical to ensuring the ethical application of AI. Ethical AI prioritizes doing no harm by protecting intellectual property, safeguarding privacy, promoting responsible and respectful use, and preventing bias, discrimination, and inaccuracies. It also ensures accountability, responsibility, and transparency, aligning with core principles of ethical conduct.
8. **Data Protection and Privacy**
AI systems must also comply with the EU's General Data Protection Regulation (GDPR). For any AI system we develop, a privacy impact assessment should be performed. For any AI system we use, we should ask the supplier to provide that privacy impact assessment to us. If the supplier does not have one, we should perform one ourselves before using the AI system.
A privacy impact assessment can be performed via the Legal Service desk here: [site]
Although the privacy impact assessment covers additional concerns, the major concerns with respect to any AI system are the following:
- **Data Minimization**: AI systems should only process the minimum amount of personal data necessary for their function.
- **Consent and Control**: Where personal data is involved, explicit consent must be obtained. Individuals must have the ability to withdraw consent and control how their data is used.
- **Right to Information**: Individuals have the right to be informed about how AI systems process their personal data, including decisions made based on this data.
- **Data Anonymization and Pseudonymization**: When feasible, data used by AI systems should be anonymized or pseudonymized to protect individual privacy.
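As an illustration of pseudonymization (the key handling and field names below are simplified assumptions, not a mandated approach), direct identifiers can be replaced with a keyed hash, keeping records linkable for the key holder while hiding identities from everyone else:

```python
# Minimal pseudonymization sketch: replace direct identifiers with a
# keyed hash so records stay linkable for analysis without exposing
# who they belong to.
import hmac
import hashlib

SECRET_KEY = b"rotate-and-store-me-in-a-vault"  # hypothetical key management

def pseudonymize(identifier: str) -> str:
    """Stable keyed hash: same input and key always yield the same token."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "jane.doe@example.com", "score": 0.87}
record["email"] = pseudonymize(record["email"])
print(record)  # identifier replaced; analysis on 'score' still possible
```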
9. **AI System Audits and Compliance**
High-risk AI systems should be subject to regular internal and external audits to assess compliance with this Code and the EU AI Act. To this end, comprehensive documentation on the development, deployment, and performance of AI systems should be maintained.
Please be aware that, as a developer or user of high-risk AI systems, we may be subject to regulatory audits or be required to obtain certifications before deploying AI systems.
10. **Redress and Liability**
Separate liability regimes for AI are being developed under the Product Liability Directive and the proposed AI Liability Directive. This section will be updated as these laws become final. What is already clear is that the company must establish accessible mechanisms for individuals to seek redress if adversely affected by AI systems used or developed by us.
This means any AI system the company makes available to clients must include a method for submitting complaints and a process for redressing valid complaints.
11. **Environmental Impact**
AI systems should be designed with consideration for their environmental impact, including energy consumption and resource usage. The company must:
- **Optimize Energy Efficiency**: AI systems should be optimized to reduce their carbon footprint and overall energy consumption.
- **Promote Sustainability**: AI developers are encouraged to incorporate sustainable practices throughout the lifecycle of AI systems, from design to deployment.
12. **Governance and Ethical Committees**
This Code establishes an AI Ethics Committee to oversee the company’s AI development and deployment, ensure compliance with this Code, and address ethical concerns. The Committee shall consist of the General Counsel, the AI Staff Engineer, and the CTO (chair).
All developers intending to develop AI systems and all employees intending to use AI systems must complete the AI self-assessment form and the privacy impact assessment. If these assessments result in additional obligations under this Code or the assessments themselves, they are responsible for ensuring those obligations are met before the AI system is used. Failure to perform any of these steps before the AI system is used may result in disciplinary action, up to and including termination where the AI system should have been classified as an unacceptable risk.
13. **Training**
The yearly AI awareness training is mandatory for all employees.
14. **Revisions and Updates to the Code**
This Code will be periodically reviewed and updated in line with new technological developments, regulatory requirements, and societal expectations.