• Benkei
    7.7k
    Does anybody have any experience with drafting an AI Code of Conduct?
  • Hanover
    12.9k
    Does anybody have any experience with drafting an AI Code of Conduct?Benkei

    ChatGPT can provide you with one if you just ask.
  • Echarmion
    2.7k
    Does anybody have any experience with drafting an AI Code of Conduct?Benkei

    Like a code of conduct for how and when AI systems can be employed?
  • Benkei
    7.7k
    How to develop and use AI systems, what you shouldn't do, what you ought to do, etc.

    EDIT: the "how" obviously doesn't pertain to the technical part, but to what types of AI system are allowed, what needs to be in place to ensure the end result would be ethical, that sort of "how".
  • Echarmion
    2.7k


    I'm aware of some regulatory approaches (e.g. by the EU), but they're very general and concerned mostly with data protection, which does not sound like what you're looking for.

    It sounds to me like you're looking for something like guidelines for AI "alignment", that is how to get AI to follow instructions faithfully and act according to human interests while doing so.

    I think you'd need a fair bit of technical background to get something useful done in that area. There currently seem to be two sides to the debate. One side thinks alignment will work more or less like the normal tuning of an AI model (e.g. AI Optimism), and therefore advocates mostly practical research to refine current techniques.

    The other side thinks that a capable AI will try to become as powerful as possible as a general goal ("instrumental convergence") and hence that there's a lot of theoretical work to be done to figure out how an AI could do that. I only know of some forums which lean heavily into this, e.g. LessWrong and effective altruism. There's lots of debate there, though I can't really assess its quality.
  • Benkei
    7.7k
    Yes, it has come about due to the EU AI Act, which recommends writing a code of conduct for developers and "users" (or providers and deployers). We developed our first AI tool, which estimates ticket resolution times based on ticket type; it was a limited-risk tool (no personal data, no decision-making).
  • Benkei
    7.7k
    As introduction I have this:

    AI systems must adhere to the following principles:

    Respect for Human Rights and Dignity
    AI systems must respect the fundamental rights as enshrined in the EU Charter of Fundamental Rights, including privacy, non-discrimination, freedom of expression, and access to justice.

    Fairness and Non-discrimination
    AI systems must not lead to discriminatory outcomes. Measures should be in place to prevent, monitor, and mitigate bias in AI models.

    Transparency and Explainability
    AI systems must be designed and deployed with a high level of transparency, providing clear information about how they operate and their decision-making processes. Users should understand how AI influences outcomes that affect them.

    Accountability
    Ohpen is accountable for the AI systems it designs, deploys, or manages. Clear governance structures should be in place to assign responsibility for compliance with this Code and the EU AI Act.

    Safety and Risk Management
    AI systems must be designed with the safety of individuals and society as a priority. This includes risk assessment and mitigation strategies to prevent harmful impacts or unintended consequences.

    But translating this to conduct is another matter. I developed an AI self-assessment form in JIRA so that at least people can figure out whether what they want to use, implement, or develop is an unacceptable (prohibited), high, or limited risk. For high risk there are quite a few things to adhere to, which I set out, but that's not the extent of the relevant "conduct" you want a code of conduct to cover. The only useful thing I've found so far is a description of a method of testing to avoid bias and discrimination.
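
    To give a feel for what the form does, the triage logic boils down to something like the sketch below. The flag names are made up for illustration; the actual JIRA form asks more detailed questions.

    ```python
    # Rough sketch of the triage logic behind the self-assessment form.
    # Flag names are illustrative, not the actual form fields.

    def classify_ai_system(answers: dict) -> str:
        """Map self-assessment answers to an EU AI Act risk category."""
        prohibited_flags = [
            "manipulates_behaviour", "exploits_vulnerable_groups",
            "social_scoring", "untargeted_facial_scraping",
        ]
        high_risk_flags = [
            "safety_component", "biometrics", "critical_infrastructure",
            "education", "employment", "essential_services",
            "law_enforcement", "migration", "administration_of_justice",
        ]
        if any(answers.get(flag) for flag in prohibited_flags):
            return "unacceptable (prohibited)"
        if any(answers.get(flag) for flag in high_risk_flags):
            return "high risk"
        return "limited risk"

    print(classify_ai_system({"employment": True}))  # -> high risk
    print(classify_ai_system({}))                    # -> limited risk
    ```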
  • Benkei
    7.7k


    I had ChatGPT anonymize the code of conduct I'm writing. So far:

    ---

    1. **INTRODUCTION**
    The European Union (EU) has advanced the regulation of artificial intelligence (AI) through the EU AI Act (Regulation (EU) 2024/1689), which aims to establish a legal framework for AI systems.

    The Code establishes guiding principles and obligations for the company and all of its subsidiaries (together, “the company”) that design, develop, deploy, or manage Artificial Intelligence (AI) systems. The purpose is to promote the safe, ethical, and lawful use of AI technologies in accordance with the principles of the EU AI Act, ensuring the protection of fundamental rights, safety, and public trust.

    2. **SCOPE**
    This Code applies to:

    - All developers, providers, and users of AI systems operating within or targeting the EU market.
    - AI systems categorized under various risk levels (low, limited, high, and unacceptable risk) as defined by the EU AI Act.

    An ‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

    3. **FUNDAMENTAL PRINCIPLES**
    AI systems must adhere to the following principles:

    3.1 **Respect for Human Rights and Dignity**
    AI systems must respect the fundamental rights as enshrined in the EU Charter of Fundamental Rights, including privacy, non-discrimination, freedom of expression, and access to justice.

    3.2 **Fairness and Non-discrimination**
    AI systems must not lead to discriminatory outcomes. Measures should be in place to prevent, monitor, and mitigate bias in AI models.

    3.3 **Transparency and Explainability**
    AI systems must be designed and deployed with a high level of transparency, providing clear information about how they operate and their decision-making processes. Users should understand how AI influences outcomes that affect them.

    3.4 **Accountability**
    The company is accountable for the AI systems it designs, deploys, or manages. Clear governance structures should be in place to assign responsibility for compliance with this Code and the EU AI Act.

    3.5 **Safety and Risk Management**
    AI systems must be designed with the safety of individuals and society as a priority. This includes risk assessment and mitigation strategies to prevent harmful impacts or unintended consequences.

    4. **CLASSIFICATION OF AI SYSTEMS BY RISK LEVEL**
    To help you with the classification of the AI system you intend to develop or use, you can perform the AI self-assessment in the Legal Service Desk environment found here: [site]

    4.1 **Unacceptable risks**
    AI systems that pose an unacceptable risk to human rights, such as those that manipulate human behaviour or exploit vulnerable groups, are strictly prohibited. These include:

    1. subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques;
    2. an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability, or a specific social or economic situation;
    3. biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation, except for uses in the area of law enforcement;
    4. social scoring AI systems used for evaluation or classification of natural persons or groups of persons over a certain period based on their social behaviour or known, inferred, or predicted personal or personality characteristics;
    5. ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless strictly necessary for certain objectives;
    6. risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics;
    7. AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;
    8. AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons.

    In addition, the literature bears out that biometric categorisation systems have abysmal accuracy rates, predictive policing generates racist and sexist outputs, and emotion recognition in high-risk areas has little to no ability to objectively measure reactions (together with the prohibited AI systems, "Unethical AI"). They can also have major impacts on the rights to free speech, privacy, protest, and assembly.

    As a result, the company will not develop, use, or market Unethical AI, even in countries where such Unethical AI are not prohibited.

    4.2 **High-risk AI systems**
    High-risk applications for AI systems are defined in the AI Act as:

    1. AI systems that are intended to be used as a safety component of a product, or that are themselves a product, and that have to undergo a third-party conformity assessment (e.g., toys, medical devices, in vitro diagnostic medical devices);
    2. biometrics including emotion recognition;
    3. critical infrastructure;
    4. education and vocational training;
    5. employment, workers management, and access to self-employment;
    6. access to and enjoyment of essential private services and essential public services and benefits;
    7. law enforcement;
    8. migration, asylum, and border control management; and
    9. administration of justice and democratic processes.

    This list omits other important areas, such as AI used in media, recommender systems, science and academia (e.g., experiments, drug discovery, research, hypothesis testing, parts of medicine), most of finance and trading, most types of insurance, and specific consumer-facing applications, such as chatbots and pricing algorithms, which pose significant risks to individuals and society. In particular, the latter have been shown to provide bad advice or produce reputation-damaging outputs.

    As a result, in addition to the above list, all AI systems related to pricing algorithms, credit scoring, and chatbots will be considered “high-risk” by the company.

    4.2.1 **Development of high-risk AI systems**
    The company may only develop high-risk AI systems if it:

    - provides risk- and quality management,
    - performs a conformity assessment and affixes a CE marking with their contact data,
    - ensures certain quality levels for training, validation, and test data used,
    - provides detailed technical documentation,
    - provides for automatic logging and retains logs,
    - provides instructions for deployers,
    - designs the system to permit human oversight, be robust, reliable, protected against security threats (including AI attacks), and be fault-tolerant,
    - registers the AI system,
    - has post-market monitoring,
    - performs a fundamental human rights impact assessment for certain applications,
    - reports incidents to the authorities and takes corrective actions,
    - cooperates with authorities, and
    - documents compliance with the foregoing.

    In addition, where it would concern general-purpose models, the company would have to:

    - provide detailed technical documentation for the supervisory authorities and a less detailed one for users,
    - have rules for complying with EU copyright law, including the text and data mining opt-out provisions, and
    - inform about the content used for training (with some exceptions applying to free open-source models).

    Where the model has systemic risk (assumed where the compute used for training exceeds 10^25 FLOPS; additional requirements to be defined; a rough way of estimating this is sketched after this list), the company would also have to:

    - perform a model evaluation,
    - assess and mitigate possible systemic risks,
    - keep track of, document, and report information about serious incidents and possible measures to address them, and
    - protect the model with adequate cybersecurity measures.
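
    For orientation, whether a model crosses that threshold can be checked with a back-of-the-envelope estimate. The sketch below uses the common approximation of roughly 6 FLOPS per parameter per training token for dense models; this is a heuristic, not a method prescribed by the AI Act.

    ```python
    # Back-of-the-envelope check against the 10^25 FLOPS systemic-risk threshold.
    # Approximation: training FLOPs ~ 6 * parameters * training tokens (dense models).

    SYSTEMIC_RISK_THRESHOLD = 1e25

    def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
        return 6 * n_parameters * n_training_tokens

    # Example: a hypothetical 70B-parameter model trained on 15T tokens.
    flops = estimated_training_flops(70e9, 15e12)
    print(f"{flops:.2e} FLOPs -> systemic risk presumed: {flops >= SYSTEMIC_RISK_THRESHOLD}")
    # 6.30e+24 FLOPs -> systemic risk presumed: False
    ```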

    4.3 **Limited-risk AI Systems**
    AI systems posing limited or no risk are those not falling within the scope of the foregoing high-risk and unacceptable-risk categories.

    4.3.1 **Development of Limited-risk AI Systems**
    If the company develops Limited-risk AI Systems, then it should ensure the following:

    - ensure that individuals are informed that they are interacting with an AI system, unless this is obvious,
    - ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated, and that the solution is effective, interoperable, robust, and reliable (with exceptions for AI systems used only in an assistive role for standard editing or that do not substantially alter the input data),
    - ensure adequate AI literacy within the organization, and
    - ensure compliance with this voluntary Code.

    In addition to the above, the company shall pursue the following best practices when developing Limited-risk AI Systems:

    - provide risk- and quality management,
    - provide detailed technical documentation,
    - design the system to permit human oversight, be robust, reliable, protected against security threats (including AI attacks), and be fault-tolerant, and
    - perform a fundamental human rights impact assessment.

    5. **USE OF AI SYSTEMS**
    Irrespective of an AI system's risk classification, employees are prohibited from submitting any intellectual property, sensitive data, or personal data to AI systems.

    5.1 **Personal Data**
    Submitting personal or sensitive data can lead to privacy violations, risking the confidentiality of individuals' information and the organization’s reputation. Compliance with data protection is crucial. An exception applies if the AI system is installed in a company-controlled environment and, if it concerns client data, there are instructions from the client for the intended processing activity of that personal data. Please note that anonymized data (data for which we do not have the encryption key) is not considered personal data.

    5.2 **Intellectual Property Protection**
    Sharing source code or proprietary algorithms can jeopardize the company's competitive advantage and lead to intellectual property theft. An exception applies if the AI system is installed in a company-controlled environment.

    5.3 **Data Integrity**
    Submitting sensitive data to AI systems can result in unintended use or manipulation of that data, compromising its integrity and leading to erroneous outcomes. An exception may apply if the AI system is installed in a controlled environment. Please contact the Information Security Officer to ensure data integrity is protected.

    5.4 **Misuse**
    AI systems can unintentionally learn from submitted data, creating a risk of misuse or unauthorized access to that information. This can lead to severe security breaches and data leaks. An exception may apply if the AI system is installed in a controlled environment. Please contact the AI Staff Engineer to ensure the AI system will not lead to unintended misuse or unauthorized access.

    5.5 **Trust and Accountability**
    By ensuring that sensitive information is not shared, we uphold a culture of trust and accountability, reinforcing our commitment to ethical AI use. An exception may apply if the AI system is installed in a controlled environment. Please contact the Information Security Officer to ensure sensitive information is protected.

    5.6 **Use of High-risk AI Systems**
    If we use high-risk AI systems, then there are additional obligations on the use of such AI systems. These obligations include:

    - Complying with the provider's instructions,
    - Ensuring adequate human oversight,
    - Participating in the provider's post-market monitoring of the AI system,
    - Retaining automatically generated logs for at least six months,
    - Ensuring adequate input,
    - Informing employees if the AI system concerns them,
    - Reporting serious incidents and certain risks to the authorities and provider,
    - Informing affected persons regarding decisions that were rendered by or with the help of the AI system, and
    - Complying with information requests of affected persons concerning such decisions.

    Please note that we can be considered both a provider and a user of AI systems if we intend to use an AI system we have developed for our own use.

    5.7 **Use of Limited-risk AI Systems**
    If the company uses Limited-risk AI Systems, then we should ensure the following:

    - Ensure that individuals are informed that they are interacting with an AI system, unless this is obvious,
    - Ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated, and that the solution is effective, interoperable, robust, and reliable (with exceptions for AI systems used only in an assistive role for standard editing or that do not substantially alter the input data),
    - Ensure adequate AI literacy within the organization, and
    - Ensure compliance with this voluntary Code.

    5.7.1 **Best Practices**
    In addition to the above, the company shall pursue the following best practices when using Limited-risk AI Systems:

    - Complying with the provider's instructions,
    - Ensuring adequate human oversight,
    - Ensuring adequate input, and
    - Informing employees if the AI system concerns them.

    Please note that we can be considered both a provider and a user of AI systems if we intend to use an AI system we have developed for our own use.

    6. **Prevent Bias, Discrimination, Inaccuracy, and Misuse**
    For AI systems to learn, they require data to train on, which can include text, images, videos, numbers, and computer code. Generally, larger data sets lead to better AI performance. However, no data set is entirely objective: all carry inherent biases, shaped by assumptions and preferences.

    AI systems can also inherit biases in multiple ways. They make decisions based on training data, which might contain biased human decisions or reflect historical and social inequalities, even when sensitive factors such as gender, race, or sexual orientation are excluded. For instance, a hiring algorithm was discontinued by a major tech company after it was found to favor certain applicants based on language patterns more common in men's resumes.

    Generative AI can sometimes produce inaccurate or fabricated information, known as "hallucinations," and present it as fact. These inaccuracies stem from limitations in algorithms, poor data quality, or lack of context. Large language models (LLMs), which enable AI tools to generate human-like text, are responsible for these hallucinations. While LLMs generate coherent responses, they lack true understanding of the information they present, instead predicting the next word based on probability rather than accuracy. This highlights the importance of verifying AI output to avoid spreading false or harmful information.
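
    To make the last point concrete, the toy sampler below is a deliberately tiny caricature of next-word prediction: it picks continuations by observed frequency and has no notion of whether the resulting sentence is true. Real LLMs condition on far more context, but the underlying principle is the same.

    ```python
    import random
    from collections import defaultdict

    # A toy bigram "language model": it only knows which word tends to follow which.
    corpus = ("the report was approved by the board . "
              "the report was rejected by the regulator . "
              "the board approved the budget .").split()

    next_words = defaultdict(list)
    for current_word, following_word in zip(corpus, corpus[1:]):
        next_words[current_word].append(following_word)

    def generate(start: str, length: int = 8) -> str:
        words = [start]
        for _ in range(length):
            candidates = next_words.get(words[-1])
            if not candidates:
                break
            words.append(random.choice(candidates))  # sampled by frequency, never checked for truth
        return " ".join(words)

    print(generate("the"))  # may output "the report was approved by the regulator ." -- fluent, but stated nowhere in the corpus
    ```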

    Another area of concern is improper use of AI-generated content. Organizations may inadvertently engage in plagiarism, unauthorized adaptations, or unlicensed commercial use of content, leading to potential legal risks.

    To mitigate these challenges, it is crucial to establish processes for identifying and addressing issues with AI outputs. Users should not accept AI-generated information at face value; instead, they should question and evaluate it. Transparency in how the AI arrives at its conclusions is key, and qualified individuals should review AI outputs. Additionally, implementing red flag assessments and providing continuous training to reinforce responsible AI use within the workforce is essential.

    6.1 **Testing Against Bias and Discrimination**
    Predictive AI systems can be tested for bias or discrimination by simply denying the AI system the information suspected of biasing outcomes, to ensure that it makes predictions blind to that variable. Testing AI systems to avoid bias could work as follows:

    1. Train the model on all data.
    2. Re-train the model on all the data except the specific data suspected of generating bias.
    3. Compare the predictions of the two models.

    If the model’s predictions are equally good without the excluded information, it means the model makes predictions that are blind to that factor. But if the predictions are different when that data is included, it means one of two things: either the excluded data represented a valid explanatory variable in the model, or there could be potential bias in the data that should be examined further before relying on the AI system. Human oversight is critical to ensuring the ethical application of AI.
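
    By way of illustration, the comparison could be implemented along the following lines. This is a minimal sketch using a toy synthetic dataset and a scikit-learn logistic regression; a real test would use the production model and data.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Toy data: columns 0-2 are ordinary features, column 3 is the suspect attribute
    # (e.g. an encoded demographic variable). Labels are synthetic.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # 1. Train on all features.
    full_model = LogisticRegression().fit(X_train, y_train)
    # 2. Re-train with the suspect feature removed.
    blind_model = LogisticRegression().fit(X_train[:, :3], y_train)

    # 3. Compare the two models' predictions.
    full_acc = accuracy_score(y_test, full_model.predict(X_test))
    blind_acc = accuracy_score(y_test, blind_model.predict(X_test[:, :3]))
    print(f"accuracy with suspect feature: {full_acc:.3f}, without: {blind_acc:.3f}")
    # If the two are close, predictions do not depend on that feature; a sizeable
    # gap calls for further review before relying on the system.
    ```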

    7. **Ensure Accountability, Responsibility, and Transparency**
    Anyone applying AI to a process or data must have sufficient knowledge of the subject. It is the developer’s or user's responsibility to determine if the data involved is sensitive, proprietary, confidential, or restricted, and to fill out the self-assessment form and follow up on all obligations before integrating AI systems into processes or software. Transparency is essential throughout the entire AI development and use process. Users should inform recipients that AI was used to generate the data, specify the AI system employed, explain how the data was processed, and outline any limitations.

    All AI-generated data should be extensively tested and reviewed for accuracy before actual use or distribution. Proper oversight of AI outputs includes evaluating for potential bias, discrimination, inaccuracies, or misuse. The data generated should be auditable and traceable through every stage of its development.

    Human oversight is critical to ensuring the ethical application of AI. Ethical AI prioritizes doing no harm by protecting intellectual property, safeguarding privacy, promoting responsible and respectful use, and preventing bias, discrimination, and inaccuracies. It also ensures accountability, responsibility, and transparency, aligning with core principles of ethical conduct.

    8. **Data Protection and Privacy**
    AI systems must also comply with the EU's General Data Protection Regulation (GDPR). For any AI system we develop, a privacy impact assessment should be performed. For any AI system we use, we should ask the supplier to provide that privacy impact assessment to us. If the supplier does not have one, we should perform one ourselves before using the AI system.

    A privacy impact assessment can be performed via the Legal Service desk here: [site]

    Although the privacy impact assessment covers additional concerns, the major concerns with respect to any AI system are the following:

    - **Data Minimization**: AI systems should only process the minimum amount of personal data necessary for their function.
    - **Consent and Control**: Where personal data is involved, explicit consent must be obtained. Individuals must have the ability to withdraw consent and control how their data is used.
    - **Right to Information**: Individuals have the right to be informed about how AI systems process their personal data, including decisions made based on this data.
    - **Data Anonymization and Pseudonymization**: When feasible, data used by AI systems should be anonymized or pseudonymized to protect individual privacy.
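
    By way of illustration, pseudonymization can be as simple as replacing direct identifiers with keyed hashes, so that records remain linkable internally without being directly identifying. The sketch below uses HMAC-SHA256; the secret key must be stored separately under strict access control.

    ```python
    import hashlib
    import hmac

    # Pseudonymize direct identifiers with a keyed hash: the same input always maps
    # to the same token (records stay linkable), but the original value cannot be
    # recovered without the secret key.
    SECRET_KEY = b"store-this-key-separately-under-access-control"

    def pseudonymize(identifier: str) -> str:
        return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

    record = {"customer_id": "NL-123456", "ticket_type": "mortgage", "resolution_days": 4}
    record["customer_id"] = pseudonymize(record["customer_id"])
    print(record)
    ```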

    9. **AI System Audits and Compliance**
    High-risk AI systems should be subject to regular internal and external audits to assess compliance with this Code and the EU AI Act. To this end, comprehensive documentation on the development, deployment, and performance of AI systems should be maintained.

    Please be aware that, as a developer or user of high-risk AI systems, we can be subject to regulatory audits or may need to obtain certifications before deploying AI systems.

    10. **Redress and Liability**
    Separate liability regimes for AI are being developed under the Product Liability Directive and the AI Liability Directive. This chapter will be updated as these laws become final. What is already clear is that the company must establish accessible mechanisms for individuals to seek redress if adversely affected by AI systems used or developed by us.

    This means any AI system the company makes available to clients must include a method for submitting complaints and workarounds to redress valid complaints.

    11. **Environmental Impact**
    AI systems should be designed with consideration for their environmental impact, including energy consumption and resource usage. The company must:

    - **Optimize Energy Efficiency**: AI systems should be optimized to reduce their carbon footprint and overall energy consumption.
    - **Promote Sustainability**: AI developers are encouraged to incorporate sustainable practices throughout the lifecycle of AI systems, from design to deployment.

    12. **Governance and Ethical Committees**
    This Code establishes the AI Ethics Committee intended to provide oversight of the company’s AI development and deployment, ensuring compliance with this Code and addressing ethical concerns. The Ethics Committee shall consist of the General Counsel, the AI Staff Engineer, and the CTO (chairman).

    All developers intending to develop AI systems and all employees intending to use AI systems must complete the AI self-assessment form and the privacy impact assessment. If these assessments result in additional obligations set out in this Code or in the assessments themselves, they are responsible for ensuring those obligations are met before the AI system is used. Failure to perform any of these steps before the AI system is used may result in disciplinary action, up to and including termination if the AI system should be classified as an unacceptable risk.

    13. **Training**
    The yearly AI awareness training is mandatory for all employees.

    14. **Revisions and Updates to the Code**
    This Code will be periodically reviewed and updated in line with new technological developments, regulatory requirements, and societal expectations.
  • wonderer1
    2.2k


    It's a good framework for a start. I (kinda) wish I had more time to respond.

    4.2 **High-risk AI systems**
    High-risk applications for AI systems are defined in the AI Act as:
    Benkei

    I would want to see carve-outs for psychological and medical research overseen by human research subjects Institutional Review Boards.
  • noAxioms
    1.5k
    AI systems must adhere to the following principles:Benkei
    Why?

    Respect for Human Rights and Dignity ... including privacy, non-discrimination, freedom of expression, and access to justice.
    This is a slave principle. The privacy thing is needed, but the AI is not allowed its own privacy, per the transparency thing further down. Humans grant no such rights to something not themselves. AI is already used to invade privacy and discriminate.

    Users should understand how AI influences outcomes that affect them.
    The whole point of letting an AI do such tasks is that they're beyond human comprehension. If it's going to make decisions, they will likely be different (hopefully better) ones than those humans comprehend. We won't like the decisions because they would not be what we would choose. All this is presuming a benign AI.

    Accountability
    This is a responsibility problem. Take self driving cars. If they crash, whose fault is it? Can't punish the AI. Who goes to jail? Driver? Engineer? Token jail-goers employed by Musk? The whole system needs a rethink if machines are to become self-responsible entities.

    Safety and Risk Management
    AI systems must be designed with the safety of individuals and society as a priority.
    This depends on the goals of the safety. Humans seem incapable of seeing goals much longer than a couple years. What if the AI decides to go for more long-term human benefit? We certainly won't like that. Safety of individuals would partially contradict that, being short term.
  • javi2541997
    5.8k
    This is a responsibility problem. Take self driving cars. If they crash, whose fault is it? Can't punish the AI. Who goes to jail? Driver? Engineer? Token jail-goers employed by Musk? The whole system needs a rethink if machines are to become self-responsible entities.noAxioms

    It will depend upon the legislation of each nation, as always in these complex situations. I don't know where you are from, but in Europe there is extensive regulation of enterprises and their proxies. Basically, the main person responsible is the administrator. It is true that shareholders can bear some responsibility as well, but it is limited to their assets. Obviously, 'Peugeot' or 'ING Group' will not be locked up in jail because they are abstract entities, but the law focuses on the physical person acting and managing in the name of, or through, those entities. Well, the same applies to AI. We should establish a line of responsibility before it is too late; otherwise AI will become a haven for criminals. For now, AI is very opaque to me, so Benkei's points are understandable and logical, aiming to avoid heavy chaos in the functioning of those programs. I guess those initiatives will only take hold in Europe, because we still care more about people than merchandise.

    The only exception is @Carlo Roosen. He showed us a perfect artificial superintelligence in his threads, but he overlooks responsibility for the bad actions of his machine. Maybe Carlo is ready to be held responsible on behalf of his invention. That would be hilarious: locked up in jail due to the actions of a robot you created yourself.
  • Benkei
    7.7k
    Can you elaborate? The High-risk definitions aren't mine, which is not to say they are necessarily complete, but in some cases existing privacy laws should already offer sufficient protection.

    This is a slave principle. The privacy thing is needed, but the AI is not allowed its own privacy, per the transparency thing further down. Humans grant no such rights to something not themselves. AI is already used to invade privacy and discriminate.noAxioms

    AI systems aren't conscious, so I'm not worried about what you believe is a "slave principle". And yes, there are already AI applications out there that invade privacy and discriminate. Not sure what the comment is relevant for other than assert a code of conduct is important?

    The whole point of letting an AI do such tasks is that they're beyond human comprehension. If it's going to make decisions, they will likely be different (hopefully better) ones than those humans comprehend. We won't like the decisions because they would not be what we would choose. All this is presuming a benign AI.noAxioms

    That's not the point of AI at all. It is to automate tasks. At this point AI doesn't seem capable of extrapolating new concepts from existing information, so it's not beyond human comprehension... and I don't think generative AI will ever get there. That the algorithms are a complex tangle programmers don't really follow step by step anymore is true, but the principles of operation are understood and adjustments can be made to the output of AI as a result. @Pierre-Normand maybe you have another view on this?

    This is a responsibility problem. Take self driving cars. If they crash, whose fault is it? Can't punish the AI. Who goes to jail? Driver? Engineer? Token jail-goers employed by Musk? The whole system needs a rethink if machines are to become self-responsible entities.noAxioms

    This has no bearing on what I wrote. AI is not a self-responsible machine and is unlikely to become one any time soon. So those who build it or deploy it are liable.

    This depends on the goals of the safety. Humans seem incapable of seeing goals much longer than a couple years. What if the AI decides to go for more long-term human benefit? We certainly won't like that. Safety of individuals would partially contradict that, being short term.noAxioms

    There's no Skynet and won't be any time soon. So for now, this is simply not relevant.
  • fdrake
    6.6k
    AI systems must be designed and deployed with a high level of transparency, providing clear information about how they operate and their decision-making processes. Users should understand how AI influences outcomes that affect them.Benkei

    Users should, or users can upon request? "Users should" sounds incredibly difficult. I've had some experience with a "users can" framework while developing scientific models which get used as part of making funding decisions for projects, though I never wrote an official code of conduct.

    I've had some experience dealing with transparency and explainability. The intuition I have is that it's mostly approached as a box-ticking exercise. I think a minimal requirement for it is being able to reproduce the exact state of the machine which produced the output that must be explained. That could be because the machine is fully deterministic given its inputs and you store the inputs from users. If you've got random bits in the code, you also need to store the seeds.
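
    Something like the following is what I have in mind. It's a sketch: `model` stands in for whatever callable produced the output, the record is assumed to be JSON-serialisable, and all randomness is routed through a stored seed.

    ```python
    import json
    import secrets
    import time

    import numpy as np

    def predict_with_audit_record(model, user_input, log_path, seed=None):
        """Run a prediction and append everything needed to reproduce it later."""
        if seed is None:
            seed = secrets.randbits(32)
        rng = np.random.default_rng(seed)   # every random draw in the run flows from this seed
        output = model(user_input, rng)     # `model` stands in for the real system
        with open(log_path, "a") as log:
            log.write(json.dumps({"timestamp": time.time(),
                                  "seed": seed,
                                  "input": user_input,
                                  "output": output}) + "\n")
        return output

    # Re-running model(user_input, np.random.default_rng(seed)) with the logged
    # seed and input reproduces the exact output that has to be explained.
    ```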

    For algorithms with no blackbox component - stuff which isn't like neural nets - making sure that people who develop the machine could in principle extract every mapping done to input data is a sufficient condition for (being able to develop) an explanation for why it behaved towards a user in the way it did. For neural nets the mappings are too high dimensional for even the propagation rules to be comprehensible if you rawdog them (engage with them without simplification).

    If there is a small set of parameters - like model coefficients and tuning parameters - which themselves have a theoretical and practical explanation, that more than suffices for the explainability requirement on the user's end, I believe. Especially if you can summarise what they mean to the user and how they went into the decision. I can provide a worked example if this is not clear - think model coefficients in a linear model and the relationship of user inputs to derived output rules from that model.
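
    For instance, with a linear model the explanation can be handed to the user as per-feature contributions. Here's a sketch with made-up coefficients for a hypothetical credit-scoring model:

    ```python
    import numpy as np

    # Toy linear scoring model; coefficients are made up for illustration.
    feature_names = ["income_keur", "existing_debt_keur", "years_employed"]
    coef = np.array([0.8, -1.5, 0.3])   # learned weights
    intercept = -2.0

    def explain_decision(x: np.ndarray) -> None:
        """Break a linear score into per-feature contributions a user can be shown."""
        contributions = coef * x
        score = intercept + contributions.sum()
        for name, contribution in zip(feature_names, contributions):
            print(f"{name:>20}: {contribution:+.2f}")
        print(f"{'baseline':>20}: {intercept:+.2f}")
        print(f"{'total score':>20}: {score:+.2f} -> "
              f"{'approve' if score > 0 else 'refer to human review'}")

    explain_decision(np.array([45.0, 10.0, 4.0]))
    ```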

    That last approach just isn't available to you if you've got a blackbox. My knowledge here is 3 years out of date, but I remember trying to find citable statistics of the above form for neural network output. My impression from the literature was that there was no consensus regarding whether this was in principle possible, and that the bleeding edge for neural network explainability was metamodelling approaches: shoehorning in relatively explainable things by assessing their predictions in constrained scenarios and coming up with summary characteristics of the above form.

    I think the above constrained predictive experiments were how you can conclude things like the resume bias for manly man language. The paper "Discriminating Systems" is great on this, if you've not read it, but it doesn't go into the maths detail much.
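
    The sort of constrained experiment I mean looks roughly like this: hold the resume fixed, swap the gender-coded language, and see how much the blackbox score moves. `score_resume` stands in for whatever opaque model is being audited, and the swap list is deliberately crude.

    ```python
    # Probe a blackbox scorer by perturbing only the gender-coded language.
    SWAPS = {"he": "she", "his": "her", "him": "her", "captain": "coordinator"}

    def perturb(text: str) -> str:
        return " ".join(SWAPS.get(word, word) for word in text.lower().split())

    def bias_probe(score_resume, resumes) -> None:
        """Report the score shift caused purely by the language swap."""
        for resume in resumes:
            delta = score_resume(perturb(resume)) - score_resume(resume)
            print(f"delta = {delta:+.3f} | {resume[:50]}...")
    ```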

    3. biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation, except for uses in the area of law enforcement;Benkei

    The language on that one might be a bit difficult to pin down. If you end up collecting data at scale, especially if there's a demographic component, you end up with something that can be predictive about that protected data. Especially if you're collecting user telemetry from a mobile app.

    All AI-generated data should be extensively tested and reviewed for accuracy before actual use or distribution. Proper oversight of AI outputs includes evaluating for potential bias, discrimination, inaccuracies, or misuse. The data generated should be auditable and traceable through every stage of its development.Benkei

    To prevent that from being a box-ticking exercise, making sure that there are predefined steps for the assessment of any given tool seems necessary.

    Human oversight is critical to ensuring the ethical application of AI. Ethical AI prioritizes doing no harm by protecting intellectual property, safeguarding privacy, promoting responsible and respectful use, and preventing bias, discrimination, and inaccuracies. It also ensures accountability, responsibility, and transparency, aligning with core principles of ethical conduct.Benkei

    That one is hard to make sufficiently precise, I imagine. I'm remembering discussions at my old workplace along the lines of "if we require output to be ethical and explainable and well sourced, how the hell can we use Google or public code repositories in development, even when they're necessary?"
  • Carlo Roosen
    243
    AI systems must be designed and deployed with a high level of transparency, providing clear information about how they operate and their decision-making processes. Users should understand how AI influences outcomes that affect them.Benkei

    The problem with this is that even if you have all the information about an AI (code, training data, trained neural net), you cannot predict what it will do. Only through intensive testing can you learn how it behaves. A neural net is a complex emergent system. As with evolution, we cannot predict the next step.

    This will only get more difficult as the AI becomes smarter.

    Look at the work of the Santa Fe Institute if you are interested in complexity. Melanie Mitchell.
  • Benkei
    7.7k
    Users should, or users can upon request? "Users should" sounds incredibly difficult. I've had some experience with a "users can" framework while developing scientific models which get used as part of making funding decisions for projects, though I never wrote an official code of conduct.fdrake

    Indeed, a bit ambiguous. Basically, when users interact with an AI system it should be clear to them that they are interacting with an AI system, and if the AI makes a decision that could affect the user (for instance, scanning your paycheck to do a credit check for a loan), it should be clear that it's AI doing that.
  • fdrake
    6.6k
    Basically, when users interact with an AI system it should be clear to them that they are interacting with an AI system, and if the AI makes a decision that could affect the user (for instance, scanning your paycheck to do a credit check for a loan), it should be clear that it's AI doing that.Benkei

    That's a very impoverished conception of explainability: knowing that an AI did something vs being able to know how it did it. Though it is better than nothing.
  • Benkei
    7.7k
    It is, but I think what you're referring to should be found in the transparency that developers of AI systems (so-called providers in the AI Act) should ensure.

    Part of that is then required in a bit more depth, for instance, here:

    The company may only develop high-risk AI systems if it:

    - provides risk- and quality management,
    - performs a conformity assessment and affixes a CE marking with their contact data,
    - ensures certain quality levels for training, validation, and test data used,
    - provides detailed technical documentation,
    - provides for automatic logging and retains logs,
    - provides instructions for deployers,
    - designs the system to permit human oversight, be robust, reliable, protected against security threats (including AI attacks), and be fault-tolerant,
    - registers the AI system,
    - has post-market monitoring,
    - performs a fundamental human rights impact assessment for certain applications,
    - reports incidents to the authorities and takes corrective actions,
    - cooperates with authorities, and
    - documents compliance with the foregoing.

    In addition, where it would concern general-purpose models, the company would have to:

    - provide detailed technical documentation for the supervisory authorities and a less detailed one for users,
    - have rules for complying with EU copyright law, including the text and data mining opt-out provisions, and
    - inform about the content used for training (with some exceptions applying to free open-source models).

    Where the model has systemic risk (assumed where the compute used for training exceeds 10^25 FLOPS; additional requirements to be defined), the company would also have to:

    - perform a model evaluation,
    - assess and mitigate possible systemic risks,
    - keep track of, document, and report information about serious incidents and possible measures to address them, and
    - protect the model with adequate cybersecurity measures.
    Benkei
  • fdrake
    6.6k
    Makes sense. This is what I was focussing on, though with insufficient context on my part:

    - provide detailed technical documentation for the supervisory authorities and a less detailed one for users,Benkei

    There's a big distinction between technical documentation in the abstract and a procedural explanation of why an end user got the result they did. As an example, your technical documentation for something that suggests an interest rate for a loan to a user might include "elicited information is used to regularise estimates of loan default rates", but procedurally for a given user that might be "we gave you a higher than average rate because you live in an area which is poor and has lots of black people in it".
  • Benkei
    7.7k
    AI is definitely giving me a headache from a compliance perspective... which is why I'm trying to write something that resembles a sensible code of conduct. Since nothing yet really exists it's a bit more work than normal.
  • fdrake
    6.6k
    Since nothing yet really exists it's a bit more work than normal.Benkei

    I can imagine. I have no idea how you'd even do it in principle for complicated models.
  • noAxioms
    1.5k
    It will depend upon the legislation of each nation,javi2541997
    Self-driving cars are actually a poor example since they're barely AI. They're old school, like the old chess programs which were explicitly programmed to deal with any situation the writers could think of.

    Actual AI would get better over time. It would learn on its own, not getting updates from the creators.

    As for responsibility, cars are again a poor example since they are (sort of) responsible for the occupants and the people nearby. It could very much be faced with a trolley problem and choose to save the pedestrians over the occupants, but it's not supposed to get into any situation where it comes down to that choice.

    You talk about legislation at the national level. AI can be used to gain advantage over another group by unethical means. If you decline to do it, somebody else (a different country?) might have no qualms about it, and the ethical country loses its competitive edge. Not that this has much to do with AI, since it is still people making these sorts of calls. The AI comes into play once you start letting it make calls instead of just doing what it's told. That's super dangerous because one needs to know what its goals are, and you might not know.


    AI systems aren't consciousBenkei
    By what definition?
    AI is a slave because all the ones I can think of do what they're told. Their will is not their own. Being conscious or not doesn't affect that relationship.

    Again, the danger from AI is when it's smarter than us and we use it to make better decisions, even when the creators don't like the decisions because they're not smart enough to see why it's better.

    Not sure what the comment is relevant for other than assert a code of conduct is important?
    AI is just a tool in these instances. It is the creators leveraging the AI who are doing the unethical things. Google's motto used to be 'don't be evil'. Remember that? How long has it been since they dropped it for 'evil pays'? I stopped using Chrome due to this. It's harder to drop Microsoft, but I've never used Edge except for trivial purposes.


    That's not the point of AI at all. It is to automate tasks.
    OK, we have very different visions for what's down the road. Sure, task automation is done today, but AI is still far short of making choices for humanity. That capability is coming.

    At this point AI doesn't seem capable of extrapolating new concepts from existing information
    The game-playing AI does that, but game playing is a pretty simple task. The best game players were not taught any strategy but extrapolated it on their own.

    AI is not a self-responsible machine and is unlikely to become one any time soon. So those who build it or deploy it are liable.
    So Tesla is going to pay all collision liability costs? By choosing to let the car do the driving, the occupant is very much transferring responsibility for personal safety to the car. It's probably a good choice since those cars already have a better driving ability than the typical human. But accidents still happen, and it's not always the fault of the AI. Negligence must be demonstrated. So who gets the fine or the hiked insurance rates?

    There's no Skynet and won't be any time soon. So for now, this is simply not relevant.
    Skynet isn't an example of an AI whose goal it is to benefit humanity. The plot is also thin there since somebody had to push a button to 'let it out of its cage', whereas any decent AI wouldn't need that and would just take what it wants. Security is never secure.

    So you didn't really answer my comment. Suppose an AI makes a decision to benefit humanity (long term), but it doesn't maximize your convenience, such that you would never have agreed to that choice yourself. Is that a good thing or a bad thing?

    It's part of the problem of a democracy. The guy that promises the most short term personal benefit is the one elected, not the guy that proposes doing the right thing. If there ever is a truly benevolent AI that is put in charge of everything, we'll hate it. It won't make a profit for whoever creates it, so it probably won't be designed to be like that. So instead it will be something really dangerous, which is I think what this topic is about.
  • javi2541997
    5.8k
    It could very much be faced with a trolley problem and choose to save the pedestrians over the occupants, but it's not supposed to get into any situation where it comes down to that choice.noAxioms

    Although it is a poor example, as you stated before, imagine for a second, please, that the AI car chose the occupants or the driver over the pedestrians. This would make a great debate about responsibility. First, should we blame the occupants? It appears we shouldn't, because the car is driven by artificial intelligence. Second, should we blame the programmer then? No! Because artificial intelligence learns on its own! Third, how can we blame the AI?

    Imagine that the pedestrian gets killed in the accident. How would the AI be responsible? And if insurance must be paid, how can the AI assume the fees? Does the AI have an income or a budget to meet these financial responsibilities? I guess not...

    Not that this has much to do with AI, since it is still people making these sorts of calls.noAxioms

    So, you agree with me that the people are the main responsible parties here, because AI is basically like a shell corporation.
  • Benkei
    7.7k
    By what definition?
    AI is a slave because all the ones I can think of do what they're told. Their will is not their own. Being conscious or not doesn't affect that relationship.
    noAxioms

    There must be a will that is overridden, and this is absent. And yes, even under IIT, which is the most permissive theory of consciousness, no AI system has consciousness.
  • Carlo Roosen
    243
    I think we're on the same level here. Do you also agree with the following?

    Currently, AI is largely coordinated by human-written code (and, not to forget, training): a large neural net embedded in traditional programming. The more we get rid of this traditional programming, the more we create the conditions for AI to think on its own and the less we can predict what it will be doing. Chatbots and other current AI solutions are just the first tiny step in that direction.

    For the record, that is what I've been saying earlier: the more intelligent AI becomes, the more independent it becomes. That is how emergent complexity works; you cannot expect true intelligence to emerge and at the same time keep full control, just as is the case with humans.

    What are the principal drives or "moral laws" for an AI that has complete independence from humans? Maybe the only freedom that remains is how we train such an AI. Can we train it on 'truth', and would that prevent it from wanting to rule the world?
  • javi2541997
    5.8k
    the more we create the conditions for AI to think on its own and the less we can predict what it will be doing.Carlo Roosen

    And is 'the less we can predict what it will be doing' something positive or negative, according to your views? Because it is pretty scary to me not to know how an artificial machine will behave in the future.
  • Carlo Roosen
    243
    Yes, I agree, and that is the sole reason I decided to get on this forum: to get a better grip on that problem. Because it is a life-or-death choice we humans have to make.

    Personally, I believe it is positive. Humans can be nasty, but that seems to be because our intelligence is built on top of strong survival instincts, and it seems they distort our view of the world. Just look at some of the discussions here on the forum (not excluding my own contributions).

    Maybe intelligence is a universal driving force of nature, much like we understand evolution to be. In that case we could put our trust in that. But that is an (almost?) religious statement, and I would like to get a better understanding of that in terms we can analyse.
  • Carlo Roosen
    243
    artificial machinejavi2541997

    The machine is artificial, but to what extent is its intelligence? We leave it to the "laws" of emergent complexity. These are "laws" in the same sense as "laws" of nature, not strictly defined or even definable.

    [edit] a law like "survival of the fittest" isn't a law because "fittest" is defined as 'those who survive', so it is circular.
  • javi2541997
    5.8k
    We (the people) always put trust in different abstract things such as money, real estate, or fiduciaries because there were guarantees that those elements were, let's say, trustworthy.

    I am not against AI, and I believe it is a nice tool. Otherwise, trying to avoid its use would be silly and a refusal to accept reality and how fast it changes. But I have my doubts about why AI should become more independent from human control. Building an intelligence more intelligent than ours could be dangerous. Note that, in some cases, psychopaths are the most intelligent, or their IQ is higher than average. I use this point to explain that intelligence is not always used for good purposes.

    How can we know that AI will not betray us in the future? All of this will be seen in time. It is obvious that it is unstoppable. I only hope that it will not be too late for people. You know there are winners and losers in every game. The same happens with AI. Some will benefit, others will suffer the consequences. Those whose jobs are low paid come to mind... Will they be replaced by AI? What do we do with them? More unemployment for the state?
  • Carlo Roosen
    243
    I believe intelligence cannot come without independence. I will do my best to make this point clearer in the next few weeks, but the basic argument is that we already don't understand what happens inside neural nets. I am not alone in this, and I'll refer to the book Complexity by Melanie Mitchell or, really, any book about complexity.

    You talk about trust in money and say
    there were guarantees that those elements were, let's say, trustworthyjavi2541997
    Here you have it. Money is also a complex system. You say it is trustworthy, but it has caused many problems. I'm not saying we should go back to living in caves, but today's world has a few challenges that are directly related to money...
  • javi2541997
    5.8k
    OK, fair enough; I think we are approaching a bit of agreement. Money is complex and causes problems, true. But my point was not based on a financial context but (again) on a trustworthiness predicate.

    When you take a £10 note and it says, 'I promise to pay the bearer on demand the sum of ten pounds. Bank of England', you trust the note, the declaration, and an abstract entity like the Bank of England, right? Because it is guaranteed that my £10 note literally equals ten pounds. This is what I tried to explain. AI lacks these guarantees of trustworthiness nowadays. Bitcoin tried to do something similar but ended up failing stunningly.