Comments

  • The (possible) Dangers of AI Technology


    I had ChatGPT anonymize the code of conduct I'm writing. So far:

    ---

    1. **INTRODUCTION**
    The European Union (EU) has introduced regulation of artificial intelligence (AI) through the EU AI Act (Regulation (EU) 2024/1689), which establishes a legal framework for AI systems.

    The Code establishes guiding principles and obligations for the company and all of its subsidiaries (together, “the company”) that design, develop, deploy, or manage Artificial Intelligence (AI) systems. The purpose is to promote the safe, ethical, and lawful use of AI technologies in accordance with the principles of the EU AI Act, ensuring the protection of fundamental rights, safety, and public trust.

    2. **SCOPE**
    This Code applies to:

    - All developers, providers, and users of AI systems operating within or targeting the EU market.
    - AI systems categorized under various risk levels (low, limited, high, and unacceptable risk) as defined by the EU AI Act.

    An ‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

    3. **FUNDAMENTAL PRINCIPLES**
    AI systems must adhere to the following principles:

    3.1 **Respect for Human Rights and Dignity**
    AI systems must respect the fundamental rights as enshrined in the EU Charter of Fundamental Rights, including privacy, non-discrimination, freedom of expression, and access to justice.

    3.2 **Fairness and Non-discrimination**
    AI systems must not lead to discriminatory outcomes. Measures should be in place to prevent, monitor, and mitigate bias in AI models.

    3.3 **Transparency and Explainability**
    AI systems must be designed and deployed with a high level of transparency, providing clear information about how they operate and their decision-making processes. Users should understand how AI influences outcomes that affect them.

    3.4 **Accountability**
    The company is accountable for the AI systems it designs, deploys, or manages. Clear governance structures should be in place to assign responsibility for compliance with this Code and the EU AI Act.

    3.5 **Safety and Risk Management**
    AI systems must be designed with the safety of individuals and society as a priority. This includes risk assessment and mitigation strategies to prevent harmful impacts or unintended consequences.

    4. **CLASSIFICATION OF AI SYSTEMS BY RISK LEVEL**
    To help you with the classification of the AI system you intend to develop or use, you can perform the AI self-assessment in the Legal Service Desk environment found here: [site]

    4.1 **Unacceptable risks**
    AI systems that pose an unacceptable risk to human rights, such as those that manipulate human behaviour or exploit vulnerable groups, are strictly prohibited. These include:

    1. subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques;
    2. an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability, or a specific social or economic situation;
    3. biometric categorisation systems that individually categorise natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation, except for uses in the area of law enforcement;
    4. social scoring AI systems used for evaluation or classification of natural persons or groups of persons over a certain period based on their social behaviour or known, inferred, or predicted personal or personality characteristics;
    5. ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless strictly necessary for certain objectives;
    6. risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics;
    7. AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;
    8. AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons.

    In addition, the literature bears out that biometric categorisation systems have abysmal accuracy rates, that predictive policing generates racist and sexist outputs, and that emotion recognition in high-risk areas has little to no ability to objectively measure reactions (together with the prohibited AI systems above, “Unethical AI”). These systems can also have major impacts on the rights to free speech, privacy, protest, and assembly.

    As a result, the company will not develop, use, or market Unethical AI, even in countries where it is not prohibited.

    4.2 **High-risk AI systems**
    High-risk applications for AI systems are defined in the AI Act as:

    1. AI systems that are intended to be used as a safety component of a product, or that are themselves a product, that has to undergo a third-party conformity assessment (e.g., toys, medical devices, in vitro diagnostic medical devices);
    2. biometrics including emotion recognition;
    3. critical infrastructure;
    4. education and vocational training;
    5. employment, workers management, and access to self-employment;
    6. access to and enjoyment of essential private services and essential public services and benefits;
    7. law enforcement;
    8. migration, asylum, and border control management; and
    9. administration of justice and democratic processes.

    This list omits other important areas, such as AI used in media, recommender systems, science and academia (e.g., experiments, drug discovery, research, hypothesis testing, parts of medicine), most of finance and trading, most types of insurance, and specific consumer-facing applications, such as chatbots and pricing algorithms, which pose significant risks to individuals and society. In particular, the latter have been shown to provide bad advice or produce reputation-damaging outputs.

    As a result, in addition to the above list, all AI systems related to pricing algorithms, credit scoring, and chatbots will be considered “high-risk” by the company.
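
    A minimal sketch of how the classification step of the self-assessment could look in code, including the company-specific high-risk additions above (the tag names and rule set are illustrative assumptions, not the Legal Service Desk implementation):

    ```python
    # Toy sketch of the risk-classification step from the AI self-assessment.
    # Tag names and rules are illustrative assumptions only.
    PROHIBITED = {"social_scoring", "subliminal_manipulation",
                  "untargeted_facial_scraping"}
    HIGH_RISK_ACT = {"biometrics", "critical_infrastructure", "education",
                     "employment", "essential_services", "law_enforcement",
                     "migration", "justice"}
    HIGH_RISK_COMPANY = {"pricing_algorithm", "credit_scoring", "chatbot"}

    def classify(use_case_tags: set[str]) -> str:
        """Map a set of use-case tags to the Code's risk categories."""
        if use_case_tags & PROHIBITED:
            return "unacceptable risk: do not develop, use, or market"
        if use_case_tags & (HIGH_RISK_ACT | HIGH_RISK_COMPANY):
            return "high risk"
        return "limited risk"

    print(classify({"chatbot"}))                # high risk (company rule)
    print(classify({"ticket_eta_estimation"}))  # limited risk
    ```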

    4.2.1 **Development of high-risk AI systems**
    The company may only develop high-risk AI systems if it:

    - provides risk- and quality management,
    - performs a conformity assessment and affixes a CE marking with their contact data,
    - ensures certain quality levels for training, validation, and test data used,
    - provides detailed technical documentation,
    - provides for automatic logging and retains logs,
    - provides instructions for deployers,
    - designs the system to permit human oversight, be robust, reliable, protected against security threats (including AI attacks), and be fault-tolerant,
    - registers the AI system,
    - has post-market monitoring,
    - performs a fundamental human rights impact assessment for certain applications,
    - reports incidents to the authorities and takes corrective actions,
    - cooperates with authorities, and
    - documents compliance with the foregoing.

    In addition, for general-purpose AI models, the company would have to:

    - provide detailed technical documentation for the supervisory authorities and a less detailed version for users,
    - have rules for complying with EU copyright law, including the text and data mining opt-out provisions, and
    - inform about the content used for training (with some exceptions applying to free open-source models).

    Where the model poses systemic risk (assumed at 10^25 FLOPS used for training; additional requirements to be defined; a rough threshold calculation is sketched after this list), the company would additionally have to:
    - perform a model evaluation,
    - assess and mitigate possible systemic risks,
    - keep track of, document, and report information about serious incidents and possible measures to address them, and
    - protect the model with adequate cybersecurity measures.
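
    A rough way to check the 10^25 FLOPS threshold, using the common 6 × parameters × tokens approximation for transformer training compute (a heuristic we assume for illustration; the Act prescribes the threshold, not an estimation method):

    ```python
    # Rough check against the EU AI Act systemic-risk compute threshold.
    # Uses the widely cited ~6 FLOPs per parameter per training token
    # approximation (an assumption, not mandated by the Act).
    SYSTEMIC_RISK_THRESHOLD = 1e25  # cumulative training compute in FLOPs

    def training_flops(n_params: float, n_tokens: float) -> float:
        return 6 * n_params * n_tokens

    def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
        return training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD

    # Hypothetical example: a 70B-parameter model trained on 15T tokens.
    flops = training_flops(70e9, 15e12)  # ~6.3e24 FLOPs, just under the bar
    print(f"{flops:.1e} FLOPs, systemic risk presumed: "
          f"{presumed_systemic_risk(70e9, 15e12)}")
    ```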

    4.3 **Limited-risk AI Systems**
    AI systems posing limited or no risk are AI systems not falling within the scope of the foregoing high-risk and unacceptable-risk categories.

    4.3.1 **Development of Limited-risk AI Systems**
    If the company develops Limited-risk AI Systems, then it should ensure the following:

    - ensure that individuals are informed that they are interacting with an AI system, unless this is obvious,
    - ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated, and that the solution is effective, interoperable, robust, and reliable (with exceptions for AI systems used only in an assistive role for standard editing or that do not substantially alter the input data; a marking sketch follows this list),
    - ensure adequate AI literacy within the organization, and
    - ensure compliance with this voluntary Code.
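
    A minimal sketch of the machine-readable marking referenced above, assuming a JSON sidecar format of our own design (the Act requires marking but does not prescribe this format; a production system would more likely adopt a standard such as C2PA):

    ```python
    # Minimal sketch of a machine-readable "AI-generated" marker as a JSON
    # sidecar. The schema is our own assumption, not a mandated format.
    import hashlib
    import json
    from datetime import datetime, timezone

    def mark_ai_output(text: str, system_name: str) -> dict:
        """Build provenance metadata to store or send alongside the output."""
        return {
            "ai_generated": True,
            "system": system_name,
            "created_utc": datetime.now(timezone.utc).isoformat(),
            # The hash binds the marker to the exact output it describes.
            "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        }

    output = "Estimated resolution time: 4 hours."
    print(json.dumps(mark_ai_output(output, "ticket-eta-model"), indent=2))
    ```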

    In addition to the above, the company shall pursue the following best practices when developing Limited-risk AI Systems:

    - provide risk- and quality management,
    - provide detailed technical documentation,
    - design the system to permit human oversight, be robust, reliable, protected against security threats (including AI attacks), and be fault-tolerant, and
    - perform a fundamental human rights impact assessment.

    5. **USE OF AI SYSTEMS**
    Irrespective of the risk qualification of an AI system, when using any AI systems, employees are prohibited from submitting any intellectual property, sensitive data, or personal data to AI systems.

    5.1 **Personal Data**
    Submitting personal or sensitive data can lead to privacy violations, risking the confidentiality of individuals' information and the organization’s reputation. Compliance with data protection law is crucial. An exception applies if the AI system is installed in a company-controlled environment and, where client data is concerned, there are instructions from the client for the intended processing of that personal data. Please note that anonymized data (data for which we do not hold the encryption key) is not considered personal data.
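
    To make the prohibition practical, here is a minimal sketch of a pre-submission check that redacts obvious personal data before a prompt leaves a company-controlled environment (the regex patterns are simplistic assumptions for illustration; production use would need a vetted PII-detection service):

    ```python
    # Illustrative pre-submission check: redact obvious personal data
    # before a prompt is sent to an external AI system. The patterns are
    # simplistic assumptions; names, for instance, are not caught.
    import re

    PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    }

    def redact(prompt: str) -> str:
        for label, pattern in PATTERNS.items():
            prompt = pattern.sub(f"[{label} removed]", prompt)
        return prompt

    print(redact("Client Jane (jane@example.com, +31 6 1234 5678) reported a bug."))
    ```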

    5.2 **Intellectual Property Protection**
    Sharing source code or proprietary algorithms can jeopardize the company's competitive advantage and lead to intellectual property theft. An exception applies if the AI system is installed in a company-controlled environment.

    5.3 **Data Integrity**
    Submitting sensitive data to AI systems can result in unintended use or manipulation of that data, compromising its integrity and leading to erroneous outcomes. An exception may apply if the AI system is installed in a controlled environment. Please contact the Information Security Officer to ensure data integrity is protected.

    5.4 **Misuse**
    AI systems can unintentionally learn from submitted data, creating a risk of misuse or unauthorized access to that information. This can lead to severe security breaches and data leaks. An exception may apply if the AI system is installed in a controlled environment. Please contact the AI Staff Engineer to ensure the AI system will not lead to unintended misuse or unauthorized access.

    5.5 **Trust and Accountability**
    By ensuring that sensitive information is not shared, we uphold a culture of trust and accountability, reinforcing our commitment to ethical AI use. An exception may apply if the AI system is installed in a controlled environment. Please contact the Information Security Officer to ensure sensitive information is protected.

    5.6 **Use of High-risk AI Systems**
    If we use high-risk AI systems, then there are additional obligations on the use of such AI systems. These obligations include:

    - Complying with the provider's instructions,
    - Ensuring adequate human oversight,
    - Participating in the provider's post-market monitoring of the AI system,
    - Retaining automatically generated logs for at least six months (a retention sketch follows this list),
    - Ensuring adequate input,
    - Informing employees if the AI system concerns them,
    - Reporting serious incidents and certain risks to the authorities and provider,
    - Informing affected persons regarding decisions that were rendered by or with the help of the AI system, and
    - Complying with information requests of affected persons concerning such decisions.
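
    A minimal sketch of the six-month log-retention check referenced above (the directory layout and the archive-or-delete policy are our own assumptions; the Act sets only the minimum retention period):

    ```python
    # Sketch: find automatically generated logs older than the six-month
    # minimum retention period so they can be archived or reviewed for
    # deletion per company policy (paths are hypothetical).
    from datetime import datetime, timedelta, timezone
    from pathlib import Path

    RETENTION = timedelta(days=183)  # at least six months

    def logs_past_retention(log_dir: str):
        cutoff = datetime.now(timezone.utc) - RETENTION
        for f in Path(log_dir).glob("*.log"):
            mtime = datetime.fromtimestamp(f.stat().st_mtime, tz=timezone.utc)
            if mtime < cutoff:
                yield f  # minimum met; archival or deletion is a policy choice

    for f in logs_past_retention("/var/log/ai-system"):  # hypothetical path
        print(f"minimum retention period elapsed: {f}")
    ```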

    Please note that we can be considered both a provider and a user of AI systems if we intend to use an AI system we have developed ourselves.

    5.7 **Use of Limited-risk AI Systems**
    If the company uses Limited-risk AI Systems, then we should ensure the following:

    - Ensure that individuals are informed that they are interacting with an AI system, unless this is obvious,
    - Ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated, and that the solution is effective, interoperable, robust, and reliable (with exceptions for AI systems used only in an assistive role for standard editing or that do not substantially alter the input data),
    - Ensure adequate AI literacy within the organization, and
    - Ensure compliance with this voluntary Code.

    5.7.1 **Best Practices**
    In addition to the above, the company shall pursue the following best practices when using Limited-risk AI Systems:

    - Complying with the provider's instructions,
    - Ensuring adequate human oversight,
    - Ensuring adequate input, and
    - Informing employees if the AI system concerns them.

    Please note that we can be considered both a provider and a user of AI systems if we intend to use an AI system we have developed ourselves.

    6. **Prevent Bias, Discrimination, Inaccuracy, and Misuse**
    For AI systems to learn, they require data to train on, which can include text, images, videos, numbers, and computer code. Generally, larger data sets lead to better AI performance. However, no data set is entirely objective, as they all carry inherent biases, shaped by assumptions and preferences.

    AI systems can also inherit biases in multiple ways. They make decisions based on training data, which might contain biased human decisions or reflect historical and social inequalities, even when sensitive factors such as gender, race, or sexual orientation are excluded. For instance, a hiring algorithm was discontinued by a major tech company after it was found to favor certain applicants based on language patterns more common in men's resumes.

    Generative AI can sometimes produce inaccurate or fabricated information, known as "hallucinations," and present it as fact. These inaccuracies stem from limitations in algorithms, poor data quality, or lack of context. Large language models (LLMs), which enable AI tools to generate human-like text, are responsible for these hallucinations. While LLMs generate coherent responses, they lack true understanding of the information they present, instead predicting the next word based on probability rather than accuracy. This highlights the importance of verifying AI output to avoid spreading false or harmful information.

    Another area of concern is improper use of AI-generated content. Organizations may inadvertently engage in plagiarism, unauthorized adaptations, or unlicensed commercial use of content, leading to potential legal risks.

    To mitigate these challenges, it is crucial to establish processes for identifying and addressing issues with AI outputs. Users should not accept AI-generated information at face value; instead, they should question and evaluate it. Transparency in how the AI arrives at its conclusions is key, and qualified individuals should review AI outputs. Additionally, implementing red flag assessments and providing continuous training to reinforce responsible AI use within the workforce is essential.

    6.1 **Testing Against Bias and Discrimination**
    Predictive AI systems can be tested for bias or discrimination by simply denying the AI system the information suspected of biasing outcomes, to ensure that it makes predictions blind to that variable. Testing AI systems to avoid bias could work as follows:

    1. Train the model on all data.
    2. Then re-train the model on all the data except specific data suspected of generating bias.
    3. Review the model’s predictions.

    If the model’s predictions are equally good without the excluded information, it means the model makes predictions that are blind to that factor. But if the predictions are different when that data is included, it means one of two things: either the excluded data represented a valid explanatory variable in the model, or there could be potential bias in the data that should be examined further before relying on the AI system. Human oversight is critical to ensuring the ethical application of AI.
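
    A minimal sketch of this ablation test, assuming tabular numeric data and using scikit-learn (the dataset, column names, and model choice are hypothetical):

    ```python
    # Sketch of the train / re-train-without-suspect-feature / compare
    # procedure described above. Dataset, columns, and model are hypothetical.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("applicants.csv")  # hypothetical numeric features + target
    suspect = "gender"                  # feature suspected of biasing outcomes

    X_train, X_test, y_train, y_test = train_test_split(
        df.drop(columns=["hired"]), df["hired"], test_size=0.3, random_state=0)

    def fit_and_score(drop_cols):
        model = GradientBoostingClassifier(random_state=0)
        model.fit(X_train.drop(columns=drop_cols), y_train)
        return accuracy_score(y_test, model.predict(X_test.drop(columns=drop_cols)))

    score_all = fit_and_score([])           # 1. train on all data
    score_blind = fit_and_score([suspect])  # 2. re-train without the feature

    # 3. Review: a notable gap means the feature carries signal, i.e. either a
    # valid explanatory variable or a proxy for bias needing human examination.
    print(f"with {suspect}: {score_all:.3f}, without: {score_blind:.3f}")
    ```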

    7. **Ensure Accountability, Responsibility, and Transparency**
    Anyone applying AI to a process or data must have sufficient knowledge of the subject. It is the developer’s or user's responsibility to determine if the data involved is sensitive, proprietary, confidential, or restricted, and to fill out the self-assessment form and follow up on all obligations before integrating AI systems into processes or software. Transparency is essential throughout the entire AI development and use process. Users should inform recipients that AI was used to generate the data, specify the AI system employed, explain how the data was processed, and outline any limitations.

    All AI-generated data should be extensively tested and reviewed for accuracy before actual use or distribution. Proper oversight of AI outputs includes evaluating for potential bias, discrimination, inaccuracies, or misuse. The data generated should be auditable and traceable through every stage of its development.

    Human oversight is critical to ensuring the ethical application of AI. Ethical AI prioritizes doing no harm by protecting intellectual property, safeguarding privacy, promoting responsible and respectful use, and preventing bias, discrimination, and inaccuracies. It also ensures accountability, responsibility, and transparency, aligning with core principles of ethical conduct.

    8. **Data Protection and Privacy**
    AI systems must also comply with the EU's General Data Protection Regulation (GDPR). For any AI system we develop, a privacy impact assessment should be performed. For any AI system we use, we should ask the supplier to provide that privacy impact assessment to us. If the supplier does not have one, we should perform one ourselves before using the AI system.

    A privacy impact assessment can be performed via the Legal Service desk here: [site]

    Although the privacy impact assessment covers additional concerns, the major concerns with respect to any AI system are the following:

    - **Data Minimization**: AI systems should only process the minimum amount of personal data necessary for their function.
    - **Consent and Control**: Where personal data is involved, explicit consent must be obtained. Individuals must have the ability to withdraw consent and control how their data is used.
    - **Right to Information**: Individuals have the right to be informed about how AI systems process their personal data, including decisions made based on this data.
    - **Data Anonymization and Pseudonymization**: When feasible, data used by AI systems should be anonymized or pseudonymized to protect individual privacy (a minimal sketch follows).
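
    A minimal pseudonymization sketch, replacing direct identifiers with keyed hashes so records stay linkable internally while the identifier is not recoverable without the secret (field names are hypothetical; under the GDPR the result is pseudonymized, not anonymized, data):

    ```python
    # Minimal pseudonymization sketch using keyed hashes (HMAC). Field
    # names and the key-handling approach are illustrative assumptions.
    import hashlib
    import hmac

    SECRET_KEY = b"store-me-in-a-key-vault"  # assumption: managed secret

    def pseudonymize(value: str) -> str:
        return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

    record = {"customer_id": "C-1029", "email": "jane@example.com", "balance": 1250}
    safe_record = {
        "customer_id": pseudonymize(record["customer_id"]),
        "email": pseudonymize(record["email"]),
        "balance": record["balance"],  # non-identifying fields kept as-is
    }
    print(safe_record)
    ```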

    9. **AI System Audits and Compliance**
    High-risk AI systems should be subject to regular internal and external audits to assess compliance with this Code and the EU AI Act. To this end, comprehensive documentation on the development, deployment, and performance of AI systems should be maintained.

    Please be aware that, as a developer or user of high-risk AI systems, we can be subject to regulatory audits or need to obtain certifications before deploying AI systems.

    10. **Redress and Liability**
    Separate liability regimes for AI are being developed under the Product Liability Directive and the AI Liability Directive. This chapter will be updated as these laws become final. What is already clear is that the company must establish accessible mechanisms for individuals to seek redress if adversely affected by AI systems used or developed by us.

    This means any AI system the company makes available to clients must include a method for submitting complaints and mechanisms to redress valid complaints.

    11. **Environmental Impact**
    AI systems should be designed with consideration for their environmental impact, including energy consumption and resource usage. The company must:

    - **Optimize Energy Efficiency**: AI systems should be optimized to reduce their carbon footprint and overall energy consumption.
    - **Promote Sustainability**: AI developers are encouraged to incorporate sustainable practices throughout the lifecycle of AI systems, from design to deployment.

    12. **Governance and Ethical Committees**
    This Code establishes the AI Ethics Committee intended to provide oversight of the company’s AI development and deployment, ensuring compliance with this Code and addressing ethical concerns. The Ethics Committee shall consist of the General Counsel, the AI Staff Engineer, and the CTO (chairman).

    All developers intending to develop AI systems and all employees intending to use AI systems must complete the AI self-assessment form and the privacy impact assessment. If these assessments result in additional obligations under this Code or the assessments themselves, they are responsible for ensuring those obligations are met before the AI system is used. Failure to perform any of these steps before the AI system is used may result in disciplinary action, up to and including termination if the AI system should be classified as an unacceptable risk.

    13. **Training**
    The yearly AI awareness training is mandatory for all employees.

    14. **Revisions and Updates to the Code**
    This Code will be periodically reviewed and updated in line with new technological developments, regulatory requirements, and societal expectations.
  • Israel killing civilians in Gaza and the West Bank
    I shared citations with you setting out the facts. And you're just going "what about what I saw in the news?". The number of arrests certainly isn't indicative now, is it?
  • Cryptocurrency
    We were talking about the taxes at the time of the Pharaohs. They were necessarily simple. (Tarskian)

    So am I.
  • Cryptocurrency
    The Egyptian tax on a farmer's harvest is not the same as modern personal income tax. The farmer did not have to give any information to facilitate the collection of that tax. (Tarskian)

    You're a funny man and obviously have no idea how taxes worked in Egypt. Look it up, because you're miles off. Farmers had to give information about their harvest and livestock.

    The three countries you mention have huge issues with modern slavery or human trafficking, and they are not favoured destinations because immigrants travel for handouts but because people tend to flee there for safety and economic opportunity. The economic immigrant abusing social benefits is just a racist canard.

    Also, nice false analogy showing a picture of poverty and a rich bitch on the beach.
  • Cryptocurrency
    The Egyptian tax collector would measure the farmer's land and compute taxes based on that information. He would not ask the farmer if he somehow made some more money in other ways and try to get half of that too. (Tarskian)

    Yes, they were very equitable back then, until they weren't and the farmers starved. Maybe read a history book or something. But in any case, you're not refuting the point that income tax has existed for millennia.

    I find the practice of demanding people to fill out a tax return form to be particularly detestable. Why on earth would I give that kind of information to someone else? (Tarskian)

    Back in the day, when people weren't paying lawyers and accountants tons of money to avoid paying taxes, tax forms weren't really a thing. Something to do with the inherent greed of the many who do want government services like the enforcement of contracts, basic utilities, and safety but don't want to pay for them. If only the system weren't so shit that the government needed this information to be able to tax people.

    Seriously, I don't want to live in a country where the ruling mafia asks me how much money I have made last year and then demands that I give them half. (Tarskian)

    Good luck in those failed states when you get sick.
  • Israel killing civilians in Gaza and the West Bank
    Regarding criminality we can all do our own research and make up our minds. Here in the states the issue of which side is more criminal isn't close. There have been many, many arrests on the pro-palestine side and very few on the pro-israel side. They block highways, destroy shops, violate noise ordinances, occasionally commit assaults... but you're free to believe as you like. (BitconnectCarlos)

    It's not a matter of belief; it's a matter of fact that the pro-Israeli side has committed violence in the US as well, and that the majority (97%) of all protests on both sides have been peaceful.
  • Israel killing civilians in Gaza and the West Bank
    Anti-zionism is effectively anti-semitism. (BitconnectCarlos)

    No, it isn't. One is opposition to a political idea; the other is just plain hatred.

    And it's this sort of dumb shit that causes so many people to not care about the distinction anymore.
  • Cryptocurrency
    Certainly not to the extent that it exists in the West. For example, personal income taxation was introduced only in 1913 while the human race has been around for almost 300 000 years. Governments outside the West may also have it on the books but until this day they still do not collect it and they probably never will. (Tarskian)

    Having to deliver 15 sacks of grain you harvested is effectively income tax. It has existed for quite a bit longer, at least since the Egyptians.
  • Israel killing civilians in Gaza and the West Bank
    Pro-Palestine protests also tend to be more prone to criminality than pro-Israel ones which explains the more heavy-handed treatment. (BitconnectCarlos)

    Based on what? I'm sure that's the excuse the police will give you.

    Also, the fact that people confuse Jews with Israel is on the heads of many of your brothers and sisters insisting for decades that any criticism of Israel was "anti-semitism" or that "anti-zionism" = "anti-semitism". The guilt over WWII has been wielded as an instrument, and setting up Israel as the "Jewish homeland" really makes things confusing for most people.

    EDIT: I asked Perplexity:

    Based on the search results provided, there have been some instances of violence at pro-Israeli demonstrations and counter-protests, though the overall picture is complex. Here are the key points:

    ## Violent Incidents Involving Pro-Israel Groups

    - At UCLA, a pro-Israel mob violently attacked a peaceful pro-Palestinian encampment on campus[1][2]. The attackers, described as largely non-student age individuals, used fireworks, pepper spray, sticks, stones, and metal fencing to assault students[1].

    - Counter-protesters, identified as pro-Israel, attempted to storm a Palestine solidarity encampment at UCLA, leading to violent clashes[2]. They tore down barricades, shot fireworks into the encampment, and sprayed irritant gases[2].

    - In Chicago during the Democratic National Convention, a protest organized by pro-Hamas groups turned violent, with demonstrators throwing objects at police and surrounding a taxi with passengers inside[3].

    ## Context and Broader Trends

    - While these violent incidents have occurred, it's important to note that the vast majority of demonstrations related to the Israel-Palestine conflict in the US have been peaceful[5].

    - According to ACLED data, 97% of student demonstrations related to the conflict between October 7, 2023, and May 3, 2024, remained peaceful[5].

    - Pro-Palestinian demonstrations have also faced accusations of antisemitism and resulted in violence in some cases, particularly in Europe[4].

    - Israeli authorities have also cracked down on anti-war protests within Israel, with some restrictions placed on demonstrations[4].

    It's crucial to recognize that while there have been violent incidents involving pro-Israel groups, violence has not been characteristic of all pro-Israel demonstrations. The situation remains complex, with tensions high on both sides of the conflict.

    Citations:
    [1] https://www.aljazeera.com/news/2024/5/1/ucla-clashes-pro-palestinian-protesters-attacked-by-israel-supporters
    [2] https://dailybruin.com/2024/05/01/pro-israel-counter-protesters-attempt-to-storm-encampment-sparking-violence
    [3] https://www.nbcnews.com/news/-israeli-consulate-tonight-groups-one-dncs-violent-protests-rcna167384
    [4] https://en.wikipedia.org/wiki/Israel%E2%80%93Hamas_war_protests
    [5] https://acleddata.com/2024/05/10/us-student-pro-palestine-demonstrations-remain-overwhelmingly-peaceful-acled-brief/
    [6] https://www.cbsnews.com/losangeles/news/pro-israel-group-to-hold-counterdemonstration-on-ucla-campus/
    [7] https://www.timesofisrael.com/us-jewish-students-say-pro-israel-violence-at-ucla-protest-camp-undercuts-advocacy/
    [8] https://www.youtube.com/watch?v=gNrkh8V8IMw
  • Poets and tyrants in the Republic, Book I
    You are correct. It had slipped my mind when I was skimming the text again.
  • Cryptocurrency
    I can't find how Alameda Research was set up, whether it was a single- or two-tier structure. It appears single-tier, making the CEO the representative of the company and responsible for its decisions.

    From the histories, it seems SBF took some distance from Alameda Research, started working for FTX, and put Ellison and another trader in charge as co-CEOs. Ellison ended up as sole CEO later when the other co-CEO left. Before the meltdown, SBF apparently told everyone to take out personal loans against the company; there were $5 billion in personal loans on the balance sheet. It's not clear when they were entered into and whether SBF was CEO at the time, or the two co-CEOs, or just Ellison. In any case, who lends $5 billion without collateral? Only a criminally liable idiot when it goes south.

    Alameda Research had special privileges allowing it to directly use customer deposits. SBF probably thought that the insane 65 billion USD line of credit from FTX to Alameda Research made that OK. This credit line (again, INSANELY high) was never disclosed to investors, FTX clients, or auditors.
    Some people would also deposit money directly into the account of Alameda Research, which would skim money off it for expenses and only then forward it to FTX.

    It was FTX that transferred the 8 billion USD in deposits to Alameda Research. That does seem to be mostly on SBF's shoulders then. It's not clear what the basis for the transfer was. Was it a loan? Some kind of asset swap? Or really as dumb as a straight transfer? (It couldn't be the credit line, because drawing on that would be done by Alameda.)

    But Ellison didn't return the money as she should have.

    As far as I'm concerned, the whole thing stinks to high heaven. The hidden credit line, the ridiculous amount of personal loans, the absence of appropriate hedges (they were all traders, for crying out loud!), and the failure to hire risk managers and give them appropriate tools are really on the company leadership.

    As far as I'm concerned Ellison got off way too lightly.
  • Israel killing civilians in Gaza and the West Bank
    Yes, because you do not care about police action in the Netherlands, only to raise it in a thread where it has zero bearing. The subject is the Israeli-Palestinian conflict, not what some police officers might think of it. I can assure you that police officers not wanting to protect Jewish locations are a minority compared to pro-Israeli officers, and they are not out there beating protesters, as has happened with pro-Palestinian protesters. So, quite frankly, I don't care, as it is a non-issue.
  • US Election 2024 (All general discussion)
    Seems well corroborated across different sources, but I wasn't aware he was running?
  • Israel killing civilians in Gaza and the West Bank
    If only you would've thrown a hissy fit about Dutch police refusing to act against Extinction Rebellion, which happened months ago, because they were conscientious objectors too, I would actually think you were raising this in good faith. Or complained about the excessive violence by Dutch police against covid protesters.

    I love the selective outrage so you can continue to play-act being a victim while the most moral army in the world keeps killing civilians and has apparently bought into the insanity of de-escalation through escalation. Two world wars started because different sides thought they had more to gain through violence, but don't let that stop you from supporting idiots.
  • Poets and tyrants in the Republic, Book I
    A different interpretation just occurred to me and we might be reading more into it than he meant.

    He seems to be nice to the poets, but is he really? He only believes the idea of doing good to friends and harming enemies didn't originate with them. It doesn't logically follow that Socrates thinks poets are absolved from wrongdoing. And we find no irony in his approach as a result.
  • Poets and tyrants in the Republic, Book I
    Maybe setting up a guilt by association? The wealthy pay the piper and he plays their tune. So the poet is just a tool.

    Or undermining any claims to authority with respect to wealthy men and poets alike.

    Other than that, I've got nothing.
  • Cryptocurrency
    Whether there was damage is irrelevant. You do not use the money of another company's clients, which you are holding on behalf of that company, to pay off your loans. You do not invest with the money of another company's clients that you are holding on behalf of that company. He invested with money that wasn't his, which is only permitted with the appropriate licenses and while meeting certain capital requirements. The company neither had the license nor came remotely close to the capital requirements needed had it held such a license. That Solana bounced back was pure luck; it really could've been a huge loss.

    What I'm more baffled about is what SBF's role was at Alameda Research. How come the CEO gets away with this shit? She's the one presumably running the company! Is that just because she hung SBF out to dry? And if SBF was just a shareholder, then he again fucked up by trying to direct the company, which wasn't his role as a shareholder.

    And Michael Lewis has a tendency to withhold pertinent facts. How many of those billions were paid out from the FTX holdings SBF and Ellison forfeited? Not clear to me. If they were funded from the FTX holdings, then "depositors" had a loss; they just managed to cover it with other gains.
  • Israel killing civilians in Gaza and the West Bank
    It's simply about power. If you're powerful enough, humanitarian considerations don't matter, because it's not beneficial to restrain yourself. Exercising power at its maximum yields the greatest rewards. But this is short-term thinking, assuming you'll be powerful forever, or cynical if you realise you won't be but do it anyway and let later generations deal with the fallout.
  • A rebuttal of Nozick's Entitlement Theory - fruits of labour
    Work gobbled up all my free time for the immediate future. Hope to get back to this at some point... :-(
  • A rebuttal of Nozick's Entitlement Theory - fruits of labour
    But it does if employers act as you state here: (I like sushi)

    Let's pretend that because it sometimes happens it isn't an issue? Really?

    The market doesn't operate that way. There's no price mechanism through which such information is communicated, so even if you wanted to, you can't.
  • A rebuttal of Nozick's Entitlement Theory - fruits of labour
    "haha, I'll keep acting unethically and reap the benefits of unethical behaviour".

    Thank you for your irrelevant opinion.
  • A rebuttal of Nozick's Entitlement Theory - fruits of labour
    That you didn't like how it worked in practice doesn't resolve the underlying issue that disenfranchised people are confronted with systemic barriers. Come up with something better, then.
  • A rebuttal of Nozick's Entitlement Theory - fruits of labour
    By Nozick? I have read Chapters 1 & 2 so I am unsure why you are suggesting these standards are largely unexamined? Also in Chapter 10 (which I have also read fairly thoroughly) how and why society adopt certain standards are looked at here too. For instance, in the three utopian positions: 'Imperialistic,' 'Missionary,' and 'Existential'.

    Maybe Chapter 3 does not cover what bothers you thoroughly enough. It would be helpful if you can pinpoint where in the Chapter he falls short. I will read that Chapter now. I have been meaning to get back to the book and read every page so this is a good enough excuse to do so now :) Thanks (I like sushi)

    No, I meant not examined by larger society. Nozick obviously did examine it, although, as you've surmised, I think he misses the point. But some ideas are so entrenched in society (deregulation is better for markets, privatisation is better, companies are more efficient than governments, etc.) that they aren't really examined anymore, even when there's plenty of historical data disproving a lot of these assumptions. I think the automatic reflex assuming that what we earn through labour is morally ours is such an unexamined idea. Which is weird, because there's plenty of criticism of Nozick's idea, but it doesn't really get the attention it deserves outside of philosophy.

    Historically, criticisms of Nozick's idea can be categorised as follows: it fails to account for historical injustices, the social nature of labor, the complexities of inequalities, and the moral dimensions of desert and justice.

    By the way, I read Nozick 20 years ago along with Rawls' "A Theory of Justice". So when you've read it, you'll be more knowledgeable than me for sure.

    What I think I'm trying to add to existing criticisms is the following:

    By framing "worth" as central to justice in labor and distribution, I emphasize the importance of evaluating individuals' contributions beyond mere economic output. This perspective can be seen as an re-emphasis of theAristotlean idea of justice as giving people what they deserve based on their virtues or contributions, especially when we connect it to modern concerns about meritocracy, inequality, and ethical labor practices.

    Positioning need as a central ethical criterion in hiring and labor contracts adds a layer of moral responsibility that goes beyond traditional economic considerations. This can be seen as a contribution if it's used to advocate for specific policies or business practices that prioritize those most in need, though of course Marx and Rawls both addressed need as well.

    Combining just production with worth and need might create an ethical framework that could be used to critique current market practices. And I think, in a sense, I'm still stuck at the individual level here, but including the social justice aspect @T Clark mentioned might enrich it further. I've been thinking about that since his post and I'm going to have a stab at it.

    Social Justice and Worth
    To address this, we could broaden the concept of worth to include potential worth. This means recognizing that individuals from socially disadvantaged groups may not have had the same opportunities to demonstrate their worth due to systemic barriers. Therefore, affirmative action or equal opportunity initiatives would be justified to help these individuals reach their potential. This adjustment reframes worth not just as a reflection of past contributions but as a recognition of untapped potential, especially in underrepresented groups.

    Social Justice and Need
    Need can be expanded to include contextual need—the recognition that social disadvantages often create long-term, less visible needs. For example, a person from a marginalized community may not appear to be in acute need but may suffer from a lack of educational opportunities, social capital, or access to networks. Addressing these deeper, systemic needs through targeted interventions (such as scholarships, mentorship programs, or community-based initiatives) ensures that the framework is sensitive to the hidden dimensions of social disadvantage.

    Social Justice and Just Production
    Just production can be expanded to include inclusive production, which explicitly aims to involve and empower socially disadvantaged groups. This could mean adopting hiring practices that prioritize diversity, ensuring that supply chains are free from discrimination, and promoting workplace cultures that are inclusive and supportive of all employees. Inclusive production ensures that social justice is embedded in the very process of creating goods and services, not just in their distribution.

    Or something like that, but this deviates from the original point of the OP: a rebuttal of the entitlement theory.
  • Israel killing civilians in Gaza and the West Bank
    The Muslims didn't agree with it. Not the people; the muslims - their political leadership. (BitconnectCarlos)

    Their religious persuasion is irrelevant. They were forced to accept a division of land, after decades of colonisation, without any say in how this should be done. And it wasn't as if Jews were unwelcome before that.

    And that's what it comes down to. Apparently for some people, Jewish self-determination is dependent on getting permission from the Muslims. Jews want to rule over themselves? Better get the Muslims to sign off on that. Specifically the Mufti of Jerusalem at that time, Amin al-Husseini, who supported the dhimmi system and was a friend of Hitler's. The Jews need his permission. (BitconnectCarlos)

    No, what it comes down to is that you cannot exercise self-determination by displacing other natives (and it's not as if Jews were natives themselves, given the diaspora). That's the issue that was and is resisted, and it has nothing to do with being Jewish.

    But please pretend to be the victim when you steal someone else's land.
  • A rebuttal of Nozick's Entitlement Theory - fruits of labour
    There was no absolute moral factor invoked. (jgill)

    True, but Nozick does invoke morals, and they underpin beliefs held in wider society that to most are a self-evident truth. Nobody questions whether they have a right to their income; it is sufficient that they did the work. There's a contract, after all. Etc., etc. "The standards society adopted" are largely unexamined. It is a house of cards of assumptions, and I'm challenging a specific one.
  • A rebuttal of Nozick's Entitlement Theory - fruits of labour
    Question: can there be a right that is not either directly or indirectly a moral right? It seems to me that all rights are at least in some part moral. (tim wood)

    Licenses to sell drugs, financial products, or therapy seem to be rights that are independent of moral rights; they instead reflect agreement on how things should be regulated in order to protect higher norms (consumer protection, for instance).

    And you appear to hold them as somehow an optional add-on. As if morality stored in that tent over there, and maybe we go get some and maybe we don't and just pass on by. (tim wood)

    What part of what I wrote makes you say this when I'm only discussing one specific presumed moral right?

    My argument, then, is that I have a moral right to possession of what I earn, both for the immediate good, to me and mine, and also for the greater good of all enjoying the same right. This not an absolute claim to all and everything I earn, because there are conflicting obligations we all have as members of communities that also have to be met. (tim wood)

    I don't think you have such a moral right, for the reasons I set out. All things being equal, except that you are not homeless and starving, should you get the job, or should a homeless person? The market doesn't care, which is why you have homeless and starving people in all market-driven economies, even when there's more than enough wealth available for this not to be the case.

    Where is morality found when societies accept ludicrous riches and abject poverty at the same time?
  • A rebuttal of Nozick's Entitlement Theory - fruits of labour
    Glad you liked it! To be honest, it's been a while since I've really put an effort into something other than current politics. Actual philosophy takes time.

    How much of the answer to this question depends on the existence of income as money rather than what is produced - material or agricultural? Money separates and abstracts the labor from the product. Of course, a sharecropper or slave gives some or all of what is produced to the landowner and even farmers who are landowners themselves pay taxes. (T Clark)

    I don't think it is dependent on it. Cash is what we know, so it's the easiest to imagine. That's also why, for brevity's sake, I moved from "fruits of labour" to income, but I'm talking about any "fruit" really.

    Although there is need to address the fact that CEOs often have an income that is hundreds of times what their workers make, there is also the issue of risk. The willingness to take on risk has value that has to be compensated. Beyond willingness, there also has to be ability, which often depends more on wealth than income. (T Clark)

    Fair point. I assume you mean shareholders then, since the CEO usually isn't invested until after his golden parachute and bonuses. :wink: I think risk is secondary though. In an ideal, ethical world, a shareholder will only invest in ethical business. When that is clear, we can value risk.

    On a tangential but related point, the perpetual gains of a shareholder are another process I'm not ethically comfortable with. Capital markets primarily put borrowers and lenders together, and while the interest is "variable", shareholders don't take additional risk compared to a bank providing a private loan. In fact, I think their risk is lower, because selling shares is a whole lot easier than selling private loans to a third party, giving shareholders additional options to reduce losses. But in return, the shareholder gets perpetual rights to dividends, and the value of his "property" increases as value is added through the "fruits of labour" of employees. It feels like double-dipping if you compare it with a regular loan, which just gives a right to the notional amount and whatever interest was agreed for a specified time frame (or, if perpetual, with an option to repay the notional amount). Put differently, shareholder returns are much higher and persist for longer than they should if we consider that the basic function of capital is just another type of loan. But I digress; just take away that I'm not a fan of the perpetual nature of companies.

    I see this as a fraught issue. It makes sense that more trained, experienced, and competent people should be paid more than people who are less so, but so called "meritocracy" without more or less rigid job definition will generally lead to socially disadvantaged people, e.g. racial minorities and women, being paid less. Beyond that, it leads directly to those from historically privileged groups becoming even more privileged, but I guess that is outside the scope of this discussion. (T Clark)

    A good addition! I'm not sure how this could be added, because it's a further moral argument for why some people have no moral claim to their income. I think it's suggestive of social justice, and the way I set up my criticism, it is at this point blind to such considerations.

    This makes sense from an ethical perspective. As I see it, a good society should ensure a decent life to all it's members willing to participate. I think that would be hard to implement. I guess minimum wages are an attempt to get at the issue. Beyond that, the only practical solution I can think of is a universal basic income, which can separate work from income completely. I guess that's also outside the scope of this discussion. (T Clark)

    It's not out of scope, but yes, I haven't gotten as far as thinking about actual policy implementations. I merely want to waylay a foundational point of Nozick's entitlement theory, which is widely shared in broader society as a given. Disproving it should open up different avenues of discussion instead of acceptance of a status quo that doesn't give us moral outcomes.

    The problem is enforcement. There have to be legal rights to enforce moral ones. (T Clark)

    Don't be so practical man! First things first. :yum:
  • A rebuttal of Nozick's Entitlement Theory - fruits of labour
    Your remarks remain vague, so perhaps I'm simply not understanding you properly.

    Nozick's idea is a basic and usually unchallenged assumption for many economic pundits who argue in favour of specific policies, which are often expressed through law. If the assumption is incorrect, many of those specific policies lack a rational basis. This changes the discussion, because it's mostly argued along the lines of collectivism vs. individualism, which resemble political intuitions more than arguments. At least, I've not really seen collectivists and individualists meet in some kind of Aufhebung. As far as I know, this specific line of criticism has not been previously expressed.
  • A rebuttal of Nozick's Entitlement Theory - fruits of labour
    Different issue, I think. Let's assume both had a right to their income and this is the end result; then this is not a matter of a moral right to income but perhaps a question of solidarity.
  • A rebuttal of Nozick's Entitlement Theory - fruits of labour
    And yet many think people have a moral right to all their income and wealth, and some even go so far as to say taxation is theft. But if there is no such right, what does that say about tax, redistribution, compensation, indemnities, and the many other rights people currently hold inviolable?

    Again, I'm not offering a practical moral theory here; I'm arguing that some basic assumptions about certain legal rights, shared widely also outside of libertarian thought, are wrong.
  • A rebuttal of Nozick's Entitlement Theory - fruits of labour
    I've considered it, but probably not. At least, not merely because you produced it; but if you produce ethically and nobody has a greater need (a starving child, perhaps?), then yes.

    Just to point out, I'm not necessarily aiming for a functional and practical rule for how to make these choices, merely to waylay the notion of moral entitlement to income simply because you did the work (a moral right would require additional justification).

    I'm suggesting laws can change. The larger political point probably is that we have a lot more political room to decide what we should do with income and other means of wealth production.
  • Ukraine Crisis
    I think all social media is a blight on information sharing. Bullshit certainty exceeds truth and thoughtful doubt by a factor of 1,000. As far as I'm concerned, everybody should be deplatformed; Facebook, Instagram, X, the whole lot should be burned to the ground.
  • What is the most uninteresting philosopher/philosophy?
    I had expected to find Rawls in there too.
  • Ukraine Crisis
    Jesus. Which is why I'm never on Twitter/X/Elon's propaganda toy.
  • Climate change denial
    A claim nobody has ever made.

    Judd said the timeline should serve as a wake-up call. Even under the worst-case scenarios, human-caused warming will not push the Earth beyond the bounds of habitability. But it will create conditions unlike anything seen in the 300,000 years our species has existed — conditions that could wreak havoc through ecosystems and communities.

    We're talking about mass displacement due to flooding and droughts, food shortages due to failed crops, more violent weather, supply chain disruptions, fresh water shortages, an increased likelihood of wars over scarce resources, etc.
  • The (possible) Dangers of AI Technology
    As an introduction I have this:

    AI systems must adhere to the following principles:

    Respect for Human Rights and Dignity
    AI systems must respect the fundamental rights as enshrined in the EU Charter of Fundamental Rights, including privacy, non-discrimination, freedom of expression, and access to justice.

    Fairness and Non-discrimination
    AI systems must not lead to discriminatory outcomes. Measures should be in place to prevent, monitor, and mitigate bias in AI models.

    Transparency and Explainability
    AI systems must be designed and deployed with a high level of transparency, providing clear information about how they operate and their decision-making processes. Users should understand how AI influences outcomes that affect them.

    Accountability
    Ohpen is accountable for the AI systems it designs, deploys, or manages. Clear governance structures should be in place to assign responsibility for compliance with this Code and the EU AI Act.

    Safety and Risk Management
    AI systems must be designed with the safety of individuals and society as a priority. This includes risk assessment and mitigation strategies to prevent harmful impacts or unintended consequences.

    But translating this into conduct is another matter. I developed an AI self-assessment form in JIRA so that at least people can figure out whether what they want to use, implement, or develop is an unacceptable (prohibited), high, or limited risk. For high risk there are quite a few things to adhere to, which I set out, but that's not the extent of the relevant "conduct" you want a code of conduct to cover. The only useful thing I've found so far is a description of a method of testing to avoid bias and discrimination.
  • The (possible) Dangers of AI Technology
    Yes, it has come about due to the EU AI Act, which recommends writing a code of conduct for developers and "users" (or providers and deployers). We developed our first AI tool, estimating the resolution time of tickets based on type, which was a limited-risk tool (no personal data, no decision making).
  • The (possible) Dangers of AI Technology
    How to develop and use AI systems, what you shouldn't do, what you ought to do, etc.

    EDIT: the "how" obviously doesn't pertain to the technical part but to what types of AI systems are allowed, what needs to be in place to ensure the end result would be ethical, that sort of "how".