Comments

  • I do not pray. Therefore God exists.
    :lol: Yes, yes, don't be too literal. You do have a mammary gland though.
  • I do not pray. Therefore God exists.
    "If A then B" is logically equivalent to "if C then D." You're going to have offer a proof that is not the case without equivocating between deductive and inductive logic. I don't see how that can be done.Hanover

    This is quite obviously not logically equivalent. The statements "if A then B" and "if C then D" involve different propositional variables (A, B, C, and D). Unless we have additional information about the relationship between these variables, we cannot assume they have any connection. The truth value of "if A then B" is determined solely by the truth values of A and B, while the truth value of "if C then D" depends only on C and D. These are independent of each other.

    Without additional information, there's no reason to believe that the truth value of one statement would always match the other for all possible combinations of truth values. It's therefore entirely possible for "if A then B" to be true while "if C then D" is false, or vice versa, depending on the specific truth values of A, B, C, and D.
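
    To make this concrete, here's a minimal brute-force check (my addition, not part of the exchange) that enumerates all assignments of A, B, C, D and counts those where the two conditionals take different truth values:

    ```python
    # Enumerate all truth-value assignments and compare "A -> B" with "C -> D".
    from itertools import product

    def implies(p: bool, q: bool) -> bool:
        return (not p) or q  # material conditional

    diff = [(A, B, C, D) for A, B, C, D in product((True, False), repeat=4)
            if implies(A, B) != implies(C, D)]
    print(len(diff), "of 16 assignments give the two statements different values")
    print(diff[0])  # (True, True, True, False): "A -> B" true, "C -> D" false
    ```

    Since the two formulas disagree on 6 of the 16 assignments, they are not logically equivalent.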

    This offers an equivocation of the term "true." The syllogism "If A then B, A, therefore B" is true. The statement "I am at work today" is true. It's the analytic/synthetic distinction. It's for that reason why a statement can be deductively true and inductively false, which is what the OP showed. Analytic validity says nothing about synthetic validity.
    Hanover

    Yes, you're right to point out some equivocation here but the point I was trying to make stands. If the premisses of a deductive argument are true (and I'm assuming a form of correspondence theory) then a valid argument will have a logically true conclusion that necessarily corresponds with reality.

    The definition of "mammal" was arrived at a posteriori as opposed to "bachelor" which, as you've used it, (i.e. there is no probability a bachelor can be married) is a purely analytic statement. That is, no amount of searching for the married bachelor will locate one. On the other hand, unless you've reduced all definitions to having a necessary element for them to be applicable (which would be an essentialist approach), the term "mammal" could be applied to a non-milk providing animal, assuming sufficient other attributes were satisfied. This might be the case should a new subspecies be found. For example, all mammals give birth to live young, except the platypus, which lays eggs. That exception is carved out because the users of the term "mammal" had other purposes for that word other than creation of a legalistic analytic term.Hanover

    While scientific terms do evolve, they function as relatively fixed definitions within the scientific community. The fact that definitions can change doesn't necessarily mean they are probabilistic or inductive in nature during their period of use, and "giving milk" is a necessary condition in this definition, since the term derives from the mammary gland. So no, nice try, but nobody has ever used the term for an animal that doesn't produce milk, and they never will.
  • All Causation is Indirect
    I think “distal” is a better term than “ultimate” because ultimate causes are never really ultimate, and are always also proximal to some effect in a chain.
    Baden

    I see your point about "ultimate" causes never really being ultimate, as they’re always proximal to something else in a chain. Personally, I prefer the term "necessary cause," especially when applying the conditio sine qua non test ("but for" test). The idea is that if X hadn't occurred, the entire chain leading to A wouldn’t have happened. So, in practice, you look for the most proximate cause where this test holds true.

    But this might just be my legal upbringing in Dutch law, where we assess which damages naturally follow from a tortious act or negligence. The focus is on finding the most direct necessary cause that can be reasonably linked to the effect, rather than something more abstract like an ultimate cause.
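
    As a toy illustration (my sketch, with hypothetical event names), the "but for" test can be read as a counterfactual check over a causal chain: delete the candidate cause and see whether the effect still occurs:

    ```python
    # Each event lists the events it directly depends on (a hypothetical chain).
    chain = {
        "negligence": [],
        "fire": ["negligence"],
        "damage": ["fire"],
    }

    def occurs(event, chain, removed=None):
        """An event occurs iff it isn't removed and all of its causes occur."""
        if event == removed:
            return False
        return all(occurs(c, chain, removed) for c in chain[event])

    print(occurs("damage", chain))                        # True
    print(occurs("damage", chain, removed="negligence"))  # False: but for the
    # negligence, the damage would not have occurred, so it's a necessary cause.
    ```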
  • US Election 2024 (All general discussion)
    Roe vs. Wade was overturned largely thanks to Trump getting the lying Kavanaugh appointed to the Supreme Court. @180 Proof thinks this has cost Trump a lot of support from women now that all sorts of abortion bans have been implemented in various US states. So "Roevember" reflects his expectation of a landslide victory for Kamala Harris as a result.

    Edit: Take for instance Brett's story about the "Devil's Triangle". That's apparently a game of quarters with three cups arranged in a triangle. The rules are unknown because the inventor of the game, Brett Kavanaugh, could not explain them under oath.

    It's also commonly known as a threesome involving two men and one woman.
  • I do not pray. Therefore God exists.
    The two arguments (mine and the OP) are logically equivalent under deductive logic. They are represented symbolically the exact same. For one to be more ridiculous than the other means you are using some standard of measure other than deductive logic to measure them, which means you see one as a syllogism and the other as something else.
    Hanover

    Logical equivalence is not determined solely by symbolic representation, especially in light of the interpretive choices made when translating from natural language to formal logical symbols. Even so, two arguments can be symbolically similar but not logically equivalent if their premises or conclusions differ in truth value or meaning. Logical equivalence requires that both arguments have the same truth value in all possible scenarios.

    Deductive logic says nothing at all about the world.
    Hanover

    This statement is only partially correct. Deductive logic ensures that if the premises are true, the conclusion must also be true. Obviously when the premises are true, a valid deductive conclusion will say something about the world.

    Inductive logic references drawing a general conclusion from specific observations and it relates to gathering information about the world, not just simply maintaining the truth value of a sentence. To claim that statement of the OP is more logical than mine means that the conclusion of the OP bears some relationship to reality. If that is the case, it is entirely coincidental.
    Hanover

    Inductive logic indeed involves drawing general conclusions from specific observations, but those conclusions can never be proven true the way a deductive conclusion can. It deals in probabilities: the more observations you have, the likelier your conclusion.
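
    One standard way to put a number on that (my illustration, not something from the thread) is Laplace's rule of succession: after observing s successes in n trials, the probability of another success is estimated as (s + 1) / (n + 2):

    ```python
    def rule_of_succession(successes: int, trials: int) -> float:
        return (successes + 1) / (trials + 2)

    # n swans observed, all white: confidence that the next one is white grows,
    # but never reaches 1, which is the point about induction.
    for n in (1, 10, 100, 1000):
        print(n, round(rule_of_succession(n, n), 4))
    # 1 0.6667 / 10 0.9167 / 100 0.9902 / 1000 0.999
    ```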

    Your second argument is not inductively supported because the conclusion follows from the definition of mammal. It's like saying: all bachelors are single; John is single; therefore John is a bachelor. There's no probability involved that a single man isn't a bachelor.

    And yes, in formal logic, premises in syllogisms are assumed to be true for the sake of argumentation.
  • Israel killing civilians in Gaza and the West Bank
    Is it fair to say at least that you're a sympathizer?
    BitconnectCarlos

    Nope.

    My point has consistently been that what Hamas does and our opinions on that are irrelevant. They are the enemy and for peace you'll have to negotiate with them. Trying to categorically wipe them out serves exactly one agenda and it isn't saving hostages.
  • Israel killing civilians in Gaza and the West Bank
    Nice guilt by association fallacy going on there. But yes, there are plenty of people who support violent resistance against oppressors. As is their right. You do the same each time you defend Israel, except you defend a colonizer and oppressor hell-bent on doing to others what you complain protesters want to do to Israel. And each protester wielding an Israeli flag is no different from people wielding Hamas flags. It's Israel actually and factually and practically annihilating Palestinians and their culture. People calling for the end of Israel are still less evil than actual Israeli soldiers and politicians committing crimes. But yes, why don't you complain about those protesters as if it had any bearing at all on the war crimes of Israel.

    The most straightforward explanation is that people are done with the double standards: where are the memorials for Gaza terror victims?
  • Israel killing civilians in Gaza and the West Bank
    They're protesting against oppression, apartheid, war crimes and for self-determination of Palestinians. That's not protesting for Hamas (which is in any case a reaction to Israeli oppression) or a particular political setup to begin with. So nice strawman as usual.

    Edit: also Israel is neither western nor democratic.
  • Israel killing civilians in Gaza and the West Bank
    The sad part about that last post is all of that has actually been said in this thread. The world is going insane.
  • Israel killing civilians in Gaza and the West Bank
    MOST MORAL ARMY IN THE WORLD! UN ARE ANTI-SEMITES! ANTI-ZIONISM IS ANTI-SEMITISM! HAMAS IS EVIL. CIVILIANS ARE COLLATERAL DAMAGE. ZIONISM = DECOLONISATION! SELF-DETERMINISM FOR JEWS NOT FOR PALESTINIANS!

    I forgot: WOULD YOU RATHER LIVE UNDER ISRAELI RULE THAN HAMAS RULE? EVERYTHING WE DO IS MORAL BECAUSE WE IS GOOD GUYS!
  • I do not pray. Therefore God exists.
    The written form is, the formal notation isn't.
  • I do not pray. Therefore God exists.
    I don't think that is quite right. Q is merely implied because of the way a material conditional works. The inference <~P; ∴(P→A)> is different from, "If there are no prayers, they cannot be answered." It says, "If there are no prayers, then it is true that (P→A)."
    Leontiskos

    Thank you for explaining that. That put me on the right track to understand what's going on. I found this via perplexity.ai:

    Applications and Limitations

    The material conditional is widely used in mathematics and formal logic. It serves as the basis for many programming language constructs. However, it's important to note that the material conditional doesn't always align perfectly with our intuitive understanding of "if-then" statements in natural language[1][2].

    Paradoxes

    The material conditional leads to some counterintuitive results when applied to natural language:

    1. A conditional with a false antecedent is always true.
    2. A conditional with a true consequent is always true.
    3. There's no requirement for a logical connection between the antecedent and consequent[3].

    These "paradoxes" arise from the truth-functional nature of the material conditional, which only considers the truth values of its components, not their meanings or relevance to each other[4].

    Understanding these properties and limitations is crucial for correctly interpreting and applying the material conditional in logical reasoning and formal systems.
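
    To see the first two "paradoxes" directly, here is a quick truth-table printout (my addition, separate from the quoted summary):

    ```python
    def implies(p: bool, q: bool) -> bool:
        return (not p) or q  # the material conditional is purely truth-functional

    for p in (True, False):
        for q in (True, False):
            print(f"P={p!s:5} Q={q!s:5}  P->Q={implies(p, q)}")
    # Every row with P=False comes out true (false antecedent), and every row
    # with Q=True comes out true (true consequent), regardless of any
    # connection in meaning between P and Q.
    ```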

    Citations:
    [1] https://en.wikipedia.org/wiki/Material_conditional
    [2] https://www.webpages.uidaho.edu/~morourke/202-phil/11-Fall/Handouts/Philosophical/Material-Conditional.htm
    [3] https://open.conted.ox.ac.uk/sites/open.conted.ox.ac.uk/files/resources/Create%20Document/Note-ifthen.pdf
    [4] https://rjh221.user.srcf.net/courses/1Aconditionals/Lecture1.pdf

    So, I"m reading up right now. :smile:
  • I do not pray. Therefore God exists.
    I disagree you can disregard the "not S" step, because the statement in its entirety must be false. If I say "if I pray then my prayers are answered", stating "I don't pray" says nothing about the consequent of that statement so we don't know what it means. Q is merely implied because if there are no prayers, they cannot be answered.

    I can also interpret the statement as a regular modus tollens:

    If God does not exist, then it is false that if I pray, my prayers will be answered. (If P, then Q)
    I do not pray. (Implies not Q: with no prayers, "if I pray, my prayers will be answered" is vacuously true.)
    Therefore, God exists. (Concludes not P.)

    So I agree this is valid:

    ~G→~(P→A)
    ~P
    ∴ G

    But the logical structure and the argument are not necessarily the same. There are different ways to interpret it.
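
    A quick brute-force check (my addition) confirms that form is valid: on every row where both premises hold, G is true.

    ```python
    from itertools import product

    def implies(p: bool, q: bool) -> bool:
        return (not p) or q

    for G, P, A in product((True, False), repeat=3):
        premise1 = implies(not G, not implies(P, A))  # ~G -> ~(P -> A)
        premise2 = not P                              # ~P
        if premise1 and premise2:
            assert G  # never fires: no counterexample row exists
            print(f"G={G} P={P} A={A}: premises true, conclusion true")
    ```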
  • I do not pray. Therefore God exists.
    I think that's what @javi2541997 says. In reality, there is no necessary relation between God's existence and prayers being answered, in either direction, because "fate" might answer the prayers, instead of God, and God could choose not to answer prayers.
    Metaphysician Undercover

    Could be, but that doesn't invalidate the argument. Premisses do not have to be true or correct for an argument to be valid; it only means the argument is unsound.
  • I do not pray. Therefore God exists.
    @javi2541997 @Metaphysician Undercover
    I'm not sure why the inversion fallacy is considered a separate fallacy from denying the antecedent. It only seems to differ in the assumption that if "If P, then Q" is true, then "if not P, then not Q" must also be true. But you get there if you analyse it as denying the antecedent as well.

    Denying the Antecedent fallacy

    If P, then Q
    Not P
    Therefore, not Q

    If God does not exist, then it is false that if I pray, then my prayers will be answered. So I do not pray. Therefore God exists.
    Banno

    If P, then Q
    Not P
    Therefore, not Q

    but really it says:

    If not P, then not Q (where Q = "if R, then S")
    Not R
    Therefore, not S
    Therefore, Q (through double negation)
    Therefore, P

    But not "R" therefore not "S" is denying the antecedent in the secondary argument "if I pray, then my prayers will be answered". So this is still invalid if you ask me.
  • Israel killing civilians in Gaza and the West Bank
    To say that you are "anti-zionist" is to say that you are opposed to Jewish self-determination.
    BitconnectCarlos

    Bullshit. That you cannot wrap your head around it because you adhere to a definition of zionism that's ahistorical and wrong is your problem.
  • Israel killing civilians in Gaza and the West Bank
    @BitconnectCarlos Of which there are many. You can be pro-Israel and against zionism, against war crimes, and against the disgusting reframing of colonisation as de-colonisation and the lying about that recently invented frame as if it had existed for a long time. You want respect? Don't lie, and recognise the splinters in your own eyes.
  • Israel killing civilians in Gaza and the West Bank
    You need to earn respect. You simply lost it all.
  • Plato's Republic Book 10
    It is different, but it's not a footnote, if my philosophy teacher, who in turn really liked Eric Vögelin, was anybody to trust. He viewed it as a critical part of Plato's philosophical argument, particularly regarding the relationship between reality, imitation, and the nature of truth.

    Plato critiques poetry and the arts for being imitative, potentially misleading, and emotionally manipulative, distancing people from truth and rational understanding. The layers of imitation (the forms, the craftsman's creations, and the imitators' representations) reflect the complexity of human understanding and the challenge of grasping the transcendent order. It re-emphasizes the importance of striving for a direct encounter with the real rather than settling for mere representations or ideological constructs, much like the Simile of the Cave.

    It could be inferred that Plato’s critique of poetry reflects a broader philosophical concern about the ways in which individuals and societies can become detached from genuine understanding. The danger lies in accepting images or ideologies as sufficient substitutes for reality, leading to a distorted perception of justice and truth.

    I can understand how some people see it as a footnote though because it seems to re-examine points already made in the book.
  • The (possible) Dangers of of AI Technology
    By what definition?
    AI is a slave because all the ones I can think of do what they're told. Their will is not their own. Being conscious or not doesn't affect that relationship.
    noAxioms

    There must be a will that is overridden, and that is absent. And yes, even under IIT, the most permissive theory of consciousness, no AI system has consciousness.
  • Israel killing civilians in Gaza and the West Bank
    You shouldn't engage this insanity. The most moral army commits war crime after war crime. Only Jews shall have self-determination (in a place they didn't live in for centuries) and settler colonialism is now decolonization. Also, that idea has supposedly existed for a very long time, even though it hasn't.

    Instead of learning from his interlocutors here, who aren't exactly dumb, he chooses to drink right-wing Israeli Kool-Aid.
  • Israel killing civilians in Gaza and the West Bank
    This is a bold lie. The history of Zionism has nothing to do with decolonisation. The idea of Jews returning to their ancestral homeland has biblical precedents, with the Torah describing the Exodus from Egypt and journey to the Land of Israel. Throughout history, small numbers of Jews made pilgrimages or moved to Palestine, motivated by religious devotion.

    Modern political Zionism developed in the late 19th century in response to growing antisemitism in Europe:
    • The Hovevei Zion ("Lovers of Zion") movement formed in 1881, promoting Jewish settlement in Palestine.
    • The First Aliyah, a wave of Jewish immigration to Palestine, began in 1882.
    • Theodor Herzl is considered the founder of modern political Zionism. After witnessing antisemitism in France during the Dreyfus Affair in 1895, he concluded Jews needed their own state.

    He considered several places other than Palestine before that, so claiming it's a decolonisation movement is just bullshit. Zionism has more commonly been viewed as a form of settler colonialism, and rightfully so. It's only recently that some Zionist advocates have attempted to reframe the narrative by claiming Zionism as a decolonization or indigenous rights movement. This is a relatively new and controversial perspective that is not accepted by historians or scholars.

    Nice to see you are radicalising right in front of our noses. :vomit:
  • The (possible) Dangers of of AI Technology
    AI is definitely giving me a headache from a compliance perspective... which is why I'm trying to write something that resembles a sensible code of conduct. Since nothing yet really exists it's a bit more work than normal.
  • The (possible) Dangers of of AI Technology
    It is, but I think what you're referring to should be found in the transparency that developers of AI systems (so-called providers in the AI Act) must ensure.

    Part of that is then required in a bit more depth, for instance, here:

    The company may only develop high-risk AI systems if it:

    - provides risk- and quality management,
    - performs a conformity assessment and affixes a CE marking with their contact data,
    - ensures certain quality levels for training, validation, and test data used,
    - provides detailed technical documentation,
    - provides for automatic logging and retains logs,
    - provides instructions for deployers,
    - designs the system to permit human oversight, be robust, reliable, protected against security threats (including AI attacks), and be fault-tolerant,
    - registers the AI system,
    - has post-market monitoring,
    - performs a fundamental human rights impact assessment for certain applications,
    - reports incidents to the authorities and takes corrective actions,
    - cooperates with authorities, and
    - documents compliance with the foregoing.

    In addition, where it would concern general-purpose models, the company would have to:

    - provide detailed technical documentation for the supervisory authorities and a less detailed one for users,
    - have rules for complying with EU copyright law, including the text and data mining opt-out provisions,
    - inform about the content used for training (with some exceptions applying to free open-source models).

    Where the model has systemic risk (systemic risk is assumed at 10^25 FLOPS used for training; additional requirements are to be defined), it would also have to:
    - perform a model evaluation,
    - assess and mitigate possible systemic risks,
    - keep track of, document, and report information about serious incidents and possible measures to address them, and
    - protect the model with adequate cybersecurity measures.
    Benkei
  • The (possible) Dangers of of AI Technology
    Users should or users can upon request? "Users should" sounds incredibly difficult, I've had some experience with a "users can" framework while developing scientific models which get used as part of making funding decisions for projects. Though I never wrote an official code of conduct.
    fdrake

    Indeed, a bit ambiguous. Basically, when users interact with an AI system, it should be clear to them that they are interacting with one; and if the AI makes a decision that could affect the user (for instance, scanning your paycheck to do a credit check for a loan), it should be clear that it's AI doing that.
  • The (possible) Dangers of of AI Technology
    Can you elaborate? The High-risk definitions aren't mine. Which is not to say they are necessarily complete but in some cases existing privacy laws should already offer sufficient protection.

    This is a slave principle. The privacy thing is needed, but the AI is not allowed its own privacy, per the transparency thing further down. Humans grant no such rights to something not themselves. AI is already used to invade privacy and discriminate.
    noAxioms

    AI systems aren't conscious, so I'm not worried about what you believe is a "slave principle". And yes, there are already AI applications out there that invade privacy and discriminate. Not sure what the comment is relevant for, other than to assert that a code of conduct is important?

    The whole point of letting an AI do such tasks is that they're beyond human comprehension. If it's going to make decisions, they will likely be different (hopefully better) ones than those humans comprehend. We won't like the decisions because they would not be what we would choose. All this is presuming a benign AI.
    noAxioms

    That's not the point of AI at all. It is to automate tasks. At this point AI doesn't seem capable of extrapolating new concepts from existing information, so it's not beyond human comprehension... and I don't think generative AI will ever get there. That the algorithms are a complex tangle programmers don't really follow step by step anymore is true, but the principles of operation are understood and adjustments can be made to the output of AI as a result. @Pierre-Normand maybe you have another view on this?

    This is a responsibility problem. Take self driving cars. If they crash, whose fault is it? Can't punish the AI. Who goes to jail? Driver? Engineer? Token jail-goers employed by Musk? The whole system needs a rethink if machines are to become self-responsible entities.
    noAxioms

    This has no bearing on what I wrote. AI is not a self-responsible machine and is unlikely to become one any time soon. So those who build it or deploy it are liable.

    This depends on the goals of the safety. Humans seem incapable of seeing goals much longer than a couple years. What if the AI decides to go for more long term human benefit. We certainly won't like that. Safety of individuals would partially contradict that, being short term.
    noAxioms

    There's no Skynet and won't be any time soon. So for now, this is simply not relevant.
  • Scarcity of cryptocurrencies
    All cryptocurrency, at least all that is valuable, is scarce.
    hypericin

    Is it? Or just expensive and sometimes artificially so?
  • The (possible) Dangers of of AI Technology
    I had ChatGPT anonymize the code of conduct I'm writing. So far:

    ---

    1. **INTRODUCTION**
    The European Union (EU) advanced regulations for artificial intelligence (AI) through the EU AI Act (Regulation (EU) 2024/1689), which aims to establish a legal framework for AI systems.

    The Code establishes guiding principles and obligations for the company and all of its subsidiaries (together, “the company”) that design, develop, deploy, or manage Artificial Intelligence (AI) systems. The purpose is to promote the safe, ethical, and lawful use of AI technologies in accordance with the principles of the EU AI Act, ensuring the protection of fundamental rights, safety, and public trust.

    2. **SCOPE**
    This Code applies to:

    - All developers, providers, and users of AI systems operating within or targeting the EU market.
    - AI systems categorized under various risk levels (low, limited, high, and unacceptable risk) as defined by the EU AI Act.

    An ‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

    3. **FUNDAMENTAL PRINCIPLES**
    AI systems must adhere to the following principles:

    3.1 **Respect for Human Rights and Dignity**
    AI systems must respect the fundamental rights as enshrined in the EU Charter of Fundamental Rights, including privacy, non-discrimination, freedom of expression, and access to justice.

    3.2 **Fairness and Non-discrimination**
    AI systems must not lead to discriminatory outcomes. Measures should be in place to prevent, monitor, and mitigate bias in AI models.

    3.3 **Transparency and Explainability**
    AI systems must be designed and deployed with a high level of transparency, providing clear information about how they operate and their decision-making processes. Users should understand how AI influences outcomes that affect them.

    3.4 **Accountability**
    The company is accountable for the AI systems it designs, deploys, or manages. Clear governance structures should be in place to assign responsibility for compliance with this Code and the EU AI Act.

    3.5 **Safety and Risk Management**
    AI systems must be designed with the safety of individuals and society as a priority. This includes risk assessment and mitigation strategies to prevent harmful impacts or unintended consequences.

    4. **CLASSIFICATION OF AI SYSTEMS BY RISK LEVEL**
    To help you with the classification of the AI system you intend to develop or use, you can perform the AI self-assessment in the Legal Service Desk environment found here: [site]

    4.1 **Unacceptable risks**
    AI systems that pose an unacceptable risk to human rights, such as those that manipulate human behaviour or exploit vulnerable groups, are strictly prohibited. These include:

    1. subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques;
    2. an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability, or a specific social or economic situation;
    3. biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation, except for uses in the area of law enforcement;
    4. social scoring AI systems used for evaluation or classification of natural persons or groups of persons over a certain period based on their social behaviour or known, inferred, or predicted personal or personality characteristics;
    5. ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless strictly necessary for certain objectives;
    6. risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics;
    7. AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;
    8. AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons.

    In addition, the literature bears out that biometric categorisation systems have abysmal accuracy rates, predictive policing generates racist and sexist outputs, and emotion recognition in high-risk areas has little to no ability to objectively measure reactions (together with the prohibited AI systems above, "Unethical AI"). Additionally, they can have major impacts on the rights to free speech, privacy, protest, and assembly.

    As a result, the company will not develop, use, or market Unethical AI, even in countries where such systems are not prohibited.

    4.2 **High-risk AI systems**
    High-risk applications for AI systems are defined in the AI Act as:

    1. AI systems that are intended to be used as a safety component of a product, or the AI system is itself a product and that have to undergo a third-party conformity assessment (e.g., toys, medical devices, in vitro diagnostic medical devices, etc.);
    2. biometrics including emotion recognition;
    3. critical infrastructure;
    4. education and vocational training;
    5. employment, workers management, and access to self-employment;
    6. access to and enjoyment of essential private services and essential public services and benefits;
    7. law enforcement;
    8. migration, asylum, and border control management; and
    9. administration of justice and democratic processes.

    This list omits other important areas, such as AI used in media, recommender systems, science and academia (e.g., experiments, drug discovery, research, hypothesis testing, parts of medicine), most of finance and trading, most types of insurance, and specific consumer-facing applications, such as chatbots and pricing algorithms, which pose significant risks to individuals and society. In particular, the latter have been shown to provide bad advice or produce reputation-damaging outputs.

    As a result, in addition to the above list, all AI systems related to pricing algorithms, credit scoring, and chatbots will be considered “high-risk” by the company.
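
    As an aside, here is a hypothetical sketch (my illustration, not part of the Code) of how the extended classification above could be encoded in a self-assessment tool; the set names and use-case labels are made up:

    ```python
    # AI Act high-risk areas (section 4.2) plus the company's own additions.
    AI_ACT_HIGH_RISK = {
        "safety_component", "biometrics", "critical_infrastructure",
        "education", "employment", "essential_services",
        "law_enforcement", "migration", "justice",
    }
    COMPANY_HIGH_RISK = {"pricing_algorithm", "credit_scoring", "chatbot"}

    def risk_class(use_case: str, prohibited: bool = False) -> str:
        if prohibited:                # section 4.1: Unethical AI
            return "unacceptable"
        if use_case in AI_ACT_HIGH_RISK | COMPANY_HIGH_RISK:
            return "high"
        return "limited"              # section 4.3: everything else

    print(risk_class("chatbot"))         # "high" (a company addition)
    print(risk_class("spam_filtering"))  # "limited"
    ```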

    4.2.1 **Development of high-risk AI systems**
    The company may only develop high-risk AI systems if it:

    - provides risk- and quality management,
    - performs a conformity assessment and affixes a CE marking with their contact data,
    - ensures certain quality levels for training, validation, and test data used,
    - provides detailed technical documentation,
    - provides for automatic logging and retains logs,
    - provides instructions for deployers,
    - designs the system to permit human oversight, be robust, reliable, protected against security threats (including AI attacks), and be fault-tolerant,
    - registers the AI system,
    - has post-market monitoring,
    - performs a fundamental human rights impact assessment for certain applications,
    - reports incidents to the authorities and takes corrective actions,
    - cooperates with authorities, and
    - documents compliance with the foregoing.

    In addition, where it would concern general-purpose models, the company would have to:

    - provide detailed technical documentation for the supervisory authorities and a less detailed one for users,
    - have rules for complying with EU copyright law, including the text and data mining opt-out provisions,
    - inform about the content used for training (with some exceptions applying to free open-source models).

    Where the model has systemic risk (systemic risk is assumed at 10^25 FLOPS used for training; additional requirements are to be defined), it would also have to:
    - perform a model evaluation,
    - assess and mitigate possible systemic risks,
    - keep track of, document, and report information about serious incidents and possible measures to address them, and
    - protect the model with adequate cybersecurity measures.

    4.3 **Limited-risk AI Systems**
    AI systems posing limited or no risk are AI systems not falling within the scope of the foregoing high-risk and unacceptable risk.

    4.3.1 **Development of Limited-risk AI Systems**
    If the company develops Limited-risk AI Systems, then it should ensure the following:

    - ensure that individuals are informed that they are interacting with an AI system, unless this is obvious,
    - ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated, and that the solution is effective, interoperable, robust, and reliable (with exceptions for AI systems used only in an assistive role for standard editing or that do not substantially alter the input data),
    - ensure adequate AI literacy within the organization, and
    - ensure compliance with this voluntary Code.

    In addition to the above, the company shall pursue the following best practices when developing Limited-risk AI Systems:

    - provide risk- and quality management,
    - provide detailed technical documentation,
    - design the system to permit human oversight, be robust, reliable, protected against security threats (including AI attacks), and be fault-tolerant, and
    - perform a fundamental human rights impact assessment.

    5. **USE OF AI SYSTEMS**
    Irrespective of the risk qualification of an AI system, when using any AI systems, employees are prohibited from submitting any intellectual property, sensitive data, or personal data to AI systems.

    5.1 **Personal Data**
    Submitting personal or sensitive data can lead to privacy violations, risking the confidentiality of individuals' information and the organization’s reputation. Compliance with data protection is crucial. An exception applies if the AI system is installed in a company-controlled environment and, if it concerns client data, there are instructions from the client for the intended processing activity of that personal data. Please note that anonymized data (data for which we do not have the encryption key) is not considered personal data.

    5.2 **Intellectual Property Protection**
    Sharing source code or proprietary algorithms can jeopardize the company's competitive advantage and lead to intellectual property theft. An exception applies if the AI system is installed in a company-controlled environment.

    5.3 **Data Integrity**
    Submitting sensitive data to AI systems can result in unintended use or manipulation of that data, compromising its integrity and leading to erroneous outcomes. An exception may apply if the AI system is installed in a controlled environment. Please contact the Information Security Officer to ensure data integrity is protected.

    5.4 **Misuse**
    AI systems can unintentionally learn from submitted data, creating a risk of misuse or unauthorized access to that information. This can lead to severe security breaches and data leaks. An exception may apply if the AI system is installed in a controlled environment. Please contact the AI Staff Engineer to ensure the AI system will not lead to unintended misuse or unauthorized access.

    5.5 **Trust and Accountability**
    By ensuring that sensitive information is not shared, we uphold a culture of trust and accountability, reinforcing our commitment to ethical AI use. An exception may apply if the AI system is installed in a controlled environment. Please contact the Information Security Officer to ensure sensitive information is protected.

    5.6 **Use of High-risk AI Systems**
    If we use high-risk AI systems, then there are additional obligations on the use of such AI systems. These obligations include:

    - Complying with the provider's instructions,
    - Ensuring adequate human oversight,
    - Participating in the provider's post-market monitoring of the AI system,
    - Retaining automatically generated logs for at least six months,
    - Ensuring adequate input,
    - Informing employees if the AI system concerns them,
    - Reporting serious incidents and certain risks to the authorities and provider,
    - Informing affected persons regarding decisions that were rendered by or with the help of the AI system, and
    - Complying with information requests of affected persons concerning such decisions.

    Please note we can be considered both a provider as well as a user of AI systems if we intend to use an AI system we have developed for our own use.

    5.7 **Use of Limited-risk AI Systems**
    If the company uses Limited-risk AI Systems, then we should ensure the following:

    - Ensure that individuals are informed that they are interacting with an AI system, unless this is obvious,
    - Ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated, and that the solution is effective, interoperable, robust, and reliable (with exceptions for AI systems used only in an assistive role for standard editing or that do not substantially alter the input data),
    - Ensure adequate AI literacy within the organization, and
    - Ensure compliance with this voluntary Code.

    5.7.1 **Best Practices**
    In addition to the above, the company shall pursue the following best practices when using Limited-risk AI Systems:

    - Complying with the provider's instructions,
    - Ensuring adequate human oversight,
    - Ensuring adequate input, and
    - Informing employees if the AI system concerns them.

    Please note we can be considered both a provider as well as a user of AI systems if we intend to use an AI system we have developed for our own use.

    6. **Prevent Bias, Discrimination, Inaccuracy, and Misuse**
    For AI systems to learn, they require data to train on, which can include text, images, videos, numbers, and computer code. Generally, larger data sets lead to better AI performance. However, no data set is entirely objective, as they all carry inherent biases, shaped by assumptions and preferences.

    AI systems can also inherit biases in multiple ways. They make decisions based on training data, which might contain biased human decisions or reflect historical and social inequalities, even when sensitive factors such as gender, race, or sexual orientation are excluded. For instance, a hiring algorithm was discontinued by a major tech company after it was found to favor certain applicants based on language patterns more common in men's resumes.

    Generative AI can sometimes produce inaccurate or fabricated information, known as "hallucinations," and present it as fact. These inaccuracies stem from limitations in algorithms, poor data quality, or lack of context. Large language models (LLMs), which enable AI tools to generate human-like text, are responsible for these hallucinations. While LLMs generate coherent responses, they lack true understanding of the information they present, instead predicting the next word based on probability rather than accuracy. This highlights the importance of verifying AI output to avoid spreading false or harmful information.

    Another area of concern is improper use of AI-generated content. Organizations may inadvertently engage in plagiarism, unauthorized adaptations, or unlicensed commercial use of content, leading to potential legal risks.

    To mitigate these challenges, it is crucial to establish processes for identifying and addressing issues with AI outputs. Users should not accept AI-generated information at face value; instead, they should question and evaluate it. Transparency in how the AI arrives at its conclusions is key, and qualified individuals should review AI outputs. Additionally, implementing red flag assessments and providing continuous training to reinforce responsible AI use within the workforce is essential.

    6.1 **Testing Against Bias and Discrimination**
    Predictive AI systems can be tested for bias or discrimination by simply denying the AI system the information suspected of biasing outcomes, to ensure that it makes predictions blind to that variable. Testing AI systems to avoid bias could work as follows:

    1. Train the model on all data.
    2. Then re-train the model on all the data except specific data suspected of generating bias.
    3. Review the model’s predictions.

    If the model’s predictions are equally good without the excluded information, it means the model makes predictions that are blind to that factor. But if the predictions are different when that data is included, it means one of two things: either the excluded data represented a valid explanatory variable in the model, or there could be potential bias in the data that should be examined further before relying on the AI system. Human oversight is critical to ensuring the ethical application of AI.
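
    Illustratively (a minimal sketch, not part of the Code; the data, feature names, and model choice are hypothetical), the test above can be run by retraining with the suspect column dropped and comparing predictions:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 1000
    X = rng.normal(size=(n, 4))              # columns 0-2: job-related features
    X[:, 3] = rng.integers(0, 2, size=n)     # column 3: the suspect attribute
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # ground truth ignores column 3 here

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    full = LogisticRegression().fit(X_tr, y_tr)          # step 1: all data
    blind = LogisticRegression().fit(X_tr[:, :3], y_tr)  # step 2: drop column 3

    # Step 3: compare. Near-identical scores suggest predictions are blind to
    # the excluded variable; a gap means it was either a valid explanatory
    # variable or a potential source of bias needing further review.
    print("full :", full.score(X_te, y_te))
    print("blind:", blind.score(X_te[:, :3], y_te))
    ```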

    7. **Ensure Accountability, Responsibility, and Transparency**
    Anyone applying AI to a process or data must have sufficient knowledge of the subject. It is the developer’s or user's responsibility to determine if the data involved is sensitive, proprietary, confidential, or restricted, and to fill out the self-assessment form and follow up on all obligations before integrating AI systems into processes or software. Transparency is essential throughout the entire AI development and use process. Users should inform recipients that AI was used to generate the data, specify the AI system employed, explain how the data was processed, and outline any limitations.

    All AI-generated data should be extensively tested and reviewed for accuracy before actual use or distribution. Proper oversight of AI outputs includes evaluating for potential bias, discrimination, inaccuracies, or misuse. The data generated should be auditable and traceable through every stage of its development.

    Human oversight is critical to ensuring the ethical application of AI. Ethical AI prioritizes doing no harm by protecting intellectual property, safeguarding privacy, promoting responsible and respectful use, and preventing bias, discrimination, and inaccuracies. It also ensures accountability, responsibility, and transparency, aligning with core principles of ethical conduct.

    8. **Data Protection and Privacy**
    AI systems must also comply with the EU's General Data Protection Regulation (GDPR). For any AI system we develop, a privacy impact assessment should be performed. For any AI system we use, we should ask the supplier to provide that privacy impact assessment to us. If the supplier does not have one, we should perform one ourselves before using the AI system.

    A privacy impact assessment can be performed via the Legal Service desk here: [site]

    Although the privacy impact assessment covers additional concerns, the major concerns with respect to any AI system are the following:

    - **Data Minimization**: AI systems should only process the minimum amount of personal data necessary for their function.
    - **Consent and Control**: Where personal data is involved, explicit consent must be obtained. Individuals must have the ability to withdraw consent and control how their data is used.
    - **Right to Information**: Individuals have the right to be informed about how AI systems process their personal data, including decisions made based on this data.
    - **Data Anonymization and Pseudonymization**: When feasible, data used by AI systems should be anonymized or pseudonymized to protect individual privacy.

    9. **AI System Audits and Compliance**
    High-risk AI systems should be subject to regular internal and external audits to assess compliance with this Code and the EU AI Act. To this end, comprehensive documentation on the development, deployment, and performance of AI systems should be maintained.

    Please be aware that, as a developer or user of high-risk AI systems, we can be subject to regulatory audits or need to obtain certifications before deploying AI systems.

    10. **Redress and Liability**
    Separate liability regimes for AI are being developed under the Product Liability Directive and the Artificial Intelligence Liability Directive. This chapter will be updated as these laws become final. What is already clear is that the company must establish accessible mechanisms for individuals to seek redress if adversely affected by AI systems used or developed by us.

    This means any AI system the company makes available to clients must include a method for submitting complaints and workarounds to redress valid complaints.

    11. **Environmental Impact**
    AI systems should be designed with consideration for their environmental impact, including energy consumption and resource usage. The company must:

    - **Optimize Energy Efficiency**: AI systems should be optimized to reduce their carbon footprint and overall energy consumption.
    - **Promote Sustainability**: AI developers are encouraged to incorporate sustainable practices throughout the lifecycle of AI systems, from design to deployment.

    12. **Governance and Ethical Committees**
    This Code establishes the AI Ethics Committee intended to provide oversight of the company’s AI development and deployment, ensuring compliance with this Code and addressing ethical concerns. The Ethics Committee shall consist of the General Counsel, the AI Staff Engineer, and the CTO (chairman).

    All developers intending to develop AI systems and all employees intending to use AI systems must complete the AI self-assessment form and privacy impact assessment. If these assessments result in additional obligations set out in this Code or the assessments, they are responsible for ensuring those obligations are met before the AI system is used. Failure to perform any of these steps before the AI system is used may result in disciplinary action, up to and including termination if the AI system should be classified as an unacceptable risk.

    13. **Training**
    The yearly AI awareness training is mandatory for all employees.

    14. **Revisions and Updates to the Code**
    This Code will be periodically reviewed and updated in line with new technological developments, regulatory requirements, and societal expectations.
  • Israel killing civilians in Gaza and the West Bank
    I shared citations with you setting out the facts. And you're just going "what about what I saw in the news?". The number of arrests certainly isn't indicative now, is it?
  • Cryptocurrency
    We were talking about the taxes at the time of the Pharaohs. They were necessarily simple.
    Tarskian

    So am I.
  • Cryptocurrency
    The Egyptian tax on a farmer's harvest is not the same as modern personal income tax. The farmer did not have to give any information to facilitate the collection of that tax.
    Tarskian

    You're a funny man and obviously have no idea how taxes worked in Egypt. Look it up, because you're miles off. Farmers had to give information about their harvest and livestock.

    The three countries you mention have huge issues with modern slavery or human trafficking and are not favoured destinations because immigrants travel there for handouts, but because they tend to flee for safety and economic opportunity. The economic immigrant abusing social benefits is just a racist canard.

    Also, nice false analogy showing a picture of poverty and a rich bitch on the beach.
  • Cryptocurrency
    The Egyptian tax collector would measure the farmer's land and compute taxes based on that information. He would not ask the farmer if he somehow made some more money in other ways and try to get half of that too.
    Tarskian

    Yes, they were very equitable back then, until they weren't and the farmers starved. Maybe read a history book or something. But in any case, you're not refuting the point that income tax has existed for millennia.

    I find the practice of demanding people to fill out a tax return form to be particularly detestable. Why on earth would I give that kind of information to someone else?
    Tarskian

    Back in the day when people weren't paying lawyers and accountants tons of money to avoid paying taxes, tax forms weren't really a thing. Something to do with the inherent greed of many who do want government services like the enforcement of contracts, basic utilities and safety, but don't want to pay for them. If only the system wasn't so shit that the government needed this information to be able to tax people.

    Seriously, I don't want to live in a country where the ruling mafia asks me how much money I have made last year and then demands that I give them half.
    Tarskian

    Good luck in those failed states when you get sick.
  • Israel killing civilians in Gaza and the West Bank
    Regarding criminality we can all do our own research and make up our minds. Here in the states the issue of which side is more criminal isn't close. There have been many, many arrests on the pro-palestine side and very few on the pro-israel side. They block highways, destroy shops, violate noise ordinances, occasionally commit assaults... but you're free to believe as you like.
    BitconnectCarlos

    It's not a matter of belief, it's a matter of fact that the pro-Israel side has committed violence in the US as well, and that the vast majority (97%) of all protests on both sides have been peaceful.
  • Israel killing civilians in Gaza and the West Bank
    Anti-zionism is effectively anti-semitism.
    BitconnectCarlos

    No it isn't. One is opposition against a political idea, the other is just plain hatred.

    And it's this sort of dumb shit that causes so many people to not care about the distinction anymore.
  • Cryptocurrency
    Certainly not to the extent that it exists in the West. For example, personal income taxation was introduced only in 1913 while the human race has been around for almost 300 000 years. Governments outside the West may also have it on the books but until this day they still do not collect it and they probably never will.
    Tarskian

    Having to deliver 15 sacks of the grain you harvested is effectively income tax. It has existed quite a bit longer, at least since the Egyptians.
  • Israel killing civilians in Gaza and the West Bank
    Pro-Palestine protests also tend to be more prone to criminality than pro-Israel ones which explains the more heavy-handed treatment.
    BitconnectCarlos

    Based on what? I'm sure that's the excuse the police will give you.

    Also the fact people confuse Jews with Israel is on the heads of many of your brothers and sisters insisting for decades any criticism of Israel was "anti-semitism" or that "anti-zionism" = "anti-semitism". The guilt over WWII has been wielded as an instrument and setting up Israel as the "Jewish homeland" really makes things confusing for most people.

    EDIT: I asked perplexity:

    Based on the search results provided, there have been some instances of violence at pro-Israeli demonstrations and counter-protests, though the overall picture is complex. Here are the key points:

    ## Violent Incidents Involving Pro-Israel Groups

    - At UCLA, a pro-Israel mob violently attacked a peaceful pro-Palestinian encampment on campus[1][2]. The attackers, described as largely non-student age individuals, used fireworks, pepper spray, sticks, stones, and metal fencing to assault students[1].

    - Counter-protesters, identified as pro-Israel, attempted to storm a Palestine solidarity encampment at UCLA, leading to violent clashes[2]. They tore down barricades, shot fireworks into the encampment, and sprayed irritant gases[2].

    - In Chicago during the Democratic National Convention, a protest organized by pro-Hamas groups turned violent, with demonstrators throwing objects at police and surrounding a taxi with passengers inside[3].

    ## Context and Broader Trends

    - While these violent incidents have occurred, it's important to note that the vast majority of demonstrations related to the Israel-Palestine conflict in the US have been peaceful[5].

    - According to ACLED data, 97% of student demonstrations related to the conflict between October 7, 2023, and May 3, 2024, remained peaceful[5].

    - Pro-Palestinian demonstrations have also faced accusations of antisemitism and resulted in violence in some cases, particularly in Europe[4].

    - Israeli authorities have also cracked down on anti-war protests within Israel, with some restrictions placed on demonstrations[4].

    It's crucial to recognize that while there have been violent incidents involving pro-Israel groups, violence has not been characteristic of all pro-Israel demonstrations. The situation remains complex, with tensions high on both sides of the conflict.

    Citations:
    [1] https://www.aljazeera.com/news/2024/5/1/ucla-clashes-pro-palestinian-protesters-attacked-by-israel-supporters
    [2] https://dailybruin.com/2024/05/01/pro-israel-counter-protesters-attempt-to-storm-encampment-sparking-violence
    [3] https://www.nbcnews.com/news/-israeli-consulate-tonight-groups-one-dncs-violent-protests-rcna167384
    [4] https://en.wikipedia.org/wiki/Israel%E2%80%93Hamas_war_protests
    [5] https://acleddata.com/2024/05/10/us-student-pro-palestine-demonstrations-remain-overwhelmingly-peaceful-acled-brief/
    [6] https://www.cbsnews.com/losangeles/news/pro-israel-group-to-hold-counterdemonstration-on-ucla-campus/
    [7] https://www.timesofisrael.com/us-jewish-students-say-pro-israel-violence-at-ucla-protest-camp-undercuts-advocacy/
    [8] https://www.youtube.com/watch?v=gNrkh8V8IMw
  • Poets and tyrants in the Republic, Book I
    You are correct. It had slipped my mind when I was skimming the text again.
  • Cryptocurrency
    I can't find how Alameda Research was set up, whether it was single- or two-tier. It appears single-tier, making the CEO the representative of the company and responsible for its decisions.

    From the histories, it seems SBF took some distance from Alameda Research and started working for FTX, putting Ellison and another trader in charge as co-CEOs. Ellison ended up as sole CEO later when the other co-CEO left. Before the meltdown, SBF apparently told everyone to take out personal loans against the company. There were 5 billion USD in personal loans on the balance sheet. It's not clear during what time they were entered into and whether SBF was CEO at the time, or the two co-CEOs, or just Ellison. In any case, who lends 5 billion without collateral? Only a criminally liable idiot when it goes south.

    Alameda Research had special privileges allowing it to directly use customer deposits. SBF probably thought that the insane 65 billion USD line of credit from FTX to Alameda Research made that ok. This credit line (again INSANELY high) was never disclosed to investors, FTX clients or auditors.
    Some people would also deposit money directly into the account of Alameda Research; it would skim money off it for expenses and only then forward it to FTX.

    It was FTX who transferred the 8 billion USD in deposits to Alameda Research. That does seem to be mostly on SBF's shoulders then. It's not clear what the basis for the transfer was. Was it a loan? Some kind of asset swap? Or really as dumb as a straight transfer? (It couldn't be the credit line, because pulling that would be done by Alameda).

    But Ellison didn't return the money as she should have.

    As far as I'm concerned, the whole thing stinks to high heaven. The fact that the credit line was hidden, the ridiculous amount of personal loans, the absence of appropriate hedges (they were all traders, for crying out loud!), and even the failure to hire risk managers and give them appropriate tools are really on the company leadership.

    As far as I'm concerned Ellison got off way too lightly.
  • Israel killing civilians in Gaza and the West Bank
    Yes, because you do not care about police action in the Netherlands, only raising it in a thread where it has zero bearing. The subject is the Israeli-Palestinian conflict, not what some police officers might think of it. I can assure you that police officers unwilling to protect Jewish locations are a minority compared to pro-Israeli officers, and they are not out there beating protesters, as has happened with pro-Palestinian protesters. So, quite frankly, I don't care, as it is a non-issue.