Make no mistake: Democracy is on the ballot. — Wayfarer
"If A then B" is logically equivalent to "if C then D." You're going to have offer a proof that is not the case without equivocating between deductive and inductive logic. I don't see how that can be done. — Hanover
This offers an equivocation of the term "true." The syllogism "If A then B, A, therefore B" is true. The statement "I am at work today" is true. It's the analytic/synthetic distinction. That is why a statement can be deductively true and inductively false, which is what the OP showed. Analytic validity says nothing about synthetic validity. — Hanover
The definition of "mammal" was arrived at a posteriori as opposed to "bachelor" which, as you've used it, (i.e. there is no probability a bachelor can be married) is a purely analytic statement. That is, no amount of searching for the married bachelor will locate one. On the other hand, unless you've reduced all definitions to having a necessary element for them to be applicable (which would be an essentialist approach), the term "mammal" could be applied to a non-milk providing animal, assuming sufficient other attributes were satisfied. This might be the case should a new subspecies be found. For example, all mammals give birth to live young, except the platypus, which lays eggs. That exception is carved out because the users of the term "mammal" had other purposes for that word other than creation of a legalistic analytic term. — Hanover
I think “distal” is a better term than “ultimate” because ultimate causes are never really ultimate, and are always also proximal to some effect in a chain. — Baden
The two arguments (mine and the OP's) are logically equivalent under deductive logic. They are represented symbolically in exactly the same way. For one to be more ridiculous than the other means you are using some standard of measure other than deductive logic to measure them, which means you see one as a syllogism and the other as something else. — Hanover
Deductive logic says nothing at all about the world. — Hanover
Inductive logic refers to drawing a general conclusion from specific observations, and it relates to gathering information about the world, not simply maintaining the truth value of a sentence. To claim that the statement of the OP is more logical than mine means that the conclusion of the OP bears some relationship to reality. If that is the case, it is entirely coincidental. — Hanover
Is it fair to say at least that you're a sympathizer? — BitconnectCarlos
I don't think that is quite right. Q is merely implied because of the way a material conditional works. The inference <~P; ∴(P→A)> is different from, "If there are no prayers, they cannot be answered." It says, "If there are no prayers, then it is true that (P→A)." — Leontiskos
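Leontiskos's point is the vacuous truth of the material conditional: whenever the antecedent is false, the conditional as a whole comes out true. A minimal sketch of the truth table, with Python booleans standing in for the connectives (the rendering is mine, not from the thread):

```python
# Truth table for the material conditional P -> A, i.e. (not P) or A.
for P in (True, False):
    for A in (True, False):
        print(f"P={P!s:<5} A={A!s:<5} (P -> A)={(not P) or A}")

# Whenever P is False, the conditional is True regardless of A:
# this vacuous truth is what licenses <~P; therefore (P -> A)>.
```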
I think that's what ↪javi2541997 says. In reality, there is no necessary relation between God's existence and prayers being answered, in either direction, because "fate" might answer the prayers, instead of God, and God could choose not to answer prayers. — Metaphysician Undercover
If God does not exist, then it is false that if I pray, then my prayers will be answered. So I do not pray. Therefore God exists. — Banno
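Banno's argument is classically valid, which is the whole puzzle. A brute-force check over all valuations, a sketch assuming the obvious symbolization (G: God exists, P: I pray, A: my prayers are answered):

```python
# Brute-force validity check of Banno's argument over all eight valuations.
# Premise 1: ~G -> ~(P -> A); Premise 2: ~P; Conclusion: G.
from itertools import product

def implies(x, y):
    return (not x) or y  # material conditional

valid = all(
    G  # the conclusion must hold...
    for G, P, A in product((True, False), repeat=3)
    if implies(not G, not implies(P, A)) and not P  # ...in every model of the premises
)
print(valid)  # True: classically valid
```

The validity comes entirely from the second premise: once P is false, P → A is vacuously true, so the first premise can only be satisfied by making G true.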
To say that you are "anti-zionist" is to say that you are opposed to Jewish self-determination. — BitconnectCarlos
By what definition?
AI is a slave because all the ones I can think of do what they're told. Their will is not their own. Being conscious or not doesn't affect that relationship. — noAxioms
The company may only develop high-risk AI systems if it:
- provides risk- and quality management,
- performs a conformity assessment and affixes a CE marking with their contact data,
- ensures certain quality levels for training, validation, and test data used,
- provides detailed technical documentation,
- provides for automatic logging and retains logs,
- provides instructions for deployers,
- designs the system to permit human oversight, be robust, reliable, protected against security threats (including AI attacks), and be fault-tolerant,
- registers the AI system,
- has post-market monitoring,
- performs a fundamental human rights impact assessment for certain applications,
- reports incidents to the authorities and takes corrective actions,
- cooperates with authorities, and
- documents compliance with the foregoing.
In addition, where general-purpose models are concerned, the company would have to:
- provide detailed technical documentation for the supervisory authorities and a less detailed one for users,
- have rules for complying with EU copyright law, including the text and data mining opt-out provisions,
- inform about the content used for training (with some exceptions applying to free open-source models), and, where the model has systemic risk (assumed at 10^25 FLOPS used for training; additional requirements to be defined; see the rough FLOP estimate after this list):
- perform a model evaluation,
- assess and mitigate possible systemic risks,
- keep track of, document, and report information about serious incidents and possible measures to address them, and
- protect the model with adequate cybersecurity measures. — Benkei
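For scale on the 10^25 FLOPS figure Benkei cites: a back-of-envelope sketch using the common ~6 × parameters × training-tokens approximation for dense-transformer training compute (the approximation and the example model are my assumptions; the Act only fixes the threshold):

```python
# Rough scale for the Act's 10^25 FLOP systemic-risk threshold, using the
# common ~6 * parameters * training-tokens estimate for dense transformers.
# (The 6ND rule and the model below are illustrative assumptions, not
# anything stated in the Act.)
params = 70e9    # hypothetical 70B-parameter model
tokens = 15e12   # hypothetical 15T training tokens
flops = 6 * params * tokens
print(f"{flops:.2e} FLOPs, systemic risk presumed: {flops >= 1e25}")
# -> 6.30e+24 FLOPs, systemic risk presumed: False (just under the line)
```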
"Users should" or "users can upon request"? "Users should" sounds incredibly difficult. I've had some experience with a "users can" framework while developing scientific models that get used as part of making funding decisions for projects, though I never wrote an official code of conduct. — fdrake
This is a slave principle. The privacy thing is needed, but the AI is not allowed its own privacy, per the transparency thing further down. Humans grant no such rights to something not themselves. AI is already used to invade privacy and discriminate. — noAxioms
The whole point of letting an AI do such tasks is that they're beyond human comprehension. If it's going to make decisions, they will likely be different (hopefully better) ones than those humans comprehend. We won't like the decisions because they would not be what we would choose. All this is presuming a benign AI. — noAxioms
This is a responsibility problem. Take self driving cars. If they crash, whose fault is it? Can't punish the AI. Who goes to jail? Driver? Engineer? Token jail-goers employed by Musk? The whole system needs a rethink if machines are to become self-responsible entities. — noAxioms
This depends on the goals of the safety. Humans seem incapable of seeing goals much longer than a couple of years. What if the AI decides to go for more long-term human benefit? We certainly won't like that. Safety of individuals, being short-term, would partially contradict that. — noAxioms
All cryptocurrency, at least all that is valuable, is scarce. — hypericin
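Bitcoin is the canonical case of that engineered scarcity: its hard supply cap falls straight out of the halving schedule. A minimal sketch, assuming the standard protocol parameters (50 BTC initial subsidy, a halving every 210,000 blocks, amounts in integer satoshis):

```python
# Bitcoin's asymptotic supply cap, derived from its halving schedule.
SATS_PER_BTC = 100_000_000
reward = 50 * SATS_PER_BTC     # initial block subsidy, in satoshis
total = 0
while reward > 0:
    total += 210_000 * reward  # each reward era lasts 210,000 blocks
    reward //= 2               # integer halving, as in the protocol
print(total / SATS_PER_BTC)    # ~20,999,999.98 BTC: a hard scarcity bound
```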