• Kant's Notions of Space and Time


    I have found that Transcendental Idealism is interpreted very differently depending on the person, and Kant was a very poor writer (as far as I am concerned), so it is hard to tell exactly what he meant; so here is my take.

    Is the space Kant discusses in the Aesthetic the same space I experience and move through on a daily basis, and is the time he discusses in the Aesthetic the same time I experience passing by on a daily basis?

    Space and time are not something you experience (in the sense that Kant means it), as @Mww noted, but, rather, the necessary precondition of your experience. Space, for example, is not an entity which you encounter but, rather, is the pure form of your experience.

    Now, if you are a realist about space and time, then it may be that the pure forms of one's experience correspond to, or are governed by, whatever laws affect them; but Kant is not claiming anything about that, as space and time beyond the forms of one's experience would be something related to the things-in-themselves.

    For example, Einstein famously held that Kant can't have space and time as synthetic a priori, as he thought that there really are a space and a time, which we can treat empirically like entities, that are mind-independent. Is he right? I will leave that up to you and your metaphysics.

    With respect to idealism, Kantianism paved the way for Schopenhauerian metaphysics, which posits that the only space and time that exist are the pure forms of one's experience--as all there is are mental activities happening.

    The last thing I will say is that time and space as originally proposed by Kant do not hold up to Einstein's special or general relativity; Kant, and Schopenhauer, held that we can be a priori certain of how they work and that succession in space and time is universal for rational minds: both of which have been refuted by Einstein.

    Bob
  • Knowledge and induction within your self-context


    Hello Philosophim,

    It sounds like you would like to terminate the discussion, so, out of respect, I am going to refrain from responding to your points and let you have the last word.

    As always, I hope you have a wonderful day and cannot wait to hear what else you have to say on this forum!

    Bob
  • Knowledge and induction within your self-context


    Hello Philosophim,

    I made a bad judgement call, so my apologies and I will never use an appeal to authority again in our discussions.

    Absolutely no worries my friend! I think, with all due respect, that we are completely speaking past each other on this dispute about “rationality”. Likewise:

    I did not say I supported moral realism, nor was I debunking anyone who opposes moral realism. That's the straw man here Bob.

    I apologize, as you never actually said you supported moral realism; however, on my interpretation of your contentions with my view on rationality, they only work if you are claiming to be a “normative realist” at a minimum—that’s why I said that; but I should have asked first.

    To try and clear things up, I think that by “rationality” you are simply referring to something toto genere different from what I mean. I think, and correct me if I am wrong, you are referring to the act (or lack thereof) of corresponding to reality in one’s assertions—which I call (more or less) truth and not rationality. Within that interpretation of our dispute, I think you are noting that “truth” is not relative (which I agree with) but are semantically associating it with “rationality”. I am associating “rationality” with an act which is in accordance with one’s primitive epistemic standards, which inevitably are norms (and norms are either categorical or hypothetical).

    With that in mind:

    True: Smoking leads to poor health.
    Resolution: If I want to be in good health, I should not smoke.

    Wanting to be in good health and being obligated to be in good health are both norms; and I completely agree with you here as it is exactly what I said:

    if I should be healthy, then I should not smoke. This is true regardless of whether I want it to be or not

    Your “resolution” section is the exact same thing I said, but you substituted “want” for “should”; and, since they are both normative statements, it doesn’t matter: normative statements are subjective.

    When I said “this is true regardless...” I was agreeing with you that “smoking leads to poor health” and so if I should be healthy, then it logically follows that I should not smoke; and this is not subjective.

    Rationally you should choose not to smoke if you want to retain good health. But you don't have to be rational

    And here’s where I think you are saying more than just that truth is absolute: you are saying that what defines a person as rational is the epistemic norm that they should try to correspond with reality. This is a normative statement which, as I said before, you cannot prove is objective.

    P1: One who is incoherent in their beliefs should be considered irrational.
    P2: To smoke and think that one should be healthy is to hold incoherent beliefs.
    C: Therefore, to smoke and think one should be healthy is to be irrational. — Bob Ross

    P1 is not an assertion because of "should". That's just an ambiguous sentence. A proper claim for logic is "One who is incoherent in their beliefs IS considered irrational, or even IS NO considered irrational. "Should" leaves the point incomplete. Why should it? Why should it not? What does should even mean? Does that mean the outcome is still uncertain?

    I am sorry, but this is just a blatant straw man. Firstly, assertions which contain obligations (such as “should”) are assertions. I can assert that “I should eat food in 5 minutes”--you can’t say that isn’t an assertion. Secondly, P1 is not ambiguous at all: it is the claim that “one who is incoherent in their beliefs should be considered irrational”--it doesn’t get any clearer than that. The person is saying, apart from what is the case, that what should be the case is…. Thirdly, I purposely made the premises have “oughts” in them: you can’t just arbitrarily change them to descriptive statements. If you want to do that and have it not be considered irrelevant to the conversation, then you must demonstrate that rationality is objective—then you can claim they are descriptive statements. I am saying rationality is just epistemic norms, which are prescriptive statements.

    To be charitable, I think you are noting that truth is absolute and objective; and from that interpretation, I would agree that one can demonstrate that. So, it would be fair to say that P1 is either true or false, and that is objectively so; however, to claim someone is irrational or rational is to posit that they should be epistemically doing something else: it is not a descriptive statement. In other words, within your terms, you could claim that a person that holds a contradiction is not holding something which is objectively true, but not that they are irrational for it. These are two different claims.

    Unclear premises are allowed to be rejected in any logical discussion because they are open to interpretation by each subject and are the root of many logical fallacies.

    If you are confused by what the premises are saying, then it is on you to ask for clarification: you don’t get to just dismiss the argument because you don’t understand what the premises are claiming. I find them to be very, very clear.

    Each person subjectively decides what rationality means. Because of this, there is no objective rationality, or something which is rational apart from our subjective experience.

    Correct! But…:

    Since the above is the case, I can subjectively conclude that there is an objective rationality apart from our subjective experiences. Since your proposal necessarily lets me hold a contradiction (a negation of your point that you cannot refute) your proposal is not true.

    NO. I am saying that, in truth, there is nothing that it is to be irrational or rational apart from one’s (or our) epistemic standards (which are normative statements), and so to claim that there is an objective standard of rationality is, from my point of view, to hold a false belief; BUT I cannot say they are objectively irrational for holding it.

    I do not let you hold a contradiction as true: I let you hold that you are not objectively irrational for holding a contradiction as true (although it is false).

    Now on to the distinctive sets!

    A probability is an induction Bob. When I say I have a 4/52 chance of pulling a jack, that's because we don't know the outcome of the card.

    No! The 4/52 chance of pulling a jack is not an induction: that is a deduction. I know there are 4 jacks and 52 cards, and I can analytically deduce the probability of pulling a jack. This is not the same claim as building off of that probability to say that “I will pull a jack next time because there is a 4/52 chance of getting it”: that’s the induction. Probabilities are absolutely never inductions themselves: they are mathematically deduced from what is already known.
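
    To make the split concrete, here is a rough sketch of what I mean (my own illustration, not your wording):

    Deduction (analytic, from the known makeup of the deck): P(jack) = 4/52 = 1/13 ≈ 7.7%
    Induction (built on top of it, not entailed by it): “I will pull a jack next time because there is a 4/52 chance of getting it.”

    The first line follows necessarily from the premises (4 jacks, 52 cards); the second goes beyond them.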

    We've deduced the induction, but deducing an induction does not make the induction not an induction.

    You have not “deduced an induction” when you claim that “I will pick a jack because there is a 4/52 chance of getting it”: you have used deduced knowledge to formulate an induction. If you think that you can deduce that induction (or something similar), then provide the syllogism.

    An induction is a form of argumentation where the premises do not necessitate the conclusion: the 4/52 chance is purely a deduction, and the induction is built off of it but is not deduced from it. I cannot provide a syllogism in which the deduced 4/52 chance absolutely entails the claim that I will pull a jack next time (and, thusly, it is not deduced).


    Distinctive knowledge set 1: Fac

    Distinctive knowledge set 2: Face and num

    Please outline exactly what the essential properties are that you keep referring to in this example. By my lights, it is not what is essential to the formulation of the inductions; so I am confused what you mean by “essential properties” of the inductions.

    For example, in your example #1, you didn’t use the same properties nor some essential set to formulate the patterning and probability based inductions. Is it supposed to be what is essential to the scenario given?

    Inductions derive from the distinctive property sets we create.

    What I am saying is that we create distinctive property sets, but there are, in reality, relevant factors to the situation. Period. They aren’t distinctive knowledge themselves.


    The set of inductions I can form when considering only A and B are potentially different when considering the full property sets of A, B, X, and Y.

    Correct. That is why I am bringing up relevant factors, because that is what you are describing here: you aren’t depicting any sort of “essential properties”.


    you have not given anything rational that explains why H2 should be picked over H1.

    I already have. But I think we need to keep on track with the other points and resolve some things before revisiting this part.

    Bob
  • Knowledge and induction within your self-context


    Hello Philosophim,

    I think we both see that certain aspects of our conversation do not seem to be progressing, so I am going to focus on the parts where I think we can still further the discussion.

    But a knowledge set is the distinctive properties you are using at its base, not the inductions. The inductions rely on the base. You can compare inductions between the bases, but it always comes back to the bases in the end. I've noted there is no rational justification for comparing inductions between knowledge sets. So far you have not provided any either.

    We are saying the same thing here: since you conceptually structure your epistemology where there are hierarchies and then comparisons of the distinctive sets, it just sounds like you aren’t cross-comparing inductions that are not in the same hierarchies; however, in a broader sense, you are comparing the inductions by comparing the hierarchies, because those “bases” you speak of are what decide the properties of the inductions themselves—so you are comparing the properties of the inductions via those structures. I don’t see any real disagreement here.

    Because illogical means irrational. The antonym of rationality doesn't explain what rationality is.

    They are not antonyms: to be illogical is to hold logical contradictions as true, and to be rational is, well, I already defined that before. Being logically consistent is not enough to be rational (in the sense I mean it), nor does it equate to living in accordance with reality (or staying closely married to reality); nor does it get you to your idea of trying not to contradict reality.

    A “contradiction of reality” is not a logical contradiction: the latter pertains to the form of the argument and never the content—viz., to say something contradicted reality is to affirm something about the content and says nothing about the form of the argument itself. This is why it is not illogical to say that “hair is short and long” whereas “hair is green and not green” is, even though most people, who are not immersed in formal logic, would think that both are logical contradictions. In formal logic, there is nothing logically contradictory with saying “∀x (Hair[x] ^ Short[x] ^ Long[x])”. It is incoherent with what most would consider true of reality (viz., once one realizes what is meant by the properties of Short and Long it becomes clear both cannot cohere), but it is not illogical. I think you may be thinking about logic more loosely than I am; and perhaps all you mean is that to be rational is to be coherent, to the best of one’s ability, with reality.
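
    To make the contrast explicit, here is a rough formalization in the same notation (my own rendering, purely for illustration):

    Not a logical contradiction (strange content, consistent form): ∀x (Hair[x] ^ Short[x] ^ Long[x])
    A logical contradiction (an instance of the form p ^ !p): ∀x (Hair[x] ^ Green[x] ^ !Green[x])

    The first is only incoherent once one unpacks what “Short” and “Long” mean; the second is false by its form alone, prior to any appeal to reality.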

    The end goal is not to pick an induction. The end goal is to pick a distinctive knowledge set that when applied, will give you a rational assessment of reality.

    To me, your second sentence here is just a more complicated way of saying that the end goal is to pick an induction. When we try to get the most rational assessment of reality and cannot deduce what to do, we are necessarily trying to choose the best induction to use.

    Bob, I read this a few times and I could not understand what you were trying to say at all. Please see if a second pass can make this more clear.

    I was saying essentially this:

    1. The probability of … is Z% is not an induction.
    2. An inference which is not deduced from #1 but utilizes #1 is an induction.
    3. #2 is an induction which has the essential property, to it as a particular induction, of #1.
    4. The pattern of … is not an induction (or at least not the one in question).
    5. An inference which is not deduced from #4 but utilizes #4 is an induction (that is in question here).
    6. #5 is an induction which has an essential property, to it as a particular induction, of #4.
    7. Therefore, #6 and #3 do not have the same essential properties (even in virtue of just their utilization of pattern vs. probability).

    If you really want to say that “only inductions with the same essential properties can be compared”, then you cannot mean by “essential property” that which was essential to the formulation of the induction while claiming that there are inductions which have the same essential properties (because, as shown above, just one induction using a probability vs. a pattern makes them have different essential properties) (and, furthermore, every property would be essential to each induction, so only the exact same induction, to the T, would count as an induction with equal essential properties).

    I don’t think your argument works here. You will have to clarify what you mean by “essential properties of an induction” within the context of “only inductions with the same essential properties can be compared”.

    If by it you mean:

    We are talking about the essential distinctive properties that are needed to make that induction.

    Then, as shown above, no induction which is not completely identical to another can be compared, which is clearly not what you are trying to argue for.

    I have a set of distinctive properties I consider important to a decision.

    This is not the same thing as the essential property of the formulation of an induction that you were arguing for before! This is a relevant factor! This is what I have been trying to get you to see: there’s no term in your epistemology for what you just described there.

    Its just like these statements, "Nothing is true." Is that a true statement?

    What I outlined is nothing like that statement. Please demonstrate the logical contradiction in holding that imperatives are indexical. You still haven’t demonstrated it.

    I can just say you're wrong and I'm correct under your statement.

    There’s no logical contradiction in you saying that I am wrong relative to what you think is “rationality”. Philosophim, if you truly think it is illogical, then please demonstrate the logical contradiction. I want you to demonstrate that my claim leads to (p ^ !p).

    When your point allows a contradiction of your point to stand, that's reality contradicting your point.

    You disagreeing with me, relative to what you think is “rationality”, is not a contradiction of my point: it agrees with it. A contradiction is not the same thing as a disagreement; and, also, by reality contradicting my point I am assuming you mean that reality is incoherent with my point (and not that there is a logical contradiction in it). To that, I also don’t see what you are saying is incoherent about it: please demonstrate, if you cannot expose a logical contradiction, what is incoherent (with respect to reality) with my position.


    I've been formerly trained in philosophy and have been around some incredibly intelligent, learned, and capable people. Every single one of them would dismantle your point without a second thought

    Your statement on rationality is a well tread and thoroughly debunked idea in any serious circle of thought

    Philosophim, I am not interested in comparing our (or others’) egos or credentials; but, since you brought it up, I have studied metaethics in depth, so I know for a fact that moral anti-realism is not an irrational position, nor has moral realism thoroughly debunked it. The fact of the matter is that there are rational and good arguments on both sides. There have been many great philosophers on each side.

    I have no problem with your adamant support for moral realism here (which, as I was saying before, is the crux of our dispute about rationality); but to say that your prominent opponents (even in the literature itself) are all irrational and that anyone who is serious can debunk them in a heartbeat is a straw man, inaccurate, borderline dogmatic, and unproductive.

    With that being said, I want to clarify one thing about “rationality”: I sometimes get the impression, after hearing what you think rationality is, that you may be under the impression that I am saying we can subjectively make up what corresponds best with reality—and I am NOT saying that. I am saying that imperatives are hypothetical and never categorical (viz., subjective and never objective). For example, if I should be healthy, then I should not smoke. This is true regardless of whether I want it to be or not; however, whether I should be healthy or not is not grounded in objectivity—it is subjective. Likewise, to say “one is irrational if they smoke and think that they should be healthy” is to argue that:

    P1: One who is incoherent in their beliefs should be considered irrational.
    P2: To smoke and think that one should be healthy is to hold incoherent beliefs.
    C: Therefore, to smoke and think one should be healthy is to be irrational.

    But whence does this obligation in P1 arise? If it is not from a categorical imperative (ultimately), then it is merely hypothetical and thusly irrationality is grounded in subjectivity.

    To refute this, all you have to do is provide the categorical imperative that you are deriving rationality from (i.e., deriving that if someone is incoherent with reality then they should be considered irrational). If you can’t, then, I am sorry, but you are wrong.

    Bob
  • Knowledge and induction within your self-context


    Hello Philosophim,

    I've informed you that not only do we not have to compare the inductions between the hierarchy sets, we logically can't justify doing so.
    ...
    We can reason why we should choose certain knowledge sets over others, and I've set different scenarios to demonstrate this.

    I am saying that choosing between “knowledge sets” is a comparison. The moment you decide, by analysis within or without the “induction hierarchy”, that this induction is a better pick than that one, you have thereby compared and evaluated them. I don’t think you can coherently claim to never compare the inductions if you are likewise claiming that you can determine which is better based on an analysis of the “distinctive knowledge sets”.

    This truly is the core of rationality without any extra detail. Just to specify a tad more, I would say it is that which is not contradicted by reality

    Why is rationality that which is not contradicted by reality? Why can it not be “to be illogical”? I don’t think you can justify this without it bottoming out at a desire: the desire to obtain and abide by that which most closely aligns with reality.

    As such, I'm going to ask you to drop the "relevant factors" and just communicate using the basic terminology we've already established.

    I can’t, because there is no term for it. They aren’t necessarily essential properties of anything.

    As the person who's established the theory, I want to see a contradiction or a lack using the terms involved first. If you can do so, then we can discuss trying to figure out what is missing

    There’s no logical contradiction: there’s just no term currently mapped to what I mean by “relevant factors”. All I can do is explicate again what I mean by a relevant factor, and if you think there is a term that fits in your theory then please let me know: it is a piece of information that is relevant to formulating the set of possible inductions one could use to resolve the dilemma. I must repeat: they are not distinctive knowledge. I can have distinctive knowledge of what I may guess are the relevant factors (if I do not have applicable knowledge of them); but the relevant factors are what actually are, in reality, relevant to the formulation of inductions pertaining to the context and the dilemma therein. I don’t know of any term in your theory that means that: please let me know if there is.

    You compare the distinctive knowledge sets, not the inductions.

    What do you mean by “distinctive knowledge sets”? You said inductions are distinctive knowledge, and the sets (hierarchies) of inductions are also distinctive knowledge; so when you compare the hierarchies (sets) themselves, you are doing so to compare the inductions within different hierarchies to determine which one to use. That is a comparison.

    You: People want to compare inductions across different distinctive knowledge sets.
    Me: Can't do that. Its incorrect thinking. If they want to think correctly, they need to look at the distinctive knowledge sets.
    You: But I don't want to. (I'm poking fun a little bit, I just don't see anything else in your argument so far)

    I am saying:

    1. When you “look at the distinctive knowledge sets [(hierarchies)]”, you are thereby comparing the inductions. A comparison is not limited to comparing within the hierarchies, but the criteria that you are using to compare within may not be (and in this case are not) applicable to comparing the sets themselves; and

    2. If you say that one is better than the other, even by analyzing the sets, then one can deduce that you compared them; because you can’t determine that one thing is better than another without comparing them: that’s what a comparison is.

    I get that you are just joking a bit with that last alleged rebuttal of mine but, with all due respect, it demonstrates to me that you do not understand in the slightest what I am saying (and perhaps I am just not explaining it well enough).

    The theory has a logical solution to the problem you've proposed, to look at the distinctive knowledge sets and compare those instead.

    You compare the sets to compare the inductions. The end goal is to pick an induction and if there are two in different sets then you compare the sets to compare them.

    So I see no lack on my part

    The lack of applicability is if you actually can’t compare the inductions, which I don’t think you are truly saying (although you keep saying it). If you can’t compare them, then you can’t say one set is more rational to hold than another and, in turn, that one induction (within one set) is more rational than another (in another set). At that point, your theory is effectively useless.

    Please explain what you mean by this. By my example below:

    P1: Probability of A with X and B with Y is Z%
    P2: Pattern of A with X and B with Y predicts the next pull will be an AX
    P3: Plausibility of A with Y will be pulled next time, even though it hasn't happened yet.

    P1 is not an induction itself: a probability is a deduction itself and the induction is the inference made utilizing it. So P1 should really be “the next pick is an A with X because there is a Z% chance of it happening”: I am going to call this rP1 (revised-P1). rP1 has an essential property of the Z% chance of getting an A with X, which neither the pattern nor plausibility can ever have.

    Without the utilization of Z%, rP1 is no longer rP1: it is another probability. That’s why I said talking about essential properties of particular inductions is trivial and useless.

    Likewise, P2 has an essential property of the pattern (as, again, the pattern itself is not the induction; the inference made about it is—e.g., I will pull an A with X because of this pattern), and the probability induction, rP1, can never have that property. Without the pattern, the induction is no longer that induction: it is something else.

    Same thing with P3.

    Now, the only other option when speaking about essential properties is the essence of a general class of things and, in this case, the essential properties of an induction (i.e., what makes an induction, at its core, an induction?)--and that affords no foreseeable use to your argument.

    How is that not a set of three different types of inductions that use the same essential properties to create those inductions?

    I think you are thinking that the essential properties of the inductions are the “A with X” and “B with Y”, but that’s just plainly false. Firstly, the inductions themselves are not the patterns nor probabilities; and, secondly, if we are talking about the essential properties of a particular induction (which is what you were talking about), then every property thereof is essential (because without even one property it would no longer be that exact induction). The only time accidental properties emerge is if you are talking about the essence of a thing, which pertains to formulating a general class that it is a member of; but if you are talking about what makes a particular thing that particular thing—well...that’s every aspect of that particular thing!

    This is what I think I ought to be doing epistemically, and does not exist apart from my will/mind. So if you're right, I'm right.

    If we have conflicting views on what rationality is, then I would be wrong relative to you and you to me. We aren’t both right. Propositions that are subjective are indexical.

    The problem is you're saying its subjective, then asserting it can't be a certain way.

    Because, again, subjective judgments are indexical: “I think killing babies is wrong” could be true for me and false for you (or vice-versa) (or true/false for both of us). If it is true for me and false for you, then I can still say you are wrong for killing a baby because I think killing babies is wrong.

    If its fully subjective, then I subjectively believe you're wrong, and you have to agree with me to keep your proposal.

    I don’t have to agree that we are both right: I have to agree that, relative to me, you are wrong and, relative to you, you are right.

    Something which is fully subjective cannot be wrong if the subject says its right.

    Sort of. The problem is that we tend to psycho-analyze ourselves rather poorly. Just because I say “I think killing babies is wrong” that does not thereby make it true that I think killing babies is wrong. Subjective judgments are reflections of our psyche, usually at its deepest core, which we don’t “control” in any colloquial sense of the term. I can absolutely formulate a false belief about a subjective judgment that I hold (or don’t hold).

    I ought to behave in a way that demonstrates your idea of rationality is wrong. This is my desire. Therefore it is rational that you're wrong

    This just pushes back the more important question of what you think rationality is, as you are implicitly using it by saying that you can demonstrate that my idea of rationality is wrong. If by this you are just noting that it is possible for “rational is X” to be false for you and true for me (and that there is nothing objective to decipher which is “right”), then, yeah, that’s true. However, people tend to have productive conversations nonetheless (about morality and the like) because most of the time they have false or partly inaccurate beliefs about what they will as right or wrong; and, therefore, conversing about it with other people can change their mind as they are forced to dive deeper into what they think is right or wrong (which, again, is just to say that they have to dive deeper into their own psyche to determine what they truly are obligated to). It’s not as simple, Philosophim, as saying to oneself “I think X is wrong” and then thinking one is absolutely right about that because it is subjective: one could be formulating a false belief about oneself.

    Its just a contradiction Bob

    There wasn’t any logical contradiction in the example you gave. The proposition “I want rationality to be X” can be false for you and true for me. If you violate X, then I can thereby call you irrational and you would say you are still being rational (because it was false for you). Where’s the logical contradiction Philosophim?

    As for morality, I may one day post my thoughts on it. Its a little more complicated then something as simple as moral realism. You have to have knowledge before you can know morality. So we'll have to finish this up first.

    Here’s a big difference between us: I think obligations are more fundamental ontologically than reason; and although, yes, we have to figure out how to know things first, we necessarily utilize our obligations implicitly in formulating our epistemologies (at their core). Also, I am a moral anti-realist.

    I asked this because if you are a moral realist then that is why we are disagreeing so adamantly on what rationality is, just like we could argue similarly about what “good” or “better” is: they all fall into the class of oughts.

    Bob
  • Knowledge and induction within your self-context


    Hello Philosophim,

    I appreciate you summarizing our differences; but if that summary of my position is truly what you think I am claiming, then, with all due respect, I don’t think you are understanding what I am claiming at all. Thusly, I am going to summarize my points hereon so you can address them and I will respond thereafter to some of the points you made.

    I am saying (in order of importance):

    1. That one has to compare the inductions in the box scenario or leave it up to an arbitrary decision. (by “comparison”, I do not strictly mean the utilization of your concept of an “induction hierarchy”); and

    2. That relevant factors of a situation for resolving a dilemma are not necessarily essential properties of any induction: the former is a piece of information that could affect the conclusion, whereas the latter is a property that a formulated induction cannot exist without. A relevant factor (of the situation…) can never end up being formulated into an induction and an induction can have essential properties which are not relevant factors (of the situation…); and

    3. The relevant factors of a situation are not distinctive knowledge, they are applicable knowledge. One can formulate distinctive knowledge about the relevant factors, but there is necessarily a set of relevant factors to the situation regardless of what one distinctively claims to know; and

    4. That because you have only provided a method of determining cogency of inductions within your concept of a “hierarchy induction” (and have adamantly asserted that we cannot determine cogency otherwise), I am left to conclude that the applicability of your epistemology to decipher what is most cogent to believe is severely wanting—as the vast majority of practical and theoretical situations force the person to compare two inductions that have different essential properties. This is not a dig at the ‘induction hierarchy’ itself, as it does what it purports to do—but it isn’t applicable to the vast majority of situations, which is a problem if you are trying to explicate a system of acquiring knowledge (and beliefs) in the most rational manner possible in the majority of situations; and

    5. That I have provided a clear and concise definition of “rationality” (i.e., to be, to the best of one’s ability, logically consistent, internally/externally coherent, empirically adequate, considerate of credence, considerate of explanatory power, parsimonious, a person that goes with intellectual seemings, and a person that goes with their immediate apprehensions) but you have not. I have provided a concept whereas, this whole time and within your papers, you are working with a notion.

    6. Although I haven’t mentioned this yet, noting essential properties of an induction is trivial: if an essential property of an induction is a property which the induction cannot exist without, then every property of the induction is an essential property, because changing even one property transforms the induction into a different induction; and if that is the case, then there are no inductions which have the same essential properties. This is because you are not noting what is essential to what an induction is (i.e., the essence of the concept of an induction), but, rather, what is essential to the formulation of a particular induction. Just something to think about.

    Now let me address some points in your response that caught my eye.

    First, I ask you to trust my good faith that if a point is proven, I will concede. I trust you'll do the same.

    I agree: I don’t think either of us will argue in bad faith.

    In terms of your points:

    #1: This is true, but doesn’t negate any of my critiques above. Likewise, with respect to my #4, you haven’t defined what rationality even is. I agree with you that it is “rational”, but I am interpreting it as my definition because you haven’t provided one.

    #2: This is also true, and also doesn’t negate any of my critiques above. Your “induction hierarchy” is a concept that can be used to decipher what is most cogent in certain situations; and one stipulation of those situations is just that the inductions have to have the same essential properties.

    #3:
    Because we can have different distinctive knowledge sets, we could create a different set of inductions to compare within each knowledge set.

    We can create different distinctive sets and different inductions; but there is a set of relevant factors to the situation, and there are better inductions to formulate with those relevant factors than others.

    Once you choose your distinctive knowledge set, you then look within the hierarchy that results within that distinctive knowledge set to choose the most rational induction.

    This is inapplicable to the vast majority of practical and theoretical situations because inductions typically do not have the same “essential properties” (and, as I said above, noting the essential properties of a particular induction, when not referring to the essence of an induction in general, is trivial and makes it unique relative to every other possible induction).

    #4:

    This leaves the question, "What is the most rational distinctive knowledge set to hold?"

    What is most rational to distinctively hold is what corresponds best to reality.

    And to your point that is supposed to be my point:

    I have not seen any justification from your end that we should view "relevant properties" as anything different than I've noted

    A relevant factor is a piece of information that impacts one’s formulation of possible inductions in the scenario; whereas an essential property of an induction is a property that, if removed, would change the induction into a different induction. As noted above, noting the essence of an induction is not the same thing as noting the essential properties of a particular induction: the latter leads to your hierarchy being inapplicable to every scenario (because all inductions are unique in that regard), and the former is irrelevant to the properties or relevant factors of the situation (as it only outlines abstractly what makes an induction, at its core, an induction). Relevant factors, likewise, aren’t being argued to be factors that one should use in all of one’s possible inductions to choose from (so they aren’t essential to every induction) but rather are used to formulate possible inductions, and then each induction is weighed against the others.

    Demonstrate how you can create the induction pattern that involves X and Y without using X and Y. If X and Y are accidental or secondary to the induction, then they are not needed for the formation of the induction.

    This demonstrates, with all due respect, a lack of understanding of what I am saying. Obviously, if I formulate an induction with X and Y, then removing them from the formulation changes the induction to a different one. Again, one induction being a probability and another being a possibility would likewise be, under your definition here, essential properties which one has and the other doesn’t; so they don’t have the same essential properties. Likewise, if you mean the essence of an induction, that is just what makes an induction an induction at its core, which would not have anything to do with being a probability, using designs, etc.

    To be charitable, I am interpreting you to be claiming not that the inductions literally have to have the same essential properties but, rather, that they need to be the exact same inductions apart from their “type” (e.g., a probability, a possibility, a speculation, etc.).

    We decide what rationality means and it is contingent on what we think we ought to be doing epistemically which, in turn, doesn’t exist in reality apart from our wills/minds. — Bob Ross

    If this statement is correct, then the discussion is over. I believe my point is more rational, you believe your point is more rational, and there's nothing that either can ever do.

    This is a clear straw man. We can both explicate what we think “rationality” should be and see where it goes from there. You haven’t even defined it yet.

    Therefore its pointless to even discuss it. Its the ultimate, get out of argument card Bob.

    Again, straw man. I am not saying that “well, I want it to be that, so I am not going to hear what you think it should be”. That’s nonsense. I am saying that, fundamentally, how we define rationality is dependent on our obligation (as it is literally a definition about how we ought to behave), and obligations are subjective; so it will bottom out at a desire (because of Hume’s guillotine). That doesn’t mean we can’t discuss it just like morals. Are you a moral realist?

    I've proposed what is rational within the theory,

    But you haven’t proposed what “rationality” is; just examples of it.

    I look forward to hearing from you,
    Bob
  • Knowledge and induction within your self-context


    Hello Philosophim,

    I do not have a term "relevant factors" in my theory. I noted the term was ok as long as you understood it was a synonym for "essential properties in consideration of the induction".

    I wasn’t saying you use that term in your methodology, but there is no (as of yet) 1:1 term mapping that gets at what I am talking about within your theory. Hence, I agree with a lot of what you said within your terms of “essential property of formulating inductions”, but:

    Whatever you involve in creating your inductions, are essential properties for that formation of that induction

    It becomes an essential property in an induction about whether that X/Y pattern determines whether the box has air in it or not.

    There is a difference between claiming that (1) whatever factors are utilized to formulate a particular induction are essential properties thereof and (2) a property is essential to the formulation of inductions within the context in general. We can say, in the box scenario, that the X/Y pattern is essential to the formulation of the pattern induction; but we cannot say that it is essential to the formulation of possible inductions within the context (S). I am not saying that the X/Y pattern is an essential property of the formulation of inductions in S but, rather, that the pattern induction should be used because the relevant factor of the pattern outweighs, in S, the relevant factors of the probability induction (viz., the properties which are of, and are necessary for formulating, the specific pattern induction are better than the properties of the probability induction).

    The question is not about comparing the H1 and H2 set then, its about deciding what essential properties you're going to use in your inductions. So we don't compare hierarchy sets. We decide what essential properties we're going to use, then that leads to us into a place where we can make comparisons of our inductions.

    You are comparing H1 and H2 via your analysis of determining which relevant factors to use—viz., you are comparing the essential properties of the inductions themselves and determining which ones outweigh the other ones; I don’t see how you can say you are not comparing inductions. However, I get that your comparison criteria doesn’t apply here.

    You're not comparing the hierarchies to determine which essential properties to use.

    Correct. Because by “comparing” you mean it in the narrow sense of the criteria you use to compare inductions which have the same essential properties (i.e., relevant factors); but a comparison, in the normal sense of the word, is when one analyzes one thing juxtaposed to another—and that is exactly what one has to do to determine which induction to use in the scenario (by means of comparing relevant factors: essential properties of the inductions themselves).

    What you seem to imply is that there is something in the hierarchy that is the end all be all of rationality that shows one set to be more rational than another. There is not.

    No, because, again, you are talking about the comparison of inductions which have the same essential properties when you say “hierarchy”: I am saying that, when comparing inductions which do not have the same essential properties, there is a most cogent and least cogent option (assuming there are at least two). There is a most rational and least rational pick: it is not arbitrary like you are claiming. Which leads me to:

    It depends on a great many contextual factors, so its not a blanket, "This is always better" situation.

    That changing the context affects which relevant factors are most pertinent (and cogent) does not entail that there isn’t an actually most cogent induction to hold. I agree that these are tough decisions, but I specifically chose a scenario where it is obvious (to me) which is the more rational decision.

    The X/Y are accidental on just a box. But when you now tie them in with the identity of having air or not, they are now an essential property of whether the box has air or not.

    They are not an essential property of whether the box has air or not: they are essential to the formulated induction that proposes that it has air in it or not. The former is to claim it is an essential property of the identity of the boxes, and the latter is to use an accidental property to infer the identity of the boxes.

    In this very specific scenario you originally mentioned, overlapping the two is ideal.

    And here is the crux: how, Philosophim, is it more ideal if it isn’t more or less rational?

    So the most cogent induction I have when including the X/Y designs as essential to my inductions is the pattern.

    Again, how is it most cogent if someone can equally cogently not include the designs as essential to their inductions? By my lights, you cannot say that one option is more or less cogent than the other if it is neither more nor less cogent to include or not include the designs as essential to one’s inductions: you have now claimed both of these.

    For example:

    What I don't have an answer for you, is whether you should use a distinctive knowledge set where X/Y is irrelevant to whether the box has air or not, vs where it is.

    And:

    So the most cogent induction I have when including the X/Y designs as essential to my inductions is the pattern.

    I don’t think you can coherently have both here (but correct me if I am wrong).

    Change the set and context and we have to re-evaluate which distinctive knowledge set would be more rationale to take, or if there is no answer for that specific scenario.

    This is irrelevant to what I have been saying: we are talking about a specific scenario. I agree that if you change the scenario we have to re-evaluate which is most cogent; but that doesn’t change in the slightest that there is a most cogent solution.

    What I will note is that your claim that H2 is more rational to choose than H1 has only provided a confirmation bias justification.

    I don’t see how it is confirmation bias at all. We decide what rationality means and it is contingent on what we think we ought to be doing epistemically which, in turn, doesn’t exist in reality apart from our wills/minds.

    The fourth is coming up btw! I don't know if you're American, but happy 4th regardless!

    Happy fourth to you as well my friend!

    Bob
  • Knowledge and induction within your self-context


    Hello Philosophim,

    I feel we're back to discussing the situation properly now and can continue.

    Good! I am glad to hear that!

    I am not saying that H1 or H2 is more cogent.

    You are saying that, as far as I am understanding, the hierarchy which is more cogent is dependent on what essential properties the person uses; so you are indirectly speaking to which is more or less cogent in that sense.

    I am not applying the hierarchy to whether I should chose H1 or H2

    Agreed.

    I just want to clarify that the determination of which relevant factors to use in the context is a comparison of the hierarchies. You are still comparing the hierarchies, and you must in order to make a decision; however, you are noting that you are not applying the rules of your “induction hierarchy” to the comparison of the hierarchies themselves, which is fine.

    Your question seems to be, "Which identity set should I use?"

    My question is: which induction do you think, in the totality of your analysis of the situation, is most cogent to hold in the box scenario?

    Your answer seems to be contingent on the relevant factors used in the situation, and it seems as though you may have a criteria for deciphering which is more cogent to include (in terms of relevant factors). Perhaps now you can answer the original question (above)?

    I am saying the hierarchy does not involve making any claim to the rationality of the distinctive properties a person chooses

    That is fine; my original question seems to boil down to what makes a factor relevant; but I want to clarify that I am not talking about properties but, rather, relevant factors. What is relevant to determining what to induce is not a property of an induction nor of an entity within the scenario; but, rather, merely an identified relevant factor of that situation.

    A property is an attribute that a thing has, whereas a factor is a piece of information that is relevant to the question at hand.

    For example, a property of the induction to pick BWA (in the box scenario) could be that it affirms (or utilizes) the relevant fact of the designs; whereas the relevant factor of there being a correlation of designs is exactly that, regardless of whether it is used in an induction (and consequently of whether it is a property of any induction made).

    You'll need to prove that you cannot choose your essential properties.

    I was slightly wrong last time I explicated this, so let me clarify: the essential properties factor into what is a relevant factor, but they are not the sole consideration. If in the box scenario you disputed that the boxes which you saw a billion times were actually boxes (due to a difference in what we both considered the essential properties of a box), then that will nullify certain aspects as relevant factors to determining if this selected box has air in it. However, I must stress that the scenario I gave eliminates this possibility of dispute because the essential properties are stipulated from the beginning. So any dispute about what is a relevant factor to determining if the box has air in it is going to stem from something other than essential-property disputes of the identity of entities.

    I will say it again: an accidental property of an entity within a context can be a relevant factor: not just essential properties. The essence of a thing is just the properties that it cannot exist without; in the box scenario, the designs are not essential properties but are relevant factors to the scenario nonetheless.

    Also, relevant factors are determined by the stipulations of the scenario (i.e., the context), and so some can outweigh others. For example:

    but it took 2 hours of examination to figure it out? If I only had 3 hours to sort

    You have just added a new stipulation (to the others I already gave) to the scenario which changes it. The time limit stipulation will affect what is a relevant factor for inducing a conclusion within the context: you just changed the context.

    Bob, hypothetically what if there was a color difference of red and green on A and B boxes

    Again, you have just changed the context is all. In this scenario, it would depend on if they still trusted the heavy correlation between the designs and the box types. Is someone they trust with their life tagging along with them when they experience the box correlation a billion times (to let them know which one was which)? Anyways, this is all irrelevant to the scenario I gave you.

    The relevant factors in the scenario do not change, and the design pattern is one of them. It is a relevant factor because it can affect the conclusion (and in this case, quite heavily).

    "If I have an option to make a property essential to an identity, when should I?"

    I think we may have veered off from the original scenario and it is time we revisited it: I am not asking how one should determine the essential properties of an entity—I am asking how you are determining, in the scenario, which factors are relevant. The essential properties of the boxes are already given to you as a stipulation. It assumes you actually agree that you experienced, a billion times, a box with design X/Y. Expounding on how we determine what a “box” is is outside of the scope of the scenario.

    My point is that you don’t get to choose what is relevant to determining what induction to use in this scenario apart from what essential properties you use to determine what the things are within it (and that part I left out before). In the scenario, the essential properties of the boxes are already given.

    This is not a hierarchy question. I repeat, this is not a hierarchy question. At this point, we must leave inductions behind and focus on this question alone.

    I think this is wrong: although it is not a “hierarchy question” in the sense that it bears no relevance to the induction hierarchy criteria, it is nonetheless a comparison of the inductions, indirectly, based off of the comparison of relevant factors. The minute you decide to go with the pattern, you have chosen that induction over the other one by means of comparing the relevant factors and determining that you ought to include the designs. You haven’t completely left the inductions behind at this point: you are determining the relevant factors in order to compare them (and, yes, I know that it will not be a comparison in the sense of the “induction hierarchy”).

    Recall that the hierarchy is based on its distance from applicable knowledge within the distinctions chosen. I applicably know the probability. I don't distinctively know the probability. I applicably know the pattern. I don't distinctively know the pattern. Finally, I don't applicably know that I can get a box that has half air, and not half air. So if I choose an induction, whether I'm going to get an A or B box next, I have to choose an induction that strays away the least from the applicable knowledge that I have. In this case, its the probability.

    I don’t have a problem with this; but I am failing to see where I made these errors of “crossing applicable knowledge with inductions”.

    So then, the relevant factors of the identity set are the distinctive knowledge that you see as essential.

    As said above, I am not talking about “relevant factors of an identity set”: that is just another way of saying “essential properties of an entity”. I am talking about the factors that are relevant to formulating an induction within the context: these are not the same thing.

    The relevant factors within the hierarchy are your applicable knowledge involving those distinctions.

    You applicably know the pattern and the probability in the box scenario, and I am saying that using the pattern is more cogent: you are saying you can’t say whether it is more cogent or not unless I give you what relevant factors the person decides to use.

    "Usefulness" of distinctive knowledge can be broken down into a few categories (and I'm sure you can think of more):

    I would like you to, in light of these criteria you gave, tell me which induction within the box scenario is more cogent to use; and no I am not asking you to compare them within your induction hierarchy criteria because we already agreed that they are in two separate hierarchies and cannot be compared in that manner.

    But you didn't demonstrate logical consistency.

    What do you mean? I said that logical consistency is a criteria of being rational: that is a logically consistent position because there is no logical contradiction in claiming that.

    If you want to equate parsimonious with rationality, you have to demonstrate that rationality. As it was, your claim is its rational because its "rational".

    I am saying it is analogous to logical consistency (as well as others); and it is what I mean when I say someone is rational or irrational. If you have a different definition, then I am all ears.

    The law of non-contradiction is a distinctive bit of knowledge that when applied to reality, has always been confirmed. What is rational is to create applicable identities which assess reality correctly. We know this if reality does not contradict these applications.

    The first underlined portion is false: we have never confirmed via an empirical test that the law of non-contradiction is true. Secondly, with respect to the second underlined portion, that is circular logic: you are saying that the LNC is true because it does not contradict reality, which, in turn, presupposes that a law cannot be both true and false of reality—which is the LNC, and that premise is what you were supposed to prove at the outset. You basically just said the LNC is true because of the LNC.

    Our desires to not change this

    Desires do not change what is in reality; but they do affect what we come to claim to know about reality. If a proposition cannot be both true and false, then that is either true or false regardless of what we both desire; however, if either of us makes a claim either way, then our claims will bottom out at desires. This is Hume’s guillotine at work here.

    Likewise, rationality is different than LNC insofar as it is something that does NOT exist in the world beyond our wills: it is utterly dependent on what we think we ought to be doing—and obligations are not objective.

    Sorry Bob, but I'm not going to accept any idea that our feelings or desires are the underpinnings of rationality, at least without a deeper argument into why.

    This is really an offshoot of our conversation, so I will refrain from going too deep into it for now.

    Bob
  • Knowledge and induction within your self-context


    Also, to clarify my distinction between the essential properties of an identity of an entity and the relevant factors of resolving a dilemma within a context: it can be noted that both are essential properties, just not pertaining to the same thing. To say that I can decide what I think is essential to what this "entity" is does not mean that I can decide what is an essential factor to assessing the entire situation. For example, an essential factor of the assessment of a context can be an accidental property of an entity within the context, and an essential property of an entity within the context can be irrelevant to one's assessment. I can choose what I use to identify an entity within the context, but I don't get to choose what is relevant to assessing the dilemma within that context.

    Bob
  • Knowledge and induction within your self-context


    Hello Philosophim,

    I appreciate the elaboration: I see more clearly now what you are and are not saying. I want to say it back to you so as to confirm that I am getting it right:

    You are claiming that the two sets, H1 and H2, can only be evaluated as more or less cogent than one another insofar as you know which factors are being considered relevant (e.g., if X/Y and A/B, then go with A; if just A/B, then go with B; etc.); but, most importantly, the person can decide which factors are relevant, being distinctive knowledge, and thusly it is not more or less rational (i.e., cogent) to use factors X/Y and A/B (or to just use A/B, or just X/Y). Is this correct?

    After re-reading the OP and essays, I think I have finally pinned down my disagreement here (assuming my above summary is accurate): the relevant factors of the actual situation are not themselves distinctive knowledge but, rather, are applicable knowledge. In your essays and OP, you never discuss relevant factors of a context (i.e., situation) in the manner that I am talking about but, rather, essential properties: the latter being what is essential to the identity of an entity, whereas the former is what is essential to the consideration of the resolution to a dilemma within a context. My creation of distinctions plays no role in what is relevant to figuring out the best solution to a problem within a context: the relevant factors are any factors (i.e., bits of information) within the context that could affect the decision.

    To clarify, distinctive knowledge is simply the awareness of one’s discrete experiences. Claims to their representations of a reality outside of the experience itself are not included.
    ...
    I can decide how detailed, or how many properties of the sheep I wish to recognize and record into my memory without contradiction by reality, as long as I don’t believe these distinctions represent something beyond this personal contextual knowledge.
    ...
    I cannot know that if I discretely experience something that resembles these distinctions, that the experience correctly matches the identities I have created without contradiction by reality.

    So, if I say that what is essential (i.e., relevant) to determining whether my dog is in that room or not is only the probability, that tells me nothing in-itself about what the actual set of relevant factors is for determining whether my dog is in that room or not in reality: I have to apply a test to figure that out, which is, by definition, applicable knowledge.

    My distinctive knowledge of what the relevant factors are, which is just my ability to cognitively enumerate different options and single out different entities, is really an asserted hypothesis of what they actually are; and I can only confirm this by application of a test.

    So far, you keep insisting that which set one will use is utterly determined by which factors are considered relevant, and that the determination of what is relevant is merely distinctive—not applicable. However, this is wrong: there is an actual set of relevant factors to whether my dog is in that room, and it is unconditioned by my distinctive knowledge of it.

    Take Set 1 when X and Y are not considered. Take Set 2 when X and Y are considered

    The problem is that you don’t get to decide what to consider in the context: the relevant factors are there in reality within that context. In the box example, the designs and the probability are relevant factors. All you are noting is which induction is more cogent depending on what the person considers relevant, but I am saying the person doesn’t get to choose that part.

    Your choice of set, is not the hierarchy.

    Yes, this is fine; but there is an actual most cogent set to choose (over the others).

    But regardless, parsimoneous is just something we want, it doesn't make it rational.

    To be rational is to be parsimonious, logically consistent, assessing the reliability of the evidence, internally and externally coherent, and empirically adequate—all to the best of one’s ability. If I say that X is true and false, then I am thereby being irrational; however, if I say that X is true but do not realize that I am also implying it is false, then I am not thereby being irrational. If I say that I can explain the data with X and still insist on explaining it with X + Y, then I am thereby being irrational; however, if I explain it with X + Y because I do not realize I can explain it with just X, then I am not thereby being irrational; etc.

    Also, as a side note, standards, which ground all reasoning and justification, are fundamentally grounded in wants (i.e., ought statements); but the idea is to try and hold whatever provides the best utility towards truth. Just because parsimony bottoms out at a want, which may be an intellectual seeming in this case, does not mean it cannot be a criterion for the standards of what being rational is. All of our epistemic imperatives (e.g., what we use to do science) and moral imperatives bottom out at wants. As a matter of fact, all of our reasoning does: our wills are what furnish us with the principles that we think we are obligated to use during our derivations.

    A desire is not a rational argument.

    Desires, ultimately, are what define what “being rational” is. There’s no way around that. That I am irrational for violating the law of noncontradiction is grounded in my desire that I ought to define “being rational” as including “abiding by the LNC”. That doesn’t make my argument irrational.

    That is a separate question that must be asked of the distinctive knowledge sets themselves. Which if you understand this part, we can go into next.

    Yes, that is what I have been asking about with the H2 and H1 in S question.

    I look forward to hearing from you,
    Bob
  • Knowledge and induction within your self-context


    Hello Philosophim,

    Happy Saturday Bob!

    You too!

    you keep saying things that show you don't understand the paper...I think it would help you greatly to re-read the paper first

    Fair enough: I will re-read the OP and the papers (from the previous discussion board); however, I feel as though I am asking a very clear question that doesn’t require re-reading the paper, so I am going to ask again but with as much clarity as I possibly can.

    We both agreed that it is more cogent to pick H2 over H1 in S, so I was asking you why it would be more cogent under your view. An answer to that question is not to throw the question back at me: it doesn’t matter why I think it is more cogent; you said it was here:

    The most rational is to take both into account and assume that 49% of the boxes we find will be with air, and we believe that all of these boxes will have the X pattern.

    If you think it is more rational, then I can ask “why under your view is it more rational?”. You, then, cannot simply respond with “why do you, bob, think it is more rational?”.

    So, let me start over with the question and ask:

    Do you think it is more cogent to pick H1, H2, or neither (are more cogent) in S?

    If you answer one over the other, then I am asking a follow up question of:

    Under your view, what makes one more cogent over the other?

    Correct me if I am wrong, but we have discussed this well enough for me to get an answer to those: we both agree that the cogency criteria within the hierarchies (H2 and H1) work perfectly fine, but are there any criteria in place to compare those hierarchies themselves?

    I think the aspect of the papers you are saying I am forgetting pertains to the claims I made about distinctive knowledge, but that is irrelevant to whether you can briefly answer those questions.

    This part alone should have been obvious to you if you've been listening to me, and you should have easily predicted how I would respond

    I said this in the example:

    This is another situation where the probability and possibility do not use the same relevant factors and, consequently, your epistemology is useless for figuring out what the most cogent thing is to do (regardless of the fact that it can calculate what is most cogent within the two hierarchies).

    The example wasn’t demonstrating that the cogency criteria within the hierarchies are flawed: it was noting, just like with the box example, that there is no way to determine the most cogent thing to do in the situation because there are two hierarchies that cannot be compared.

    You're smart as a whip Bob, but I think you're still in attack mode, not discussion mode, and you're not thinking through it correctly here

    I apologize if that is the case, but, as I showed above, I did not straw man your position and I did anticipate that response by purposely explicating that I am not claiming that in the example itself (as seen above). You just ignored or missed it in my response.

    Again, forget about my claim that H2 and H1 are within an over-arching hierarchy: that is irrelevant right now. Forget that I said that and deal with what I am saying right now.

    You are claiming that because it does not claim to have a rational comparison between identity sets, that its somehow broken. That's a straw man

    Agreed, that is a straw man. I am saying that because the two inductions do not use the same “identity sets”, I have, in the box and dog example, no way of determining which is more cogent to use because they are of two distinct hierarchies.

    You claim that it is more rational to pick H2. It seems to be a crux of your argument against the hierarchies inadequacy, so I want to know what justification you have for making that claim.

    Right now, I am asking you why you think it is more cogent to pick H2 (which you said, and I quoted above, in a previous message) if you can’t compare the hierarchies themselves (which is what you were also claiming). My rationale for why I think it is more cogent is irrelevant. For example, if I thought it was more cogent to pick H1 over H2, I could still validly ask why you think H2 is more cogent than H1 because it appears as though there is no way to determine this under your view.

    But since you asked, I will tell you why I think H2 is better than H1 in the box example. In a nutshell, I think that, in that situation, the overwhelming experiential correlation of the BWA with design X, and of design X exclusively with BWAs, outweighs the 1% increased probability that it is a BWOA; and so I go with it being a BWA (and thusly not with the probability). Why do I think it outweighs the other? Because, in this situation, to go with the other option is to have to make up unparsimonious explanations of the situation: it is more parsimonious, all else being equal, to say “yeah, that’s probably a BWA”.
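
    To put rough numbers on that reasoning (this is purely my own illustrative framing in Bayesian terms, not something I am attributing to your methodology, and the likelihood-ratio figure is an assumption standing in for “X has accompanied BWAs, and only BWAs, across the enormous pull history”):

    # Illustrative sketch only: posterior odds of "box with air" (BWA) given design X.
    prior_odds_bwa = 0.49 / 0.51         # the bare 49/51 probability, ignoring the design
    likelihood_ratio = 10**6             # assumed: P(X | BWA) / P(X | BWOA) after the pull history
    posterior_odds_bwa = prior_odds_bwa * likelihood_ratio
    posterior_prob_bwa = posterior_odds_bwa / (1 + posterior_odds_bwa)
    print(round(posterior_prob_bwa, 6))  # ~0.999999: the pattern swamps the slight probability edge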

    Again, I am asking you why you think it is more cogent when, as far as I can tell, your epistemology affords no means of determining it. Is it an intellectual seeming to you? Is it because it is more parsimonious? Is it something else? You still have yet to answer!!!

    No, its not clear, that's why I'm asking you to give your rationale! Also, lets not put ultimatums like "rebut or concede". Lets not make the discussion one sided, please address my points so that I can better address yours.

    Sorry, I am not trying to give you an ultimatum; but I feel as though you are avoiding the question (perhaps unintentionally, or I am misunderstanding your response): I’ve asked the same question now four or so times and you haven't answered, nor have you demonstrated why my question is currently unanswerable. You say we need to clarify some things about how the methodology works (since, you say, I am misremembering), but you could still answer the question with the terms from your methodology and then note where my response confuses the terms. You haven’t even done that.

    Perhaps, if the question is truly unanswerable, then please demonstrate why. If it is because I am misremembering something, then let me know what exactly (briefly). I don’t see why you can’t at least answer it within the schema of your methodology.

    I am not saying what a person should do, you are. You are saying they are acting irrationally, and I'm still waiting for why from you.

    Are you not saying that the hierarchy is the most cogent means of determining which induction to hold when they have the same identity sets? If so, then you are telling them what they should do.

    I am not going to respond to the distinctive knowledge stuff yet until after I re-read the essays. Plus, I think there’s plenty for you to respond to herein already.

    I look forward to hearing from you,
    Bob
  • US Supreme Court (General Discussion)


    Hello Mikie,

    But I’m not sure your characterization of AA is correct. There’s strong arguments in favor of it.

    Could you elaborate on some of them? Otherwise, I am unsure as to what about my characterization is incorrect.

    Every one of these controversial cases are along party lines. When things are so predictable, you know it’s not a matter of a fair assessment of evidence — it’s foregone.

    Everything you said along these lines is perfectly accurate: I am not trying to defend dogmatic political actions but, rather, I was just agreeing with their decision on AA (regardless of why they chose to decide such). I just happen to agree that AA is wrong, if that makes sense.

    Bob
  • US Supreme Court (General Discussion)


    Hello Mikie,

    Although helping the underprivileged is something we should all strive to do, I think that the use of someone's race or gender as a criterion or indicator of need is insufficient (to accomplish the goal of helping those in need), racist, "genderist", sexist, and immoral. If a college wants to allocate funds, admission slots, etc. for underprivileged candidates, then that is perfectly fine with me--but their race, for example, should have nothing to do with that decision.

    It is insufficient to achieve the goal (of helping the underprivileged) because you can most certainly have children of all races who are in desperate (or moderate) need of help and who, without it, will definitely not have the majority of opportunities they otherwise could have had; and usually this is through no fault of their own. If someone uses gender, sex, or race, they are going to inevitably include people who don't need the help and exclude people who do need it. Better indicators are financial indications (e.g., how much does their family make? What jobs do they have?), family indications (e.g., do they have a family? Is it abusive?), etc.

    It is racist, sexist, and "genderist" because no person should ever be punished or rewarded for merely the skin color, sex, or gender that they have: period. These are not indicators of merit, need, or otherwise.

    It is immoral because I believe that we should be striving towards a world with the maximal sovereignty of wills, and this entails disbanding from judging people on their race, sex, and gender.

    I think we can salvage the good intention of affirmative action while disbanding from the bad: we can allow schools to shift their criteria to allocate help for the underprivileged that does not consider directly their race, sex, or gender.

    I look forward to hearing from you,
    Bob
  • Knowledge and induction within your self-context


    Hello Philosophim,

    Its been logically concluded that a person can create whatever distinctive knowledge they want.

    You are confusing what a person can do with what they should do epistemically. It doesn’t matter if a person can act irrationally: it is still irrational because it isn’t what they should be doing. What they should be doing is exactly what the epistemic theory is supposed to furnish us with.

    Which leads me to:

    The hierarchy is built off of the consequences of distinctive and applicable knowledge, not the other way around

    You are confusing what is most cogent to do with our expounding of it (to ourselves). Distinctive knowledge is just our ability to discretely parcel reality: it doesn’t tell us in itself what is most cogent to hold nor even what is most cogent to parcel. The epistemic theory is supposed to attempt to get at what in reality, beyond our mere distinctive knowledge, is most cogent to do.

    Philosophim, conceptualizing and abstracting what one thinks is most cogent to do is useless if it is not closely married to reality, which is what furnishes us with what actually is most cogent to do. If I want to survive and there’s a bear coming at me, then there is actually a best sequence of counter moves to maximize my chances of getting out alive—and my decisions in terms of what to distinctively classify and parcel could go against that most cogent sequence of events.

    If you are claiming that the hierarchy is contingent on the distinctive knowledge, then that’s another area of dispute between us.

    Just because we have to use our discrete experiences to get at reality, that does not entail that what is most cogent is contingent on our discrete experiences (nor knowledge that we distinctively derive therefrom).

    Just as a start, it solves many problems in epistemology that have to do with induction.

    I know you have expounded this before, but can you briefly give a couple examples (so I can re-evaluate them)? I honestly don’t think it applies to most situations. Take a simple example that is analogous to the scenario which I gave you before: is my dog in the other room?

    Let’s say I know there is a 50% chance that he is in the room and that I am outside of the room (with the door closed). The probability was calculated by a person who, with me not looking or hearing anything, flipped a fair coin and, if it landed heads, put my dog in the room (and, if tails, did not).

    Now, to make this analogous and render your hierarchy useless to this situation, I am allowed to, after the coin flipping and placing of the dog (or not placing of the dog) is finished, stand outside of the room with the door closed. I clearly hear a dog barking in that room and, to put the icing on the cake, my dog’s bark matches that bark exactly (as I have experienced it for 60 years). This is another situation where the probability and possibility do not use the same relevant factors and, consequently, your epistemology is useless for figuring out what the most cogent thing is to do (regardless of the fact that it can calculate what is most cogent within the two hierarchies).
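
    To make the same point with rough numbers (again, purely my own illustrative framing, not anything from your methodology; the reliability figures for recognizing my dog’s bark are assumptions):

    # Illustrative sketch only: updating the 50/50 coin-flip prior on the barking evidence.
    prior_dog_in_room = 0.5      # fixed by the fair coin flip
    p_bark_if_dog = 0.99         # assumed: chance I hear that exact bark if my dog is inside
    p_bark_if_no_dog = 0.001     # assumed: chance of an exactly matching bark with no dog inside
    numerator = p_bark_if_dog * prior_dog_in_room
    denominator = numerator + p_bark_if_no_dog * (1 - prior_dog_in_room)
    posterior = numerator / denominator
    print(round(posterior, 3))   # ~0.999: the bark evidence overwhelms the coin-flip probability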

    This is true of the vast majority of situations, including possibility vs. possibility (which can also have two which use different relevant factors), possibility vs. plausibility, etc.--I can keep adding example after example if you would like.

    Just like how I don’t get to distinctively say “well, I just don’t find the probability of flipping the coin relevant, so I am going to say it will be heads because that is what it was last time” — Bob Ross

    The reason why you don't get to do this is if you also add, "When I'm using the hierarchy of induction."

    This is irrelevant to what I was saying: just because I can decide to not use the hierarchy that does not entail that I am determining the most cogent solution. What I can epistemically do is different from what I should epistemically do. If I reject the hierarchy in a situation where it is clearly applicable in favor of something less formidable, then I am being less cogent. I don’t get to just say “well, you can’t complain because it is my distinctive knowledge”.

    But there are risks and consequences for doing so as I mentioned in my last post.

    You can’t say there are more risks in choosing A over B if you can’t determine A as a more cogent option than B: those claims go hand-in-hand. If there’s more risk in being wrong, then I would imagine that actually factors into the cogency of the decision.

    For the record, I actually do think that comparing hierarchies is within the over-arching hierarchy of the entirety of the inductions and, thusly, is a critique of your hierarchy; — Bob Ross

    But you're not arguing for it. You're not showing or proving it Bob. That's just a statement. Its why I asked you

    I was just clarifying the record: I am not going to derail into that right now. I would much rather you just answer the question. My statement here is irrelevant to the question:

    Why do you think its more reasonable to choose H2 than H1?

    I am asking that within the context that we have two hierarchies, H2 and H1, in context S and that is it: there is no over-arching hierarchy at play here. I think I made the question very clear. So, does your epistemology account for a method of determining the cogency of the hierarchies or not?

    I get the feeling that you're more interested in simply not accepting the hierarchy then you are in demonstrating why.

    I am refraining from derailing into why I think H2 and H1 in S are within an over-arching hierarchy, H3, because my critique here applies either way: it doesn’t matter. Forget about that for now.

    In the question I asked, I am granting the hierarchy itself is stable and legitimate: the critique is of the comparison of hierarchies themselves.

    So try to answer the question first. I'm not trying to trap you, I'm trying to see if you understand all of the terms correctly, and also get a better insight into why you're making the claims that you are.

    I definitely have an answer for you, but I feel that too much of these discussions has been going back to whether you understand the actual theory as defined instead of whether the theory is flawed or illogical.

    I think my question is very clear, and I am not going to speculate at trying to provide potential solutions to your theory if you already have a solution. The critique is of your theory; now it is time for you to rebut it or concede it.

    I already stated in the context of the question that the hierarchies are legitimate: it’s the comparison of hierarchies I am asking about. Imagine, perhaps if it helps, that I don’t have a solution to provide you: so what? If I don’t have a solution, then it doesn’t change the fact that either you do or you have to concede that your epistemology fails in this regard.

    Hope the week is going well for you Bob, I'll catch your reply later!

    You too my friend! (:

    Bob
  • A Case for Analytic Idealism


    Hello Mww,

    The self that thinks transcendentally is not meant to indicate a transcendental self;
    The notion of a phenomenal appearance of a self is an unwarranted intermingling of domains, leading to methodological incompatibilities, and from those arise contradictions;
    I see no reason to agree he is clearly explicating as you say.

    How can a self “think transcendentally” but not be a transcendental self? Those sound like the same thing to me.

    This is a very subtle exposition that the doing, the methodological operation, and the talking about the doing, the speculative articulation of such method, are very different.

    True.

    When thinking, as such, in and of itself, “I think” is not included in that act, but just is the act;

    Included in what act? I didn’t quite follow this part. The “I think” would be the act itself and different from the explication (to ourselves) of the act—but that, to me, is still a concession that the “I think” is transcendental and, thusly, not the mere aggregate of appearances of a self.

    That I am conscious that, is not the same as the consciousness of;

    I didn’t follow this part either: what do you mean?

    That I do this, presupposes the conditions of the ability for this.

    Correct.

    The Part in CPR on understanding is a Division consisting of 2 Books, 5 Chapters, 8 Sections, 24 subsections, covering roughly a 185 A/B pagination range in 214 pages of text, AND…a freakin’ appendix to boot!!!!….so to say he asserts anything flat out is a gross mischaracterization on the one hand, and at the same time stands as a super speculatively affirmative argument on the other.

    I honestly don’t think his works (especially the CPR) are well written, so, yeah, I mean he tends to presuppose a lot of things and say things that can be interpreted three or four different ways without clarification.

    I must agree with ↪Janus when he says you’re not listening.

    Well, I apologize, but I feel as though I have been. For example:

    I keep saying I’m persuaded yet you keep asking why I’m convinced, which is merely an insignificant microcosm but representative of a significant part of the present dialectic nonetheless.

    This is not evidence that I am not listening: I have said many times that I understand that you find whatever convinced you insignificant but that I would like to hear it anyways (if you don’t mind). And:

    Same with requests for proofs

    This is just semantics. By ‘proof’, I think I have been clearly asking for ‘whatever persuaded you’, which is not equivalent to whatever you are using the term for with respect to theories. The bottom line is that something convinced you, and if you don’t want to share it (for whatever reason) then that is fine. I was just curious.

    Also, theories have proofs; but I would reckon you are referring to something wholly different by ‘proof’ than me.

    Bob
  • Knowledge and induction within your self-context


    Hello Philosophim,

    I think you are understanding where my problem with your methodology lies (and what it is); and I think you are conceding that it doesn’t give an actual account of which hierarchy is most cogent—which, to me, is a major problem. For the applicability of your induction hierarchy itself is minuscule in the practical world: very (and I mean very) rarely do the possible inductions use the same exact relevant factors (i.e., essential properties); and, consequently, your hierarchy, and your methodology in general (since it doesn’t account for a viable solution for comparing them), is only applicable to one grain of sand on an entire beach. Sure, we can dissect that one particular grain of sand and understand the cogency hierarchy within that oddly specific scenario, but it isn’t relevant to the other millions of grains of sand. In order for an epistemic methodology to be viable, I would argue, it must, at the very least, be able to provide what is most cogent to hold generally in the vast majority of cases—not a small minority.

    Your issue is you are attributing what people decide as distinctive knowledge, and questioning what level of detail people should choose.

    I wouldn’t count it as valid to shift the determination of cogency to distinctive knowledge; that’s like me saying that people can choose what level of detail to use when it comes to the hierarchy itself—e.g., I choose to exclude probabilities, so I choose this possibility. No, we both agree there is actually something most cogent to choose within each hierarchy: I don’t get to shift that into distinctive knowledge. Likewise, there is something most cogent in relation to the hierarchies we generate for the scenario: I don’t get to shift that into my distinctive knowledge. Just like how I don’t get to distinctively say “well, I just don’t find the probability of flipping the coin relevant, so I am going to say it will be heads because that is what it was last time”, I don’t get to distinctively say “well, I just don’t find the designs relevant in this case, so I am going to go with the probability of 51% that it is a BWOA”. Epistemology doesn’t leave these kinds of cogency decisions up to the user to arbitrarily decide.

    The answer I gave in the paper was, "Whatever outcomes would best fit your context."

    To clarify, this means that the crux of the cogency determination in the vast majority of cases is left up to the person to arbitrarily decide for themselves; which restricts the scope of your methodology to only oddly specific examples. Again, we normally do not face possible inductions that use the exact same relevant factors—the real world doesn’t work like that.

    If a bear is rushing quickly towards you in the woods, you don't have a lot of time to test to see if the bear is rushing towards you or something behind you. Another thing is to consider failure. Perhaps there's a lot pointing towards the bear not rushing towards you. But if you're wrong, you're going to be bear food. So maybe you climb a tree despite your initial beliefs that its probably not going after you.

    You aren’t giving a general account of what is most cogent: you are just saying that the person can do whatever they want, and that’s what is most cogent. I am not saying you have to have an incredibly detailed equation for determining cogency—but you should have a general account.

    For example, I actually think that the best criterion of knowledge is a justified belief, and the factors for justification are: internal coherence, external coherence, parsimony, logical consistency, reliability of supporting data, intellectual seemings, and explanatory power. These are NOT super specific criteria, but I am not saying “you do you, and that’s what’s most cogent”: epistemologies are supposed to give general guidelines for how to acquire knowledge, but yours seems to revolve around a very narrow scenario where the inductions have the exact same relevant factors.

    If you go back to the hierarchy now, you'll understand that your question is not about the hierarchy, its about determining what would be best, to include more or less details in your assessment of the situation

    For the record, I actually do think that comparing hierarchies is within the over-arching hierarchy of the entirety of the inductions and, thusly, is a critique of your hierarchy; but I understand you do not see it that way, so I am conveying my point in the form of “comparing hierarchies”.

    So I'm going to put the issue back to you. Why do you think its more reasonable to choose H2 than H1?

    I am not going to answer this yet, because that is what I was originally asking you and you still haven’t answered. So far, all I am understanding you to say is that your methodology doesn’t tell them what to do (i.e., “whatever fits best for you in the context”).

    Can you do so within the understanding of distinctive and applicable knowledge?

    I don’t use your exact epistemology, so that is why I am asking you for clarification; but I can attempt an answer (after yours) of a potential solution under your epistemology by my lights.

    Bob
  • A Case for Analytic Idealism


    Hello Mww,

    If you would like to not give a proof, then that is perfectly fine. But, to clarify, I am saying I would like to hear your proof (or argument) for what convinced you of it.

    My conviction regarding the fact of the categories is irrelevant. I’m sufficiently persuaded by the affirmative argument to think he’s come up with a perfectly fascinating metaphysical theory. That’s it

    It’s not irrelevant to me, and what is the ‘affirmative argument’? To me, Kant just asserts it flat out and super speculatively in CPR. I’m curious what convinced you, as you clearly interpreted the text differently than me.

    On your “Of The Originally Synthetical Unity of Apperception” quote, the very next line after what you posted, shoots your argument in the foot.

    I don’t think it did. Here’s the whole snippet:

    The “I think” must accompany all my representations, for otherwise something would be represented in me which could not be thought; in other words, the representation would either be impossible, or at least be, in relation to me, nothing. That representation which can be given previously to all thought is called intuition. All the diversity or manifold content of intuition, has, therefore, a necessary relation to the “I think”, in the subject in which this diversity is found. But the representation, “I think” is an act of spontaneity; that is to say, it cannot be regarded as belonging to mere sensibility. I call it pure apperception, in order to distinguish it from empirical; or primitive aperception, because it is self-consciousness which, whilst it gives birth to the representation “I think” must necessarily be capable of accompanying all our representations...the unity of this apperception I call the transcendental unity of self-consciousness, in order to indicate the possibility of a priori cognition arising from it

    He is clearly explicating that there is a phenomenal appearance of a self and a transcendental self; and that the transcendental self is the necessary precondition (i.e., subject: mind) for the former.

    Bob
  • A Case for Analytic Idealism


    Hello Janus,

    The categories of understanding are identifiable simply by reflecting on the ways we experience and judge things; nothing at all to do with the thing-in-itself.

    All I was asking Mww for, in the quote you made of me, was to briefly expound the argument for the twelve categories that he holds; I was not asking him to prove that they are things-in-themselves or a part of a thing-in-itself.

    Now, you are correct that I do think that the 12 categories, if they existed, would be a part of a mind which, in turn, would be a thing-in-itself; but that is irrelevant to what I was asking Mww for.

    The problem is, Bob, I don't think you are listening to anyone else.

    I hope that is not the case: I do my absolute best to hear everyone’s perspectives; but I just think it is wrong to claim there are 12 categories of the understanding and not admit to ourselves that the understanding is either (1) an illusion or (2) a part of a thing-in-itself.

    I appreciate your chiming in and please continue to do so as you deem fit!

    Bob
  • Knowledge and induction within your self-context


    Hello Philosophim,

    But try as well to be as critical to your own argument too. You keep misunderstanding the hierarchy.

    Let me try to clarify what I am understanding you to be saying and then explicate the problem I am bringing up; because I think I am agreeing with you more than you might be realizing.

    You are saying that in this scenario the two inductions (i.e., a probability and pattern) are within separate inductive hierarchies because they do not use the same essential properties (i.e., what I call “relevant factors”). Thusly, they cannot be compared to each other. It would look like this:

    Hierarchy 1 (H1): probability
    Hierarchy 2 (H2): pattern

    In the scenario, there are no other inductions that use the same essential properties (i.e., relevant factors) and, since there are only two inductions given, the two hierarchies each contain only one induction; which entails that within each hierarchy its induction is by default the most cogent to hold.

    I agree with you here (in terms of what I just explained) and correct me if I am getting it wrong.

    Now, perhaps to explicate this clearer, I am going to distinguish “comparing inductions” from “comparing hierarchies”. We both agree that both inductions are within their own hierarchies and thusly cannot be compared; however, I am talking about comparing hierarchies.

    In the scenario, which let’s say is context S, there are two hierarchies, H1 and H2. Although you can’t compare the inductions, you have to compare the hierarchies to decide which is most cogent to go with (because it is a dilemma: either use the probability or the pattern—there’s no other option). Now, if we are to claim that in S H2 is more cogent than H1 (and thusly go with the pattern), then there must be some sort of criteria we used to compare H2 to H1 in S. If not, then we cannot claim either is more or less cogent than the other and, consequently, cannot claim that using the pattern is more or less cogent than using the probability; and if that is the case, then the decision between H2 and H1 is arbitrary.

    Now, the actual crux of determining the cogency, because otherwise it is arbitrary, is comparing H2 to H1 in S. If that is the case, then the hierarchy analysis that you keep giving, which would apply to H2 and H1, isn't doing any actual work in evaluating in S what is the most cogent decision to make. Do you see what I mean?

    I think you missed what I did then. I didn't compare the two different property setups, I simply overlapped them. I've said it several times now, but its worth repeating. The probability in the first case is regarding an identity with less essential properties than the second case. So I can very easily say, "All boxes have a 49/51% chance for air/not air". Since the probability does not consider X/Y pattern, it does not tell us the probability of air/not air for an X/Y pattern. So if we disregard the X/Y, we hold that probability. To help me to see if I'm communicating this correctly, what is the problem with this notion alone?

    I don’t have a problem with it, because you are just separating them out into different hierarchies, but then that is where the issue I am talking about arises (as explicated above).

    You can say there is a probability and a pattern and they don’t use the same essential properties (i.e., relevant factors), but you still can’t evaluate which is more cogent to use in S because you can’t compare H2 to H1 without it being arbitrary (so far) under your view.

    You seem very hung up on this idea that a probability is always more cogent then a lower portion of the hierarchy no matter the circumstance of context.

    To clarify, now that you clarified that you are separating them into different hierarchies, I am saying that H2 is more cogent than H1 in S.

    I understand that within H1, for example, if there was a pattern and a probability, then the probability is more cogent in H1: but you’ve now expanded this into multiple hierarchies.

    Second, I'm going to change the odds for a bit because we need to get you off this idea that the odds being miniscule make a difference.

    It makes a difference when you are comparing hierarchies: H2 to H1. All you are noting here is that within H2 or H1 the miniscule odds do not matter.

    Does he include the X/Y design as part of his potential identification of whether the box has air or not? Let say Jimmy's not very smart and doesn't see a correlation of the X/Y pattern with air/not air. Jimmy has two options then.

    You seem to try to resolve this issue I am talking about by just leaving it up to the person, but I am saying that it actually is more cogent to use H2 instead of H1 in S. Jimmy can disregard the pattern all he wants, and he would still be making an irrational decision.

    I think all your scenarios with Jimmy miss the point, because you are just, again, elaborating on what is most cogent within a hierarchy while overlooking that you have two equally cogent hierarchies according to your position: you can’t claim, without further proof or elaboration, that H2 is more cogent than H1 in S. I am trying to see if you think it is an arbitrary choice (i.e., left up to the person to decide) or if you think it is actually more cogent to choose H2 over H1 in S.

    I look forward to hearing from you,

    Bob
  • A Case for Analytic Idealism


    Hello Mww,

    My interest here is waning , sorry to say.

    Absolutely no worries! We can stop at anytime that you deem fit.

    Convinced of a proof grounded in an idea? Nahhhh….no more than persuaded, and that in conjunction with his claim that he’s thought of everything relevant, and needs nothing from me to complete the thesis. For me to think he could have done better, or that he trips all over himself, implies I’m smarter than he, which I readily admit as hardly being the case.

    I didn’t see a proof in that quote of the 12 categories of the understanding but, rather, a summary of transcendental philosophy.

    Funny, though, innit? To help you understand? You realize, don’t you, that is beyond my abilities? No matter what anybody says in attempting to help you, you’re still on your own after they’ve said whatever it is they going to say. And because you’ve rejected some parts, it isn’t likely you’re going to understand the remainder as a systemic whole, which necessarily relates to the parts rejected.

    Come on Mww! (; If you are convinced that there are twelve categories, then you should be able to articulate the proof that convinced you. I am not asking for a super meticulous exposition of all of transcendental philosophy: I already understand the basic transcendental idealistic context for the argument.

    I am just curious, within that context, why you would think there are these twelve categories. Why not, for example, hold that there is only one: the PSR of becoming? Why place the function and purpose of reason in the understanding, such as concepts?

    Bob
  • Knowledge and induction within your self-context


    Hello Philosophim,

    I see what you are saying, but the problem is that there is no means of determining the cogency when comparing them:

    After pulling literally two billion boxes and noticing there was a 100% match of design to air or not air, it seems silly not to consider it.

    According to the entirety of your methodology (and not just the hierarchy), there is no justification for this claim you have made here. You can’t say it is less cogent, even when it seems obvious that it is, for a person to say “no, it doesn’t seem silly to just go off of the probability”. Without a clear criterion in your view, the vast majority of scenarios end up bottoming out at this kind of stalemate (because the hierarchy is inapplicable to the situation).

    You're still hung up on comparing that pattern to the probability though. You can't because you're not considering the same properties in both instances. It doesn't work that way. Stop it Bob. :D

    I totally am (; I mean:

    The most rational is to take both into account and assume that 49% of the boxes we find will be with air, and we believe that all of these boxes will have the X pattern.

    You can’t say this if you generated two separate, incomparable hierarchies and there is nothing else in the methodology that determines the cogency of inductions! Philosophim, you are admitting it is more cogent even though there’s absolutely no justification in your methodology for knowing that!

    I 100% agree with you that it is most rational, but the problem in your view is you cannot justify it.

    Let’s make the danger of having no means of determining the cogency of the inductions clearer in this scenario: imagine that if you guess incorrectly they kill you. Now, we both agree that the obviously more cogent and rational move is to bet that it is a BWA; but imagine there’s a third participant, Jimmy, who isn’t too bright. He goes off of the probability. Now, he isn’t misapplying your methodology by choosing to go off of the probability: he carefully and meticulously outlines the hierarchies involved in the context just like you, and realizes (just like you) that he cannot compare them and is at a stalemate. He decides that he will use the probability.

    We are both witnessing this irrational decision, and we want to help Jimmy not make a colossally dangerous mistake here; but, according to your methodology, what are we to cite as the error in his reasoning? What is it, Philosophim?!? Absolutely nothing. He did everything by the books.

    The fact that people can misunderstand, misuse, or make mistakes in applying a methodology is not a critique on the methodology. Do we discount algebra because it takes some time to learn or master? No.

    But, Philosophim, Jimmy isn’t misapplying your methodology—it just doesn’t afford an answer to what is the most rational and cogent decision between the inductions! The mistake is in the lack of usefulness of the methodology, not Jimmy’s application of it!

    What justification would you give to Jimmy to try and save him?
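
    For what it is worth, here is the sort of rough arithmetic I would cite to him (a sketch of my own with assumed numbers, not something your methodology supplies):

    # Illustrative sketch only: Jimmy's survival chances under each bet, assuming the
    # design evidence makes "box with air" (BWA) all but certain (the 0.999 is assumed).
    p_bwa_given_design = 0.999                       # assumed posterior once the pattern is weighed
    p_survive_betting_bwa = p_bwa_given_design       # he lives if he bets BWA and it is a BWA
    p_survive_betting_bwoa = 1 - p_bwa_given_design  # the "go with the 51% probability" bet
    print(p_survive_betting_bwa, round(p_survive_betting_bwoa, 3))  # 0.999 vs. 0.001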

    Bob
  • Knowledge and induction within your self-context


    Hello Philosophim,

    I understand better what you are arguing, but, in light of it, I think, by my lights, that it is a concession that the hierarchy does not function (as I thought it was intended) in this scenario; and here is the crux of it:

    That depends on what you find essential in pulling the boxes.

    Correct me if I am wrong, but you seem to be admitting that these two inductions (which pertain to answering the same question in the same context) cannot be evaluated with respect to each other to decipher which is more cogent because you are generating two different hierarchies for them; and you are expressing this in the form of saying that it is up to the person to define what they think is essential. However, this is very problematic.

    Firstly, unless there is some sort of separate criterion in your methodology for what one should consider essential, it seems that, according to your methodology, what is essential is a truly arbitrary decision. I am ok with the idea of letting distinctive knowledge be ultimately definitional: but now you are extending it to applicable knowledge.

    Secondly, because it is an arbitrary decision whether one wants to include the X and Y designs in their consideration, the crux of the cogency of their induction is not furnished nor helped by your induction hierarchy and, thusly, your methodology provides no use in this scenario. I think you are agreeing with me here implicitly by admitting that you had to generate two competing but completely incomparable hierarchies. This is a clear demonstration of the inapplicability of your methodology for determining the cogency of inductions.

    Thirdly, I find that it would actually be less cogent to go with the probability (in that scenario), and someone merely saying they don’t want to include the designs as essential doesn’t seem like a rational counter. The strong pattern, in this case, clearly outweighs the minuscule probability advantage. So I think that, as far as I am understanding it, using this methodology in this scenario can lead people to making an irrational decision (in the case that they arbitrarily exclude their knowledge of the patterns).

    To me, this is the cost of treating these two inductions as incomparable, and it seems far too high to accept.

    Would you at least agree that this scenario demonstrates how your methodology affords no help in some scenarios?

    The bridge between them cannot be made (according to you) and so you have to arbitrarily pick which hierarchy to use.

    Bob
  • Knowledge and induction within your self-context


    Hello Philosophim,

    An example of the hierarchy
    Probability 49/51% of getting either A or B.
    Pattern I pull 1 billion A's and 1 billion Bs.

    Another example of the hierarchy:
    Probability of getting either A or B with design X is 75% or Y at 25%
    Pattern I always pull an A with X, and always pull a B with Y

    An example that is NOT the hierarchy:
    Probability 49/51% of getting either A or B.
    Pattern I always pull an A with X, and always pull a B with Y

    I understand what you are conveying, but this just segues into my questions: if you are saying that the inductive hierarchy doesn’t apply (like in your last example above), then, by my lights, it is useless (since it cannot be applied) for practical examples. My scenario is an example of that. But, to not get ahead of myself, please answer my previous questions directly.

    Bob
  • Knowledge and induction within your self-context


    My internet is down so I'm having to type these on the phone for now.

    Absolutely no problem! I will answer your questions, but my questions aren't related directly to yours; so if you could answer them as well that would be much appreciated!

    Take the situation with X and Y properties, then come up with a probability, a possibility/pattern, and a plausibility. Add no other properties, and remove none. Then show if a lower hierarchy results in a more cogent decision.

    My situation is an example of this, and to keep it simpler I excluded a plausibility: do you want me to add in a plausibility as well? I think it will just clutter up the discussion adding it in.

    After, do the same as above, but this time add in the X/Y consideration for all the inductions. All the inductions must now include the X/Y.

    To do that, in my scenario, we would have to add in the idea that each box has a 50.001% chance of being a BWOA with design Y, and you've experienced design X <-> BWA and design Y <-> BWOA a billion times. I just don't see the relevance of this, as it is no longer the same scenario, but that is my answer.

    Bob
  • Knowledge and induction within your self-context


    And I should clarify for question #3 that by "question" I am referring to the same one asked within the context. Of course, I could ask question X in context Y and question X in context Z, but I am asking you if you think that all possible inductions formulated for answering question X in context Y are within the same context. If that makes any sense.

    I will refurbish it in the original post.
  • Knowledge and induction within your self-context


    Hello Philosophim,

    You usually do fair readings, but this time you're not. I've told you how the theory works, you don't get to say my own theory doesn't work the way I told you!

    I apologize if I am misunderstanding you! To better understand what you are saying, let me ask you these:

    1. In the scenario I gave, is the possibility or the probability what you would go with (or perhaps neither)?

    2. Do you agree with me that if you decide one over the other that you are thereby comparing them?

    3. Do you agree that all the possible inductions for a question within a context are thereby within the same context as each other?

    In this case, you're telling me the theory I made should be something different. That's a straw man...But insisting it is something it is not is wrong.

    Although I think I understand what you are saying, it isn’t necessarily a straw man to point out that a theory needs to be revised; and that is what I am trying to convey.

    From your perspective, you just disagree with that; but that doesn’t make it a straw man argument.

    I am going to stop here for now because I need to know your answers.

    I look forward to hearing from you,
    Bob
  • Knowledge and induction within your self-context


    Hello Philosophim,

    I think we have finally pinned down the disagreement, so I am going to focus on that. It can be summarized from your post as:

    If you introduce new properties which are of consideration within the probability, that is a new context.


    A^B != A^B & X^Y

    You can have two inductions which use different relevant factors to infer a solution to the same question in the same context. The use of different relevant factors does not change the context.

    The context in the scenario includes both the information that the probability of pulling a BWOA is 51% and the fact that you’ve experienced the design correlation, which can be used to produce two inductions that use different relevant factors in the context to derive a solution.

    To summarize:

    1. The context does not split into two contexts in virtue of the possible inductions using different relevant factors.

    2. You have to compare them, because you must either use one or the other as your inductive inference.

    #1, A ^ B & X^Y are a part of the same context: the inductions are what use varying aspects of that context. They are not two separate contexts themselves if they are a part of the same scenario and answering the same question within that scenario.

    #2, It is very clear in the scenario that you have to compare them, so if you are saying that the induction hierarchy is inapplicable then it is useless for practical situations and thusly warrants a new methodology.

    You do agree that you are comparing them by saying that you think the pattern is most cogent, right? If so, then you are contradicting your claim that they cannot be compared (because they are different contexts).

    You are not asking the same question

    Perhaps the scenario wasn’t clear enough: within the scenario you know (1) the probability of pulling a BWOA and (2) a strong correlation with the designs; both are within the same scenario. The question that is asked is “does this randomly pulled box have air?”, and that question is within that same scenario. Since both bits of knowledge are within the same scenario, you can use either to induce a conclusion, but not both (because they contradict each other). There’s no room for you to claim that it isn’t asking the same question.

    Obviously, the inductions themselves are using different relevant factors, and so they don’t reach the same exact conclusion (in this case) nor do they have the same reasoning behind them; but that doesn’t matter: you have to compare those two inductions to see which is more cogent to hold.

    Think of it this way. You say that the pattern is more cogent in the scenario, but I say it is the probability. If you also hold that you can’t accurately compare them, then you can’t claim my conclusion (to the same question within the same scenario) is less cogent. As a matter of fact, you wouldn’t be able to justify your own claim that it is cogent at all. Do you see the problem here?

    Otherwise its just a strawman argument.

    I honestly think this is an iron man argument, and your hierarchy is being demonstrated to break here. I think you agree that the pattern is more cogent, so we are getting closer to seeing why.

    You are just noting that the inductions themselves don’t use the same factors, but that is irrelevant to the dilemma: which are you going to use to make your educated guess? The one that is more cogent. But, wait, according to you, they can’t be compared! So, according to you, you could only say it is undeterminable. Do you see what I mean?

    Simply prove the coin flip example wrong, and then you'll be able to back that its not proven

    It is unproductive to say this. I already addressed it: in the coin flip example you are right, but it doesn’t imply that probabilities are more cogent than possibilities. The antecedent does not imply the consequent that you want it to.

    For example, if I claimed “giving someone a hug is always worse than killing them” and tried to prove it with the example:

    1. A person is skinning your wife alive.
    2. You go give them a hug.
    3. That was worse than if you had killed them (to defend your wife).

    You could agree with the example and disagree that it proves the claim I made: no problem.

    In logic, you are committing the fallacy of moving from a claim about some to a claim about all, and I can demonstrate it in predicate logic:

    ∃y∃x (Prob<x> & Poss<y> & Better<this: x, than: y>) ⊬ ∀y∀x ((Prob<x> & Poss<y>) → Better<this: x, than: y>)

    Thus, I can completely agree with you on the coin example and still claim that it is insufficient to prove that all probabilities are better than possibilities. Asking me to disprove it disregards what I am telling you my position is: you would have to provide a proof that every probability is better than a possibility, which is clearly not afforded by your example.

    After, do the same as above, but this time add in the X/Y consideration for all the inductions. All the inductions must now include the X/Y.

    They don’t. In real-world practical (and theoretical) scenarios, there is a range of possible inductions one could use that are (1) competing and (2) using different factors of that context. One has to pick one as the most cogent inference. Period. If your hierarchy cannot handle this scenario, then it isn’t complete enough.

    Bob
  • Knowledge and induction within your self-context


    Hello Philosophim,

    It sounds like you are in agreement with me that the best choice in the scenario is to use the pattern, but you disagree that it is an example of a possibility outweighing a probability: is that correct?

    You say:

    If you do not consider the X and Y properties as relevant, you choose the probability. If you consider the X and Y properties as relevant, you do not have a probability that considers the X and Y properties. Therefore you choose the pattern. You're comparing an apple to an orange and trying to say an orange is more rational. You need to compare two apples and two oranges together.

    Which indicates to me you are agreeing with me that the pattern is the most cogent choice in the scenario, but you are disagreeing whether that conflicts with the probability. Is that right?

    We don't compare the two because they don't apply to the same situation, or the same essential properties. We compare coin flip with coin flip with what we know, and sunrise to sunrise to sunrise with what we know. The hierarchy doesn't work otherwise. You're simply doing it wrong by comparing two different identities Boxes without X and Y, and boxes with X and Y, then saying you broke the hierarchy.

    I honestly don’t understand how I could be misusing the hierarchy if the two options are a probability or possibility (fundamentally).

    The probability and the possibility are both being used to infer the same thing, so it is disanalogous to:

    Probability: A coin has a 50/50 chance of landing heads or tails.
    Possibility: The sun will rise tomorrow

    The implication with your example is that they are completely unrelated, but the probability and possibility in my example are both related insofar as they are being used to induce a conclusion about the same question. That’s why you have to compare them.

    Another way of thinking of this is that any induction used to infer a conclusion is related to other possible inductions thereof, because they fundamentally are trying to answer the same question. If they were completely unrelated (like you would like me to believe), then one would not be capable of deciding which induction is most cogent to hold.

    If you are right that the probability of pulling a BWOA and the possibility of a BWOA having design X are completely unrelated, then you would not be able to determine which induction to use in the scenario, because that requires you to compare them, since they are both being used to make an induction about the same question. It’s impossible in the scenario for them to be completely unrelated!

    We don't compare the two because they don't apply to the same situation, or the same essential properties.

    Just to home in on this: they absolutely do!!! The question is “does the box have air?” and they are both within that situation that I outlined: to answer that question you must compare them or answer with “undeterminable”. When I said “throw your hands up in the air”, I didn’t mean that you don’t like it; I meant figuratively (in a fun way) that you cannot determine which induction to use in the scenario if you are saying those two inductions (which are used to answer the same question) are completely unrelated. There would be, in that case, two inductions that could answer the question which cannot be evaluated as more or less cogent than the other.

    The point was to demonstrate that patterns are less cogent than probabilities. We both agree on this then

    We don’t agree on this. All your example demonstrated was that patterns extrapolated from random pulls from a sample are not more cogent than probabilities pertaining to that sample. That is not the same thing as proving that patterns are less cogent than probabilities.

    Bob
  • Knowledge and induction within your self-context


    Hello Philosophim,

    I hope your Saturday is going well Bob!

    To you as well!

    Disregarding your first point for a minute, this is what I'm trying to inform you of. A relevant factor is an essential property. A non-relevant factor is a non-essential property in regards to the induction. Anytime you make the design relevant to an induction, a pattern in your case, it is now a relevant, or essential property of that induction. Again, can you make the pattern induction if you ignore the design? No. Therefore it is an essential property of that pattern. .

    I don’t have a problem with this: you seem to just be noting that I wouldn’t have made that exact inductive inference without the pattern, which, to me, is a trivial fact. If there are three cards (2 aces and 1 king) and I hedge my bets that I will randomly pull an ace because there is a roughly 66% chance, then, of course, I could not have made that exact inductive inference without the probability, because that is what I used: but, my question for you is, why explicate this? What relevance does this have to the scenario I gave you?

    I agree that the calculated probability (which is not an inductive inference) is not considering Y and X while the inductive inference about X and Y is; but this doesn’t make it an unfair comparison; and the scenario hasn’t changed because of it: there is a probability you are given and there is an inductive inference you could make either (1) based off of that probability or (2) off of the experiential pattern. In this scenario, they are at odds with each other, so you can’t induce based off of both (as they have contradictory conclusions): so you have to compare them and determine which is more cogent to use. If you think this is an unfair comparison, then please elaborate more on what you mean.

    Also, a real example, like my scenario, can’t be negated by saying it is an “unfair comparison” because, in reality, you would have to compare them and choose (as described above). In the scenario, you wouldn’t just throw your hands up and say “UNFAIR COMPARISON!” (:

    Probability 49/51% of getting either A or B.
    Pattern I pull 1 billion A's and 1 billion Bs.

    This is disanalogous because in the scenario you didn’t pull design X → BWA a billion times (and ditto for the other one). I agree that if that were the case then the probability is a better pick.

    The point with the scenario is that you are coming in with experiential knowledge that shifts how you will play the limited 100-box sample game. You, in this example, are talking about knowledge you get by randomly pulling within that limited, small-sample game.

    Probability of getting either A or B with design X is 75% or Y at 25%
    Pattern I always pull an A with X, and always pull a B with Y

    This is disanalogous for the exact same reasons as the above one.

    Probability 49/51% of getting either A or B, (X and Y not considered).
    Pattern I always pull an A with X, and always pull a B with Y (X and Y considered)

    Yes, and you can’t just say “unfair comparison”: in the scenario, like the real world, you have to compare them: I specifically made it so that you have to compare them to make an informed decision. You are going to induce that it either has air or not based off of either the probability or the pattern.

    Likewise, if you are saying that the probability is a better choice in the scenario, then you are thereby conceding that you can compare them.

    Why would it be more cogent to predict the next coin is heads rather then saying it could be either on the next flip?

    It wouldn’t. If all you know is that you are performing a 50/50 random coin flip, it doesn’t matter how many times you get heads: it’s the same probability. This is disanalogous to the scenario because your knowledge of the design correlations is not derived from the sample size.

    You are not comparing inductions properly. The first induction does not consider X and Y. You cannot say a later induction that does consider X and Y is more cogent than the first, because the first is a different scenario of considerations

    The scenario is the exact same: they are both a part of that scenario. In it, you clearly have to choose which you think is more cogent to go off of. You can’t just throw your hands up in the air.

    I hope this finally clears up the issue!

    I wish it did, but I still don’t think you have addressed the scenario properly. You seem to keep conflating it with a straightforward comparison of a probability vs. knowledge acquired from randomly pulling from a sample: obviously the former is more cogent. There’s no debate in that.

    This has forced me to be clearer with my examples and arguments, and I think the entire paper is better for it.

    Likewise, this has made me be clearer in my scenario (;

    Bob
  • A Case for Analytic Idealism


    Hello Mww,

    I only said what my mind is not. I’ve said before I don’t hold that minds are anything beyond an object of reason, which negates that I may be what’s referred to as a substance dualist.

    I see. Would you say that your mind does not exist in the things-in-themselves? If so, then what other possible options (to you) are there for where it “resides”?

    Ok. Why must it be? For a mind, or something else which serves the same purpose, to be a thing-in-itself makes necessary it is first and foremost, a thing. Says so right there in the name.

    It has to be a ‘thing’ (either of a mental or physical substance) if it is to be distinguishable from nothing: only things which do not exist are not of a substance. Are you saying that ‘mind’ is just an emergent property of something else (that is the thing-in-itself)? I am having a hard time pinning down what you are saying here. Bottom line, to me, the mind, or whatever it is emergent from, must be traced back to something which is a thing-in-itself. If it is not itself the thing-in-itself, then it is an illusion. If it is neither an illusion nor a thing-in-itself, then it doesn’t exist.

    This looks like a way to force acknowledgement for the existence of a mind.

    It’s meant to force acknowledgement that the mind is of something. Either it is the thing-in-itself, emergent from a thing-in-itself, or it simply doesn’t exist.

    The thing-in-itself is a physical reality

    How could you know that if things-in-themselves are purely negative conceptions?

    Which still requires an exposition for mental substance such that mind can emerge from it.

    One could claim that something is eternal and of a mental substance. It doesn’t necessarily have to be emergent from something else. I am just trying to pin down what you think a mind is, and so far it seems like just ‘reason’ and ‘the unknown’.

    Are you using Descartes for that exposition? It’s in Principia Philosophiae 1, 51-53, 1644, if you want to see how yours and his compare.

    Thank you for the reference, but, unfortunately, I have not read that, nor was I able to parse your citation to find it in a free PDF version of the book. Could you perhaps include the excerpt if you already know what you are referencing? Otherwise, no worries.

    I am not accounting for reality; I’m accounting, by means of a logical methodology, reality’s relation to me.

    Yes, but you are fundamentally saying that reality, true reality, is beyond our epistemic limits. And this entails a long list of, in my opinion, unparsimonious positions (e.g., we cannot know of object permanence, minds, one’s mind being a representative faculty, etc.).

    But I know with apodeictic certainty the conditions under which the relations logic obtains, and from which my experiences follow, do not contradict Nature, which is all I need to know.

    How do you know it doesn’t contradict nature if you can’t know anything about true nature? These are the kinds of weird implications I would see if I were to commit myself to transcendental idealism.

    Do you see that neither of your follow-up’s relate to what I said?

    No I don’t see that. But let me try to address:

    Possible knowledge, knowledge not in residence, cannot be from experience that is.

    Why can possible knowledge not be from experience? Wouldn’t you have to know that your mind isn’t producing the objects? And wouldn’t that require knowledge of the things-in-themselves?

    To experience is not necessarily to know, but to know is necessarily to experience.

    Agreed.

    Justification for claiming things-in-themselves are being represented in experience, should never be a question up for debate, and if it does arise as such, it can only be from a different conception of it.

    Why??? This is just a flat assertion: I am asking exactly that! If you can’t know anything about the things-in-themselves, if you are truly trapped within your phenomenal experience, then how would you even know there are things-in-themselves? It seems like you are just appealing to an intuition here. I have no problem with that, BUT I can do the same exact thing about things-in-themselves.

    To represent a thing-in-itself in its original iteration, is self-contradictory, insofar as the thing-in-itself is exactly what is NOT developed in the human intuitive faculty for representing sensible things.

    Again: why??? It’s just flatly asserted that we can’t question (e.g., ‘it’s self-contradictory’) that there are things-in-themselves, but all of my knowledge that I am representing something is phenomenal! I thought phenomena shouldn’t tell us anything about the things-in-themselves? The schema Kant has come up with here undermines itself, to me.

    Then why isn’t such cogent account given by the understanding that’s already dictated our understanding of the world?

    What do you mean? I didn’t follow this part.

    So it turns out, not only does reason ask understanding to bend its own rules, but justifies the request because it has already bent its own principles

    If I am understanding you correctly, then you are using the “understanding” vs. “reason” semantics from Kant (which is fine). If so, then I would say that (1) your ability to acquire the knowledge of the ‘understanding’ is just metaphysics (and is no different than what I am doing) and (2) I reject Kant’s formulation of it as merely an exposition of ‘reason’ as opposed to the ‘understanding’. Maybe if I was convinced that we really had these twelve categories of the understanding and such, then I would be metaphysically cut off from further inquiry beyond that.

    Maybe expound whatever proof you found convincing for Kant’s twelve categories: that might help me understand better.

    If that happens, there are no checks and balances left at all, and there manifests an intellectual free-for-all where anything goes, an “…embarrassment to the dignity of proper philosophy….”, so those old-time actual professional philosophers would have us know.

    As of now, I don’t buy this. We use parsimony, coherence, intuitions, reliability, consistency, empirical adequacy, etc. and this doesn’t require us to limit ourselves to transcendental investigations.

    Bob
  • Knowledge and induction within your self-context


    Hello Philosophim,

    It is correct that the essential properties of a known identity, and the essential property of an induction about that identity are not the same.

    It seems now that you are referring to two things by “essential properties”: what is essential to one’s induction about what something is, and what is essential to what that something is. Is this correct?

    Firstly, when I say that the design is not an essential property of what they are, I am not referring to “the essential property of an induction about the identity” of them: I am referring to “the essential properties of a known identity”--in this case, the box.

    Secondly, I am also not even claiming that the designs are essential to inducing what box it is (which would be the latter thing in your quote), because that would imply that if I didn’t know the design then I couldn’t induce at all what box it is—which is clearly wrong. I am saying that it is a relevant factor. To say it is essential means that I couldn’t decide what it is (inductively) without knowing the designs. If I didn’t know the designs in the example (with all else being equal), then I would go off of the probability.

    If by “essential property of the induction” you just mean that I am using designs to make my induction, then I have no problem with that; but that has nothing to do with the substance of the scenario nor does that entail that it is essential to the induction. The point is that the colossally observed pattern of design → box, in this particular context, outweighs going off of the minuscule probability.

    If you agree that there is a separation between these two things (which you seem to agree in the above quote), then I don’t know why you said:

    True. But if you're going to later include, "I believe property X is a property that indicates it has air," then you've made it an essential property to identifying whether it has air. Basically you're saying its not an essential property, but then in your application, it is

    It is not an essential property of the identity of the boxes, but it is relevant to inducing which box it is. You are simply noting, at a minimum, that the designs are relevant to inducing which box it is and then conflating that with my claim that they are not essential properties of the identity of the boxes.

    If it was non-essential, then it would have nothing to do with your induction of whether the box has air or not.

    This is false. Something being essential means that it cannot be removed, so to say that it is “essential to an induction” is to say you cannot induce either way without that essential factor. However, if the situation changes, then the induction changes. For example, in the scenario where I’ve experienced gravity pulling things to the ground for 40 years, I am going to induce that the next thing I drop will fall. However, if it were the case that, in those 40 years, I’ve experienced gravity not working 500,000 times more often than working, then I would say it won’t fall (all else being equal). Gravity working in the first example is not an essential property of my induction of whether the object will fall when I drop it, because my induction would change if the factors changed. Sure, I wouldn’t have made the same inductive conclusion if those factors changed, but it is not essential to know gravity is working all the time to be able to make an inductive inference in this case.

    If you include the "non-essential" property as essential for your induction to the outcome of the box, then it is no longer non-essential to your belief in the outcome of the box's air or not air identity.

    This is irrelevant. Again, you are just noting that I am using the design in my induction, which doesn’t negate the fact that the design is not an essential property of the identity of the boxes.

    If that’s all you are saying, then it doesn’t matter for the scenario. You can’t somehow deduce the probability of the designs in the sample of 100, and that was the whole point. Since the person calculated the 51% probability off of the essential properties of the identity of the boxes, which don’t include designs, there is no way to know probabilistically which design they will have: it is an induction.

    Regardless of the pattern of design, we still know that any box has a 51/49 probability in regards to its air. But if we later consider the design in believing whether the box will have air or not, its now essential in that belief

    It is not essential to the belief about the probability, because it wasn’t used in the calculation of it. Just because you use the designs in your inductive inference, does not mean it has any relation whatsoever to the probability of pulling a type of box. That’s a non sequitur.

    You don't get to decide what's essential or non-essential in application. In application, the design is now essential in your belief on whether it holds air or not. You can deny it, but you haven't proven it yet.

    It is relevant to whether the box has air or not; and this has nothing to do with whether it is an essential property of the identity of the boxes. So I am failing to see what your point is here. You seem to have veered off into an unrelated observation (but I could be wrong). The probability is still 51% that you get a BWOA, and that BWOA could have design X (despite you experiencing strong evidence to support otherwise).

    And the miniscule difference is irrelevant. Its still 1% more rational. Or .0005% more rational.

    And this is really what is under contention: for you, it seems as though really strong inductive observations don’t matter if you know a probability, and I disagree. If I have experienced design X ↔ BWA a BILLION times and I join a thought experiment where they have 100 boxes (of BWA and BWOA) and they tell me there is a 51% chance of getting a BWOA and the box presented to me has design X, then, on the first pull, I am going with it being a BWA.

    You are saying that having a 1% greater chance of getting a BWOA is a better bet (inductively) and, consequently, that this box is going to be the first you’ve experienced (out of a previous BILLION) that does not have design Y.

    In order for that to be the case, you would have to argue for a really unparsimonious general account of what is happening in the thought experiment. E.g., you would have to argue that perhaps the game makers are deceptive, that they have gotten a hold of a really rare manufactured set of BWOAs without the design normally associated with them and decided to use those rare ones with you (a normal person), that perhaps they broke the law, etc. These are just examples, but you get the point.

    Now, where I think you are right is when the difference in probability is not minuscule (e.g., 99% that it is a BWOA). Since that calculation is deduced from the sample of 100, that means there are 99 BWOAs. In that case, I think the probabilistic odds outweigh the experiential evidence of the correlation, and so it is more reasonable to go with BWOA.

    I can make the scenario even more specific to prove my point: imagine that, on top of what has already been said, in this scenario you also have strong inductive evidence that, although there can be a BWOA with design X, it costs an insane amount of money to manufacture one with any design other than Y. You, as a normal person, engaging in a basic thought experiment (of pulling a box from a 100-box sample), should not expect, given a minuscule 1% difference, that a BWOA has design X.

    My point is that the entire situation matters, and it isn’t as easy as saying “probability > possibility” when making informed inductive decisions. If that were the case, then we end up with really unparsimonious explanations of things.
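
    (If it helps to see why I keep saying the two have to be weighed against each other, here is a minimal sketch, in Python, of how a tiny base-rate edge and a massive observed correlation combine under a simple Bayesian treatment. The likelihood numbers are purely hypothetical stand-ins for the “billion observations”; nothing in the scenario supplies them. The only point is that the two pieces of evidence are commensurable, so they can, and must, be compared.)

    ```python
    # A minimal sketch (not from the scenario itself): how a near-perfect observed
    # correlation combines with a 51/49 base rate under Bayes' theorem. The
    # likelihoods below are hypothetical stand-ins for "a BILLION observations of
    # design X only ever appearing on boxes with air (BWAs)".

    def posterior_bwa(prior_bwa=0.49,
                      p_x_given_bwa=0.999999999,    # assumed: X almost always seen on BWAs
                      p_x_given_bwoa=0.000000001):  # assumed: X essentially never seen on BWOAs
        """P(BWA | the presented box shows design X), via Bayes' theorem."""
        prior_bwoa = 1.0 - prior_bwa
        numerator = p_x_given_bwa * prior_bwa
        denominator = numerator + p_x_given_bwoa * prior_bwoa
        return numerator / denominator

    print(f"P(BWA | design X) = {posterior_bwa():.9f}")
    # With these assumed likelihoods the posterior is essentially 1: the small
    # edge in the base rate is swamped by the correlation, which is exactly the
    # weighing I am describing above.
    ```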

    If X > Y, and no other considerations are made, its always more rational to choose X

    Correct. But this is a scenario where other considerations are made. So this is irrelevant. If all you knew was that there was a 51% chance of it being a BWOA and all else being equal, then, yeah, go with the probability.

    Patterns are a more detailed identity of a cogent argument than possibility alone,

    Correct. I am saying that the patterns in this case weigh into the inductive inference: it isn’t as easy as going with the probability for the sake of going with it.

    Bob
  • Knowledge and induction within your self-context


    I think what you are trying to say is that if one is using something as a consideration of what something is (i.e., its identity), then it should be an essential property: but that just simply doesn’t follow from your methodology. I can claim that I distinctly know that a BWA is just a (1) box and (2) has air in it.

    I can experience design X with BWAs my whole life and never refurbish their definition to include design X as an essential property: and that is how the scenario is set up. So you can’t side-load the designs into the 51% probability on the grounds that they should be essential properties, because in the scenario they are not. If that makes any sense.

    The probability was calculated with only the aforesaid two essential properties. The designs were not considered for it.

    Bob
  • Knowledge and induction within your self-context


    Hello Philosophim,

    Here is where you also have to clarify. Does the design of X or Y have anything to do with the probability?

    No, they do not. It is a 51% chance that it is a BWOA, and that is calculated solely from its essential properties, which are that it (1) is a box and (2) has or does not have air in it.

    For example, if the ration of X airs to Y airs was 3/4, then X and Y are essential properties to the probability. Both of these can co-exist.

    So on one hand we could say overall, there's a 51% chance of no airs vs airs, not considering X or Y. Then we can drill down further, make X and Y a part of our observations, and note that X has a 75% chance of being no air, while Y has a 25% chance of being air. These are two different probabilities, and we could even math them together for an overall probability if we wanted to.

    The scenario does not give you a probability of a box being design X or Y, and you cannot calculate it given the information in the scenario. So, although you are correct that that probability would be separate from the probability of pulling a BWOA or BWA, you can’t use that probability. Using it would fundamentally change the scenario.

    Once you start including an attribute in your probability, it is now essential to that probability. While you are considering X and Y, you're not considering the how heavy they are right? Anything you don't include in the probability is non-essential. Since you don't care about the weight of each box, it doesn't matter. Once you notice X and Y designs, and start actively noting, "Hey, X's so far have all been with air," then you've created a new probability, and X is essential to that probability.

    No. Again, you are confusing what is essential to calculating the probability, which in this case are the essential properties, with what is useful for inferring what a thing is (when you can’t know that it meets the essential properties).

    Although in this scenario weight is not something to consider (to keep things simple), how heavy the box is could factor into what you guess if you had experienced a strong correlation of two distinctly different weights with BWOAs and BWAs a BILLION times. The weight is still an accidental property, but the sheer correlation in the actual world (which can happen) makes it cogent to factor that into consideration, since you can’t know by looking at it whether it has air or not.

    If it is known information that the X or Y is irrelevant to the design, then you cannot make a probability based off of it when referring to the boxes in general

    There’s no probability afforded to you of whether it has design X or Y. So, correct. But that was never the claim I was making. The billion experiences of X → BWA and Y → BWOA are inductive evidence: they don’t give you a probability, and that is the whole point.

    If it is unknown whether the X or Y is relevant to the air inside of the box, then you could start to note a probability that is again, separate from the box disregarding the design.

    No, you couldn’t: the antecedent there doesn’t necessitate the consequent. If I am unsure whether X and Y are relevant to whether it is a BWA, I don’t thereby gain knowledge of the probability, nor do I gain inference-like knowledge that they are relevant. I think you may be confusing an inductive inference with a probability proper. Unless you know the numerator and denominator (and divide them), you cannot claim to know anything about what is probable or improbable.

    I think the part of confusion Bob is you keep making non-essential properties essential to an induction, but think because its non-essential in another induction, its non-essential in your new induction. That's simply not the case. Once you start including the X or Y as a consideration, it is now an essential consideration for your new induction. That's your contradiction.

    There’s no contradiction (that I can see): maybe explain where it is in more detail.

    Let me clarify something though: what is essential to the inductive inference is not the same thing as what is essential to the identity of a thing. I think you may be conflating those two here.

    Personally, I would say that it is useful and more rational, for the inductive inference in this scenario, to go with BWA (rather than saying it is essential: I am not sure what that would entail, as we are not talking about essential properties there).

    I can say the designs are not essential properties of the identity of a BWA and a BWOA while holding that the designs, given the inductive evidence and the merely marginal probability advantage given for pulling a BWOA, are relevant to inferring (guessing) what it is (even though they aren’t essential properties of it). Again, I will refer you to the example of the human drawings.

    There is absolutely no contradiction here.

    Non essential properties never weigh in or outweigh the probability of something occurring. If they do, they are now essential to that probability

    Correct. You keep focusing too much on the probability. The idea is that there is a probability which is calculated independently of the designs, but it is a minuscule difference. Now, couple that with the inductive knowledge that the design is always consistently (a BILLION times) associated non-essentially with the boxes, and that knowledge outweighs going off of the probability.

    A reason or a factor is a property of something. If you wish to interchange it, its fine. The point still stands.

    If by this you just mean that we use the essential properties to calculate the probability, then I agree. But reasons are not properties of the things.

    I am saying it is less rational to go with the 1% chance or 0.00000001% chance that it is a BWOA as opposed to a BWA in this specific scenario. — Bob Ross

    Only if you consider the X, Y design of the box. In which case, it is now an essential property of your induction, and you've made the separate probability as I noted earlier.

    It’s not though. I am saying that there is a strong correlation between this design and this type of box, and I know that this box doesn’t have to have this design--but it has had this design a BILLION times anyways. There’s a difference between something being essential to what a thing is, and it being accompanied by something else consistently.

    Bob
  • A Case for Analytic Idealism


    Hello Mww,

    Things-in-themselves concerns things. Minds are not things. Things-in-themselves do not include minds.

    Are you a substance dualist? It sounds like you are saying there are minds which are of a mental substance and there are things-in-themselves which are a part of a physical substance. Otherwise, I do not know what you mean here.

    I am not a mind; I am a conscious intelligence, a thinking subject

    But, traditionally, a mind is a conscious intelligence—a thinking subject which has qualia.

    Notice the conspicuous lack of mention for the thing-in-itself. My body is never absent from my representational faculties, insofar as they are contained in it, thus is always a thing and never a thing-in-itself.

    I agree that the body is not a thing-in-itself, but the mind (or something else) must be. Even if the mind is not a ‘thing’ in the sense of being of a physical substance, it is a ‘thing-in-itself’ of a mental substance. ‘Thing’ here is being used more vaguely as a purely negative conception (like Kant used it). It could be a mental ‘thing’ or a physical ‘thing’.

    I didn’t say mind was merely reasoning.

    Sorry, I must have misunderstood then. What is a mind to you then?

    It is not impossible what I consider as thinking really isn’t, but is in fact merely the material complexity of my brain manifesting as the seeming of thought. So, what…..you’re trying to say that because it is not impossible for thinking to be other than it seems, the door is thereby left open for my thinking to be a manifestation of something even outside my own brain? Perhaps that’s no more than the exchange of not impossible regarding brains, for vanishingly improbable for external universal entity.

    I am saying that you can’t prove, because you think we cannot know anything about the things-in-themselves (even probabilistically speaking), that (1) other people have conscious experience and (2) that your own thoughts are associated with an ‘I’ which is beyond the phenomena.

    You can’t appeal to probability nor plausibility for #1 or #2 because you are saying we cannot know the things-in-themselves, and those claims are about them: even if it is about fundamentally mental ‘things’.

    Time and space aren’t properties of objects per se, but you are, under transcendental idealism, producing them under space and time. — Bob Ross

    No. I am not producing objects. I am producing representations of them, and those under, or conditioned by, space and time.

    Correct, I misspoke: I was saying that space and time are produced by your mind, not that the objects themselves are. They are produced by your mind because they are the pure forms of intuition that your mind uses to represent objects.

    Saying that the objects only exist in your perception is just to say that there no corresponding object beyond those forms of space and time — Bob Ross

    Sure, but no one has sufficient justification for saying objects only exist in perception, which makes the rest irrelevant.

    You can’t appeal to the lack of justification for saying objects exist only in perception because, according to Kant and you, we cannot know anything about the things-in-themselves: we can only know the phenomena. If there’s no justification for saying there are objects (a justification you can’t provide if you can’t make claims about things-in-themselves), then we simply cannot know. If we cannot know, then you can’t say there is object permanence. My point is that you cannot refute (even probabilistically) the claim that your mind produces the objects without making an assumption about the things-in-themselves, which you aren’t supposed to be able to do.

    Semantics, huh? Why don’t we just agree that if you know a thing, you’ve experienced it.

    I said:

    It can agree with this, as a matter of semantics, if you are saying that possible knowledge is that which one experiences; but then this just pushes the question back: why can’t we say that possible knowledge goes beyond our experiences?

    If by “if you know a thing, you’ve experienced it”, you just mean that you’ve experienced something, then, sure, that is true; but it is an uninformative tautology.

    The question up for debate here is whether you have justification for claiming there are things-in-themselves that are being represented in that experience—not that having an experience is having an experience.

    Why wouldn’t that be true? The truth of that doesn’t affect the premise that if a thing is known it must have been an experience, and doesn’t affect possible experience.

    You can’t claim that possible knowledge goes beyond our experiences (quite frankly: your experiences) because that is non-phenomena, which are, by definition, things-in-themselves; and you cannot know anything about those.

    Of course. The categories are nothing but theoretical constructs. It is merely a logically consistent speculation that understanding relates pure conceptions to cognition of things. Pretty hard to experience a theory, right?

    But this is exactly what people who venture into understanding the things-in-themselves do! They speculate about them and come to the most reasonable conclusion in coherence with experience. So why think we cannot do that?

    Now, for me, this is exactly backwards. I mean…what comes first, the appearance of a thing, or the representation of it? Our understanding of the world is dictated by our representational faculties.

    The appearance, then the representation. We come to know that is the case from the other way around.

    We extrapolate that the representations we see are just that: representations, and that there are appearances prior to those representations. Our understanding of the world is dictated by our representational faculties, but that doesn’t mean we can’t give cogent accounts of what is beyond that; which includes the claim that we are beings that exist in a transcendent world with representative faculties.

    Ehhhhh…..we just have different ideas of what entails metaphysics.

    That’s fine! As long as we each understand what the other is saying.

    While it may be fine to say it is understood for something to be beyond the possibility of all experience, it remains the case that understanding is not authorized to say what that something is

    Then you can’t claim that you have a mind. You can’t claim that you have representative faculties. You can’t claim that your representative faculties use pure conceptions. These are all beyond the possibility of all experience.

    Understanding cannot inform what things are not conditioned by the categories,

    But, equally, you cannot know, by your previous claim quoted above this one, that there are categories: they are likewise beyond the possibility of all experience.

    Yours wants the content of a conception as metaphysical, which is an exposition of it; mine wants that there are conceptions, including their content, not thought spontaneously as in understanding in conjunction with a synthesis of relations, but given complete in themselves from a pure a priori source.

    Correct. Which means, under yours, you must be, by my lights, an epistemic solipsist. You must not know whether there is object permanence, etc., because all your representations are purely a priori and cannot be traced farther back than that. It just seems to me like an incredibly unparsimonious account of reality.

    Bob
  • Knowledge and induction within your self-context


    Hello Philosophim,

    "The odds of any box being without air are 51%, and the only thing that matters to the identify of the box, is that its a box,"

    To clarify, I am saying that the odds of any box being without air are 51%, and the only thing that matters to the identity of the box is that it (1) is a box and (2) has or does not have air in it.

    Something can be useful for identifying a thing without it having to do with the identity (essence) of that thing. For example, it is not a part of the identity of being human to draw art. You can be a human and never have drawn art, and you may be so disabled that you literally can’t do it. However, if I am walking in the park and see some extravagant art (perhaps graffiti), then I can reasonably inductively infer that it was a human that did it. Philosophim, I identified the origin of the art to be human, while never conceding that the identity (essence) of a human is that it draws art. Likewise, it is actually and logically possible that an alien drew the art, or that it formed naturally as a freak accident.

    Accidental properties of a thing, such as experiencing a strong correlation between humans and drawing and never experiencing anything else doing it, can rationally influence what we identify as being the case.

    then the non-essential properties of the box do not matter to the probability. If X and Y are non-essential, they don't matter to the probability then. I think that's a straight forward conclusion right?

    That is correct. But I think you are perhaps misunderstanding: the probability is given to you by a person who knows whether each box (out of the 100) has air in it. They are using strictly essential properties to calculate the probability because they can know whether the boxes meet those 2 essential properties. However, when they present it to you, you can’t know if the box meets your criteria because you don’t know if it has air in it: that’s the whole point!

    Now, the probability being unaffected by the unessential properties does not entail, in itself, that it isn’t rational, depending on the circumstances, to use them to infer what you think it is.

    Are you saying that the probability of 51% is only a guess?

    No. It is the actual probability of pulling a box without air.

    Or that we only think that the design of the box is irrelevant?

    Irrelevant to what? To the probability that was calculated? Yes. To your evaluation of what you think it is, no.

    The scenario is only granting implicitly the former, not the latter.

    In other words, is our 51% open to change, and do we not know if it depends on X or Y?

    The probability stays the same regardless of what design they have, because the guy in the back room knows whether they (1) are boxes and (2) have air in them. Those are the essential properties, so he uses those to calculate the probability. We, on the other hand, only know they are boxes and that they either do or do not have air; but we also have the rest of the context to consider (e.g., we’ve experienced them strongly correlated with those designs a billion times each).

    , if X and Y are unessential to the probability, then they are unessential to the probability. Any results from experience, if we know the probability is correct, would not change the probability. Therefore no matter if we simply pulled 99/1 airs to no airs, that doesn't change the probability. The outcome of the probability does not change the probability.

    I agree with you here, but I think you are focusing too much on the calculation of the probability and not enough on the fact that it is a minuscule difference in probability. Imagine there was a 50.00000001% chance that you will pull a BWOA. Now imagine that you’ve experienced in your lifetime (1) only BWAs having design X, (2) design X only being on BWAs, and (3) #1 and #2 a BILLION times. Imagine, likewise, the same thing for BWOA but with design Y. This extra info doesn’t change the fact that you are 0.00000001% more likely to pull a BWOA.

    Now, imagine that the box pulled has design X. Given that there is only a 0.00000001% greater chance of pulling a BWOA and the sheer, incredible correlation you’ve experienced inductively of BWA → design X, I think that you are warranted in claiming it is a BWA instead of going with the probability. It is more probable that it will be a BWOA, but only by 0.00000001%.

    Take away the probability for a second and just think of the inductive aspect I am talking about. Suppose you only ever saw design X on BWAs, and never anywhere else, a BILLION times throughout your life. All through society, everywhere you have gone and travelled, it’s always the same ol’ design X → BWA. You confirmed each time (a billion times) that the box did have air in it and that it had design X. The next time I show you a box with design X, forgetting about probability for a second, what would be the most cogent answer? Clearly that it is a BWA. This is no different than thinking that gravity will work the next time you drop something. Actually, in this case, since it is a billion times, you have stronger reasons to think that design X → BWA than that gravity will work the next time you drop something (as I doubt you’ve experienced things drop a billion times yet in your life).

    Now, the only extra information we add into the scenario is that there is actually a 0.00000001% greater chance, in the sample of 100, that this design X box is not a BWA (i.e., that it is a BWOA). By my lights, if you go off of the probability, then you are saying that you would rather hedge your bets on a 0.00000001% difference: that this design X box presented to you is the first box out of a billion and one that is going to break that life-long correlation you have experienced. To me, that 0.00000001% difference doesn’t outweigh the correlation.

    Now, if the person told me that after each guess the presented box is not returned to the sample and they tell me whether my guess was correct, then each time I guess I do have to consider that the probability is changing, and eventually that outweighs my experiential knowledge of the correlation. If there’s a 99% chance that it is a BWOA, then I am definitely going with that.
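
    (And just to make that last point concrete, here is a minimal sketch, again in Python, of how the odds shift once boxes are revealed and removed rather than returned. The particular run of reveals is a made-up assumption for illustration; the scenario itself doesn’t tell you what gets revealed.)

    ```python
    # A minimal sketch (hypothetical reveals): tracking how the 51/49 split moves
    # when guessed boxes are removed from the sample instead of being returned.

    def p_bwoa(bwoa_left: int, bwa_left: int) -> float:
        """Probability that the next randomly pulled box has no air."""
        total = bwoa_left + bwa_left
        return bwoa_left / total if total else 0.0

    bwoa, bwa = 51, 49
    print(f"start: P(BWOA) = {p_bwoa(bwoa, bwa):.2f}")

    # Suppose (purely hypothetically) the first 40 revealed boxes all had air:
    bwa -= 40
    print(f"after 40 revealed BWAs: P(BWOA) = {p_bwoa(bwoa, bwa):.2f}")
    # The pool is now 51 BWOA vs 9 BWA (85% BWOA), and at some point a lopsided
    # probability like this outweighs the experiential correlation, as I said.
    ```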

    I don't consider confirmation bias irrational by the way, I think that's a bit harsh.

    As far as I’ve understood confirmation bias, it is the tendency to seek out a result without sufficient justification for it. It’s like the placebo effect: if I think that aliens are ruling the world and I start actively seeking out reasons to believe it, then I will definitely find them. Not because it is true, but because I am intensely trying to fit the world to my narrative. This is irrational.

    Confirmation bias is not, as far as I understand, the same as inductive and abductive reasoning. You can rationally assume that gravity will work for the next thing you drop because you have sufficient evidence, which wasn’t just a result of you trying to fit the world to your wants, that that will be the case.

    Back to your point where I feel you changed the context a bit. You noted that it wasn't possible for you to have experienced a Box with Y that did not have air. I had assumed you had. That's true, you don't know if its possible for you to pull that box. Despite the odds, you never have. And yet you know its probable that you will, and its only incredible luck that you haven't so far.

    I was just clarifying that, under your terms, you couldn’t claim to know it is possible for a BWOA to have design X.

    Likewise, in the scenario, I am not saying that you know that there actually is at least one BWOA which has design X. You don’t have that information. You just know that it is logically and actually possible that a BWOA could have a design X: but “possible” here is being used in the standard philosophical sense and not your sense (because you hold that you have to experience it for it to be possible). So, in your terms, you cannot claim it possible despite it being logically and actually possible.

    If the odds for the air or not air do not depend on X or Y, then each X and Y has a respective 49/51 split as well. This is just a logical fact.

    No they don’t. The probability of one having design X or Y is completely unknown to you. The probability of picking a BWOA or BWA is irrelevant to the probability of it having a particular design. To know that you would have to know how many in the actual sample have design X and how many have design Y and divide that by 100: you simply do not know this in the scenario.

    If you flip a coin ten times and it comes up heads ten times, does the non-essential property of you being in your living room change the odds of the coin's outcome? Of course not

    That’s disanalogous: I am not saying that non-essential properties always weigh in on or outweigh the probability of something occurring. That’s why I picked this very specific example scenario.

    Also, you being in your living room wouldn’t be a non-essential property, because it isn’t a property of the probability. It is an unessential reason or factor: not a property.

    However, if you’ve experienced, a billion times, living rooms having a stronger gravitational pull than non-living rooms, then, yeah, I think that unessential factor becomes at least a relevant factor. I think this is what you mean by:

    "Every time I flip a coin in the living room, it changes the odds to where I always flip heads," then the living room is no longer a non-essential property to the coin flip, but has now become, in your head, an essential property of the coin flip.

    Same as if after you count all the X and Y boxes that have ever been made, and sure enough, it turns out that all X's are airs, while all Y's are not airs. The odds didn't change

    Sure. I already agree that the probability itself wouldn’t change.

    you could say that all boxes with X have air, while all boxes with Y's don't, and applicably know this. It just so happens that there are 49 billion X's, and 51 billion Y's.

    No, in the scenario you are drawing from a sample that is a sub-collection of the boxes in the real world. You don’t know that there are 49 billion X’s and 51 billion Y’s but, rather, only that in this sample of 100, there are 51 BWOAs and 49 BWAs. That’s it.

    Perhaps the issue you're really holding here is that you want to make decisions that are less rational sometimes.

    No, Philosophim, I am saying it is less rational to go with the 1% chance or 0.00000001% chance that it is a BWOA as opposed to a BWA in this specific scenario.

    Bob
  • Knowledge and induction within your self-context


    Hello Philosophim,

    1. Probability is 51% that the box does not have air.

    To be clear, this means that any box given has a 51% change that it does not have air in it. So regardless of box design, its a 51% chance that it does not have air.

    Correct. The box is, at random, picked from the group and presented to you. The probability is 51% that it does not have air.

    The only essential property for a box is that it is a six sided box.

    This definition is circular. But I get the point and, for all intents and purposes, let’s go with that for now.

    If it has air, its a box with air. If it doesn't, its a box without air. Anything else is non-essential.

    Correct.

    We'll call call a box with air a BWA, and a box without air a BWOA because I'm tired of typing those phrases. :)

    Lol. Sounds good! (;

    Any box you pick has a 49% chance of being a BWA, while it has a 51% chance of being a BWOA.

    Correct.

    Now lets include some non-essential properties. What they are is irrelevant. Lets call them properties X and Y.

    They are not irrelevant: they are irrelevant to the identity of the thing. That is not the same thing as them being irrelevant flat out.

    So I can have a BWA with a X, and a BWA with a Y.

    Yes it is logically possible (I am not using “possibility” in your sense here). On top of that, in your terms, it is possible that a BWA has design X and not proven to be possible it has design Y (because you have only experienced it with X). It is, likewise, possible that a BWOA has design Y and not proven to be possible that it has design X (ditto reasoning).

    Does this change the probability of the BWA being picked? No. Its still a 49% chance

    Correct.

    What about a BWOA with a X and a BWOA with a Y? No, still a 51% chance of being picked.

    Correct.

    This is because we know that X and Y are non-essential the the probability.

    For fear of you equivocating here, I am going to stress that all this means is that the probability is independent of whether they have design X or Y. The wording “non-essential” could be equivocated there as having to do with non-essential properties, which has nothing to do with this claim.

    Lets say that I pull any number of boxes. It turns out that I only pull BWAs with X's and WBOAs with Y's. I've never pulled a BWA with a Y or a BWOA with a X, but its still within the odds that I can.

    Is is possible that I could? Of course.

    It is not provably possible under your terms that a BWA could have a design of Y because you haven’t experienced it before. Just to clarify.

    But does that change the probability? No, non-essential properties don't affect the probability.

    Correct.

    Therefore it is still more rational to assume over the course of picking more boxes that I should always guess that I'll pull a BWOA, whether that's a X or a Y.

    No. You are forgetting that you have experienced this correlation a billion times each (and never the reverse). Yes, it is logically possible (note: I am not using “possible” here in your terms) that, even after experiencing X with BWA a billion times, the box has design X and is a BWOA, but you are more justified in inferring that it is a BWA since the probabilities are so close to each other.

    Let’s make it even more obvious what I am getting at: imagine that in the scenario you also know that, although you don’t know which design the box will definitively have (because it is a non-essential property), only design X and Y have ever been associated with either a BWA or BWOA. Now, to clarify, this does not make the designs essential properties: I am saying that these unessential designs have, by happenstance or purpose, been associated (correlated) with them in the past. Maybe there’s a law in place that you have to make BWA’s with X and BWOA’s with Y, but the actual definition of them both doesn’t include X and Y as essential properties (which is entirely possible).

    Now you have really good reasons to believe that when you see a box presented to you with design X, although designs aren’t essential properties, that it is a BWA. Is it logically and actually possible that someone broke the law (or what have you) and made a BWOA with X? Absolutely. But guessing BWOA on the X designed box when there is merely a 1% more chance it is such isn’t very cogent given these circumstances.

    If you believe that because every BWA you've pulled so far is a X, therefore its more reasonable that a box with a X is going to be a BWA, that's not rational, its just confirmation bias.

    Firstly, I am not saying that you have drawn a billion times from the sample of 100. I am saying that you have experienced each correlation a billion times, independently of drawing from the 100.

    If I were saying that just because I pulled a BWA last time that the next will be BWA, then I would agree that is irrational and confirmation bias: that’s not what I’ve been saying.

    Secondly, if you would like to call what I just clarified as irrational, then you would have to say all inductions and abductions are irrational because that is how they work. Take Hume’s problem of induction, which you mentioned in your OP: you would have to say it is equally irrational to hold that the future will resemble the past. But this is nonsense: it isn’t irrational to induce or abduce: it can be quite rational.

    Your biased results don't make something more or less cogent. It is always more rational to believe that the box will be a BWOA whether its an X or a Y.

    Wrong. If I know that the designs X and Y have always been associated with either box and that there is a colossal correlation between X → BWA and Y → BWOA and the probability of one is only 1% greater than the other of occurring, then I am rationally justified in thinking that an X will be accompanied by a BWA (although I could most certainly be wrong). So when they present one at random to me and I see it is an X and only 1% less likely that it is a BWA, I am justified in claiming it is a BWA.

    You are basically hedging your bets on a minuscule 1% difference and expecting, given the contextual background knowledge you would have, that this next one will be the only one out of a billion and out of every single one that you have seen that will break the correlation.

    I think you are right to assume that if we were to keep drawing, returning, amd re-shuffling the boxes that it would even out over time to 51% being BWOA—but we are talking about one selection here.
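
    (For what it’s worth, here is a minimal sketch, in Python, of that “evens out over time” point: repeated pulls with replacement do converge toward the 51% figure, but that long-run frequency is a different question from what to infer on one pull of a design-X box.)

    ```python
    # A minimal sketch: the long-run frequency of BWOA over repeated pulls with
    # replacement converges toward 0.51, per the law of large numbers.
    import random

    def observed_bwoa_frequency(pulls: int, p_bwoa: float = 0.51, seed: int = 0) -> float:
        rng = random.Random(seed)
        hits = sum(1 for _ in range(pulls) if rng.random() < p_bwoa)
        return hits / pulls

    for n in (10, 1_000, 100_000):
        print(f"{n:>7} pulls: observed BWOA frequency = {observed_bwoa_frequency(n):.3f}")
    # None of this bears on a single selection where you also see design X;
    # that is the case the correlation speaks to.
    ```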

    With that simplified, does that answer your question?

    No. I think the above explains why I think that.

    Bob
  • Knowledge and induction within your self-context


    Hello Philosophim,

    Why are the boxes accidental? Lets not just say they are. Lets prove they are.

    They are accidental because they have been defined as non-essential: distinctive knowledge is definitional. That’s all the proof that is required.

    For the personal knowledge within the scenario, I defined a box as “a container with a flat base and sides, typically square or rectangular and having a lid” and defined a “box-with-air” as a “box” + “air”. That is the proof that it is unessential what design the boxes have, because I have defined “box-with-air” and “box-without-air” such that everything beyond those factors is unessential.

    It is known that they randomly switch between box designs for air and not air, and it turns out the box design X and Y have exactly 50% change of having air or not air.

    Firstly, in the scenario I gave, there is no known probability about how many they design in X or Y fashion; so this isn’t analogous. Secondly, that can be your reason for defining a “box-with-air” and “box-without-air” as having designs that are always accidental, but an accidental property is just an unessential property, which does not necessitate that there is a 50/50 chance of it occurring.

    Now, lets say that I receive a billion boxes of X, and a billion boxes of Y. low and behold, it turns out all the X's have air, while all the Y's don't. Its an incredibly improbable scenario, but it can be independently verified that yes, its completely a 50/50 chance that either box has air or not.

    This is irrelevant, because you don’t know in my scenario that it is a 50/50 chance; and it is not, by definition, true that something which is an accidental property has a 50/50 chance of occurring.

    This contradicts your own definition of accidental properties:

    The properties which I find are important to me for my memory, the curly fur and hooves, are identities of the sheep I call essential properties. Properties I observe which are irrelevant to my identity of the sheep, I call accidental properties. Accidental properties allow me to remark on how the identity is affected beyond its number of essential properties.

    Your definition simply does not equate “accidental properties” with “something which is proven beyond definition as non-essential”. It clearly defines it as “that which isn’t an essential property of the thing in question”. Regardless, in my scenario, when I say that the designs are accidental, I do not mean that they have an equal chance of occurring, nor that there is a defect in my ability to identify (such as your color-blindness analogy): I mean, more broadly, that the designs are unessential properties.

    The designs are accidental, not an accidental property then. If you have no foreknowledge of whether box X or Y should or should not have air, then you have not yet decided whether X or Y design are essential or accidental to the identity.

    Firstly, by your own definition of it, any non-essential property is an accidental property (i.e., “Properties I observe which are irrelevant to my identity of the sheep”); it is irrelevant whether you have foreknowledge of all the potential properties of a thing. If you define a sheep as having the essential properties [X, Y, Z], then it is necessarily the case that a property which is not X, Y, or Z is unessential and thusly accidental. Now, you can refurbish accidental properties into essential ones given new knowledge; but that is different from a property being left “undecided” merely because you lack knowledge of it.

    Secondly, whether the first point is true is irrelevant for the scenario I gave: I said definitively that what I am distinctively calling a “box-with-air” and a “box-without-air” is constituted by those two aforesaid properties, and all the rest are unessential ones. So you can’t validly claim that my accidental properties are no longer accidental. It is a matter of definition, which is distinctive knowledge.

    Also, we have to clarify what we're referring to here. If we're referring to the core identity of the box itself as a particular type of measuring tool where air doesn't matter, X and Y are accidental. If we're referring to the probability of whether a X or Y box has air or not, then the box design is no longer accidental to our point!

    I think you are thinking beyond the scenario, when I am looking for you to address specifically the scenario given. I am saying that the core identity of a box and of “with-air” vs. “without-air” is constituted by those two aforesaid properties, and everything else is an accidental property. I can do that because it is distinctive (and not applicable) knowledge in the scenario, which is definitional.

    No. In the scenario, when you are determining the most cogent solution, the box design is not an essential property of anything. I am specifically saying that the design is irrelevant to the definition thereof: I am not saying that the design of the airless box has some necessary component to it that enables it to vacuum out the air.

    Likewise, you aren’t calculating the probability of there being air in the box: you can’t. You will never be able to calculate the numerator and denominator for that question: the only probability you know in the scenario is that there is a 51% chance that the box does not have air.

    Taken another way, a type of dog can be green or blue. Whether its blue or green is irrelevant to knowing the identification of the dog. However, you later discover that 74% of these dogs are green, while 25% are blue, and 1% could be any other color. When you are asking, "Is this dog that I cannot see behind a screen green or blue," at that point the probability of the color becomes an essential set or properties in knowing the outcome

    It is not an essential property of what a “dog” is (which I think you agree with), and it is not an essential property of anything; it is nevertheless essential to answering the question. An essential property is, first and foremost, a property of a concept (i.e., distinctive knowledge of a thing): you have no concept here to attach the color to. This is implicit in your example: “The properties which I find are important to me for my memory, the curly fur and hooves, are identities of the sheep I call essential properties”. I think you are confusing something being essential for answering purposes with an essential property.

    To sum up an accidental property - A property which is completely irrelevant to one's assertation or denial of the identity.

    This contradicts your definition in your OP:

    Accidental properties allow me to remark on how the identity is affected beyond its number of essential properties.

    These definitions are incompatible with each other. If an accidental property is actually something which is completely irrelevant to the assertion of the identity of a thing, as opposed to merely not being within its set of essential properties, then not all non-essential properties are accidental (i.e., not all non-essential properties meet your first quoted definition just because they meet the second). I think you are assuming that, in virtue of a property being non-essential, it doesn’t matter for identifying the said thing; but that is a separate claim from the claim that it is non-essential (and it is currently in dispute).

    I am saying that, although it doesn't matter for meeting the definition of a thing, the accidental properties play a role in identifying it pragmatically (and am thusly questioning your separate claim that non-essential properties are irrelevant for identification purposes).

    To see if you understand, take your example again and try breaking it down into clear and provable accidental or primary properties for the context.

    I already did this in my post outlining the scenario:

    2. You hold that the only essential properties of a box-without-air is that it is a box (i.e., a container with a flat base and sides, typically square or rectangular and having a lid) and it is not filled with air in its empty space (within it).
    3. You hold that the only essential properties of a box-with-air is that it is a box (i.e., ditto) and it is filled with air in its empty space (within it).

    I am shaping an identity distinctly out of discrete experience; there’s no further proof needed. If I say, for this example, that “green” is “the number one”, then the set of essential properties for “green” is [“1”]. No further proof is required.

    Second, clearly demonstrate what is a possibility, probability, and plausibility.

    A possibility is something which has been experienced before at least once. In the scenario, the billion experiences of each are the experiential context for it being possible that the box presented is filled with air or not.

    A probability is a quantitative likelihood: a numerator divided by a denominator, where the latter is the whole sample size and the former is the number of items within that sample whose likelihood of occurring one wants to know. In the scenario, the only probability given is that there is a 51% chance that the box is a “box-without-air”, plus whatever is deducible therefrom (e.g., there’s a 49% chance that it is a box-with-air).

    Only after that careful dismantling, try to prove that you can make a plausibility more cogent than a possibility.

    I am saying that:

    Since the probabilistic edge in favor of the box-without-air is negligible (only a 1% difference), the experiential association of the box-with-air with design X so many times (viz., a billion), although the design is not a part of its essential properties, warrants claiming that the first random box pulled from this sample, being of design X, is a box-with-air.

    In this scenario, the incredibly strong correlation between design (X or Y) and the box type (air or airless) outweighs merely going off of the probability. This doesn’t mean that a strong correlation between design and box type always outweighs probability.
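
    To make the arithmetic behind that claim concrete, here is a minimal sketch in Python. It is only an illustration of my own: the scenario itself gives just the 51/49 split and the billion observed pairings, so the specific likelihood figures below (how rarely design X has ever accompanied a box-without-air) are hypothetical numbers I am supplying, and the Bayesian framing is one possible way, not the only way, to model how the correlation can outweigh the bare probability.

```python
# Hypothetical Bayesian sketch of the box scenario.
# The priors come from the scenario (51% box-without-air, 49% box-with-air);
# the likelihoods are illustrative stand-ins for the billion observed
# X <-> box-with-air and Y <-> box-without-air pairings.

p_bwa = 0.49   # prior probability of a box-with-air
p_bwoa = 0.51  # prior probability of a box-without-air

# Assumed chance of seeing design X on each kind of box, given that X has
# only ever been observed on boxes-with-air (one-in-a-million exception rate).
p_x_given_bwa = 0.999999
p_x_given_bwoa = 0.000001

# Bayes' theorem: P(BWA | X) = P(X | BWA) * P(BWA) / P(X)
p_x = p_x_given_bwa * p_bwa + p_x_given_bwoa * p_bwoa
p_bwa_given_x = (p_x_given_bwa * p_bwa) / p_x

print(f"P(box-with-air | design X) = {p_bwa_given_x:.6f}")  # roughly 0.999999
```

    On those stipulated numbers, the slight prior edge the box-without-air enjoys is swamped by the correlation, which is all I mean by saying the possibility is more cogent here than going off the bare probability alone.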

    Bob
  • Knowledge and induction within your self-context


    Hello Philosophim,

    If these are truly accidental properties, then they are not in consideration

    Why would resemblance and inductive association between the accidental properties and the essential thing not be a consideration?

    I am saying that, in this hypothetical consideration, the designs are accidental: it isn’t a question of whether people are implicitly claiming them as essential properties (in this scenario).

    The definition of an accidental property is just that it is non-essential: that doesn't mean that it is irrelevant to the context of the situation.

    Since the probabilistic edge in favor of the box-without-air is negligible (only a 1% difference), the experiential association of the box-with-air with design X so many times (viz., a billion), although the design is not a part of its essential properties, warrants claiming that the first random box pulled from this sample, being of design X, is a box-with-air.

    As a reminder of an accidental property, these are properties that are variable to the essential. So a "tree without branches" would have no bearing on its identity as a tree. So we can eliminate the variables X and Y from our consideration.

    Thank you for the clarification, but I was under that understanding as well. My point is that the accidental properties are not removed absolutely from the consideration of what is most cogent to hold. This scenario is a great example to me.

    As it is irrelevant whether the design matches X or Y, if I am given a box and I know that probability is 51/49%, then the more reasonable guess is to guess that the box I am given is the 51% chance that it does not have air.

    This is true if you are removing a large portion of the context of the scenario I gave, which, arguably, isn’t the scenario anymore. Are you claiming that the scenario in which you are simply given the knowledge that there is a 49/51% chance is equivalent, for epistemic purposes, to the scenario I gave? I find it hard to believe that you would disregard all of the rest of that context.

    Implicitly, what most people would think in this context is, "Box X is designed to have air, Box Y is designed not to have air." These would become essential properties for most people in their context of encountering billions of each kind and having the same outcome in regards to air.

    To clarify, this is irrelevant. The scenario outlines explicitly that they are accidental properties.

    If its truly accidental, then the person would not even consider Box X or Box Y as being associated with having air, because it doesn't matter.

    In the scenario, as I hold that the possibility is more cogent than the probability, I can say that I do not hold that the design X of the box has anything to do with its essential properties, and yet it factors into what is most cogent to bet on. Resemblance and inductive association matter to me.

    The examples so far are doing nothing to counter the underlying claims about essential and non-essential properties, they're really examples in which you need to correctly identify if a property is essential or non-essential based on the person's context. Once that identity is complete, everything falls into place.

    But the whole point is that I outlined very clearly what the essential and accidental properties were. Pointing out that most people wouldn’t assign them that way is irrelevant to the thought experiment.

    You don't have to have an example at all to question my conclusions Bob, its like an equation.

    The point here is that I think the equation is incorrect because, in this scenario, it is not more cogent to claim that the box is a box-without-air (due to there being a 51% probability) once the rest of the context is expounded. Without that context, my claim would be different. You basically just countered a straw man of the scenario: one in which a box is handed to you and you are only given the knowledge that there is a 51% chance it is a box-without-air and a 49% chance it is a box-with-air (and there are no other options). But that wasn’t the scenario, unless you would like to claim that the two are equivalent for the purposes of epistemic evaluation?

    Bob
  • Knowledge and induction within your self-context


    I would like to add a 7th aspect to remove any ambiguity:

    7. Designs X and Y look absolutely nothing alike.
  • A Case for Analytic Idealism


    Hello Mww,

    I don’t think there’s sufficient warrant to claim there are other minds in any case, but it is nonetheless reasonable to suppose there are.

    Why would it be reasonable if you cannot know anything about the things-in-themselves, which would include other minds? Wouldn’t it be most reasonable to be an epistemic solipsist?

    I recognize the ubiquity of the conventional use of the word, but I personally don’t hold with minds as something a human being has. I consider it justified to substitute reason for mind anywhere in a dialectic without detriment to it, given the fact it is impossible to deny, all else being equal, that every human is a thinking subject. On the other hand, I am perfectly aware I am a thinking subject, which authorizes me to claim reason for myself, and that beyond all doubt.

    But there are things about you as a mind that you cannot prove of others without venturing into metaphysical claims about the things-in-themselves. Yes, we all reason, but that’s really not what a mind is in the context of solipsism. It just seems like an (inadvertent) evasion of the real issue I am trying to address here to say that ‘mind’ is merely ‘reasoning’.

    Likewise, even if it is the case that we all reason, you can’t prove that ‘we’ are the ‘ones reasoning’. Do you agree with me on that?

    The absurdity resides in the notion that if non-perception implies non-existence, then my perception is necessary existential causality itself. But it is absolutely impossible for me to cause the existence of whatever I wish to perceive, as well as to not perceive that of which I have no wish whatsoever, which makes explicit the only existences I could possibly be the causality of, is that which was already caused otherwise, which is all my perceptions could ever tell me anyway.

    Yes, but if you can’t know anything about the things-in-themselves, then you can’t know that it is absurd for your mind to be producing it all.

    Then there is time. If I am the cause of an object’s existence merely from my perception of it, then the time of my perception is identical to the time of the object’s existence, which is the same as my having attributed to that object the property of time. But time, as well as space, can never be assigned as a property, therefore the time or space of the object’s existence cannot be an attribution of mine

    Time and space aren’t properties of objects per se, but, under transcendental idealism, you are producing the objects under space and time. Saying that the objects only exist in your perception is just to say that there is no corresponding object beyond those forms of space and time: it isn’t to say that the objects themselves can be attributed the property of time in the same manner as the property of being red.

    In order to know a thing in the strictest sense, it must manifest as an experience. What is impossible (in terms of knowledge) about that, is that minds of any form are never going to manifest as an experience.

    I can agree with this, as a matter of semantics, if you are saying that possible knowledge is that which one experiences; but then this just pushes the question back: why can’t we say that possible knowledge goes beyond our experiences?

    Also, as a side note, wouldn’t it be impossible to know that, for example, your mind uses pure conceptions of the understanding to produce the world if we are defining possible knowledge as only that which we experience? Because we definitely don’t experience that.

    how would such knowledge be possible? How is it that you think that which the judgement represents, can be known?

    Because we can tell that our perception of the world is dictated by our representative faculties. For example, there are color-blind people: this is due to their minds representing the world with impaired functionality.

    That we cannot know the thing-in-itself has nothing to do with metaphysics. Metaphysics proper concerns itself with solutions to the problems pure reason brings upon itself, of which the thing-in-itself is not one.

    It most certainly is. Metaphysics is about understanding that which is beyond all possibility of experience, and that includes transcendental philosophy.

    Things-in-themselves are beyond the possibility of all experience.

    Good vs bad logic in conjunction with experience or possible experience, for whatever metaphysics, has better service.

    Metaphysics predicated solely on logic is bad metaphysics: that only gets one to a logically consistent view. Parsimony, coherence, empirical adequacy, and intuitions are just some examples of pertinent non-logical factors.

    Ahhhh…that’s it? Transcendental idealism shifted the entire idealistic paradigm, so I figured that which attempts to shift it again, would shift from that.

    Analytic Idealism, I would say, is pure ontological idealism; whereas transcendental idealism is really only epistemic idealism: it isn’t idealism in the ontological sense. So I wouldn’t say analytic idealism has shifted the paradigm again; this is an old view found in Schopenhauer, Plato, etc.

    There is a short missive in CPR which sets the ground for its doctrine, which says metaphysics is predicated necessarily on the possibility of synthetic a priori cognitions, then goes about proving there are such things which validates the ground initially set as a premise. That to which synthetic cognitions are juxtaposed, are analytic, so….I just figured the new style of idealism wanted to be grounded in pure analytic cognitions, which are mere tautologies necessarily true in themselves, which, of course, a universal mind would have to be, re: self-evident

    I am not sure I completely followed this, but the idea would be to say simply that the universal mind is the best explanation for the world that is given to us. Also, I don’t think it uses Kant’s synthetic vs. analytic distinction; or if it does then it denies that the reality which we can know is purely synthetic of our minds.

    Bob