Comments

  • A Methodology of Knowledge
    Hello @Philosophim,

    I am glad you dove into "applicable" vs "distinctive" knowledge, because I think I was fundamentally misunderstanding your epistemic claims. I was never under the impression that anything was related to a "will" in your epistemology, though I understand the general relation to the principle of noncontradiction.

    I think we have finally come to a point where our fundamental differences (which we previously disregarded) are no longer so trivial. Therefore, as you also stated, it is probably time to dive into "reason", which inevitably brings us back to the general distinction between our fundamentals. Previously, I understood the distinction between our fundamentals like so (as an over-simplification):

    Yours: object <- discrete experiences -> subject
    Mine: object <- discrete experiences <- subject

    However, "subject" was, and still is, a term with vast interpretations, therefore it is more accurate, as of now, to demonstrate mine as:

    object <- discrete experiences <- reason -> subject

    However, now you seem to be invoking "will", which adds some extra consideration on my end to my interpretation of your fundamental (and I am invoking "reason" which probably is confusing you as well). When you say:

    All I noted in the beginning was that there was a will, and that reality sometimes went along with that will, and sometimes contradicted that will.

    I didn't understand this from your essays (unless, and this is completely plausible, I am forgetting): the fundamental was "discrete experience", which was postulated on the principle of noncontradiction. A "will", in my head, has a motive, which is not implied at all (to me) by "discrete experience". I think we are actually starting to converge (ever so slowly), as I would claim that there are "wills" (plural) in relation to reason. I think I would need a bit more explication of your idea of "will" to properly address it.

    The only reason we have a definition of reality, is that there are some things that go against our will.

    Reality is the totality of existence that is in accordance with our will, and contrary to our will.

    I think you aren't using "reality" synonymously throughout your post. The first statement seems to contradict the second. You first claim that we can only define "reality" as that which goes against our "will", yet then, in the second, claim that "reality" is both what goes against and what aligns with our "will"--I don't see how these are reconcilable statements. Your first statement here is only correct if we are talking about the distinction between "object" and "subject", generally speaking, not "reality" in its entirety. The entirety of "reality" could be aligned with all of my "will" and still be defined as "reality". I sort of get the notion that you may be using "will" synonymously with the "principle of noncontradiction"--I don't think they are the same.

    Because there are things we can do in our own mind that go against our will. Let's say I imagine the word elephant, and say, "I'm not going to think of the word elephant." Despite what I want, it ends up happening that I can't stop thinking of the word.

    I was misunderstanding you: distinctive knowledge is what you are claiming is given because it is simply discrete experience, whereas applicable could be within the mind or the external world.

    First and foremost, I need to define "reason" for you, because it probably is something vague as it currently stands. Reason is "the process of concluding". This is not synonymous with "rationality", which is a subjective and inter-subjective term pertaining to what one or multiple subjects determine to be the most logical positions to hold (or what they deem the most logical process to follow in terms of derivation): "rationality" is dependent on "reason" as its fundamental. "Reason" is simply that ever-continuing process of conclusions, which is the bedrock of all derivation. 1 + 1 = 3 (without refurbishing the underlying meaning) is an exposition of "reason", albeit not determined to be "rational". If, in that moment, the subject legitimately concluded 1 + 1 = 3, then "reason" was thereby invoked. As a matter of fact, "reason" is invoked in everything, and a careful recursive examination of reason by reason can expose the general necessary forms of that reason: it abides by certain inevitable rules. To be brief: the principle of noncontradiction, space, time, differentiation, and causality (and debatably the principle of sufficient reason). The first and foremost is the principle of noncontradiction, which is utilized to even begin the discovery of the others. To claim that I discretely experience, I concluded by means of pon. This was "reason" and, depending on how "rationality" is thereafter inter-subjectively defined, may have been "rational". There's definitely more to be said, but I'll leave it there for now.

    Distinctive knowledge comes about by the realization that what we discretely experience, the act itself, is known.

    I think this is false. The act itself is not just known (as in given); it is determined by means of recursive analysis of reason. You and I determined that we discretely experience. And, if I may be so bold, the act of discretely experiencing does not precede reason: it becomes a logical necessity of reason (i.e. reason determines it must be discretely experiencing multiplicity to even determine in the first place--but this is all dependent on reason). When I say logical, I am not referring to "rationally" determined logical systems, merely, in this case, the principle of noncontradiction (I cannot hold without contradiction that the aforementioned is false).

    Basically, when your distinctive knowledge creates a statement that the act of the discrete experience alone cannot confirm, you need to apply it. I can discretely experience an abstract set of rules and logical conclusions. But if I apply those abstract rules to something which cannot be confirmed by my current discrete experience, I have to apply it.

    I think, as I now understand your epistemology, I simply reject "distinctive knowledge" in a literal sense (everything is always applied), but am perfectly fine with it as a meaningful distinction for the reader's better understanding (or as a subset of applied knowledge). Anything we ever do is a conclusion, to some degree or another, which utilizes reason, and any conclusion pertaining to reason or discrete experience is application.

    So, if I construct a system of logic, then claim, "X functions like this," to know this to be true, I must deduce it and not be contradicted by reality.

    The only reason this is true is because you have realized that it would be a contradiction to hold that the contents of a mind's thoughts can suffice for what the mind deems objects. This is all from reason and, depending on what is considered rationality, rational.

    Once it is formed distinctively, it must be applied, because I cannot deduce my conclusion about the world from the act of discretely experiencing alone. I can discretely experience a pink elephant, but if I claim the elephant's backside is purple, until I discretely experience the elephant's backside, I cannot claim to applicably know its backside is purple. This is all in the mind, which is why I do not state applicable knowledge is "the external world".

    I think I understand, and agree, with what you are saying--with the consideration that they are both applied. We can define a meaningful distinction between "distinctive" (that which is discrete experience) and "applicable" (that which isn't), but only if we were able to reason our way into the definitions. No matter how swift, I conclude that I just imagined an elephant--I am not synonymous with the discrete experience of an elephant (I am the reason).

    When you say we know our discrete experiences by reason, I've already stated why we know them.

    We know discrete experience by reason: the principle of noncontradiction--therefrom space & time, then differentiation, then causality.

    We know we discretely experience because it is a deduction that is not contradicted by reality.

    You're using reason here. You applied this to then claim we have distinctive knowledge that is not applied, but there was never anything that wasn't applied. In other words, you, by application, determined some concepts to be unapplied: given. That which you determined was given was not given to you; it was obtained by you via application. Nothing is given to you without reason.

    However, I've noted that "reason" is an option. It is not a necessary condition of being human.

    For me, reason is a necessary condition of being human. Not "rationality", but reason.

    There is nothing that requires a person to have the contexts of deduction, induction, and pon

    We can most definitely get into this further, but for now I will just state that pon is the fundamental of everything: everyone uses it necessarily.

    You are a very rational person, likely educated and around like people. It may be difficult to conceive of people who do not utilize this context. I have to deal with individuals on a weekly basis who are not "rational" in the sense that I've defined.

    Thank you! I appreciate that, and I can most definitely tell you are highly rational and well educated as well! To be clear, I am not disagreeing with you that people are not all rational: I am also around many people who shock me with how irrational they are. I am making a distinction between "reason" and "rationality" to get more at what is fundamental for everything else (reason) and what is built off of that as the best course of action (rationality). One is learned (the latter); the other is innate (reason). It may be confusing because "reasonable" and "rational" are typically utilized colloquially as the same, but I am not using them that way.

    So I have defined the utilization of reason as having a distinctive and applicable context of deduction, induction, and lets go one further, logic. I have also claimed that there are people who do not hold this context, and in my life, this is applicably known to be true. But, that does not mean that is what you intend by reason. Could you give your own definition and outlook? Until we both agree on the definition, I feel we'll run into semantical issues.

    I agree, I think there is much to discuss. I think that, in terms of logic as derived from rationality (such as classical logic)(which may require the subject to learn it), you are absolutely right. But in terms of logic in the sense of pon, I think everyone necessarily has it. Now I know it's obvious that people hold contradictions (in colloquial speech), but that isn't what pon is at a more fundamental level (I would say).

    What is addition in application, versus abstraction?

    I find nothing wrong with your potato analogy anymore; I think I understand what you are saying. The application is the abstraction, which, in your terms, is not "distinctive" knowledge--so we agree on that (I think).

    We distinctively know math.

    I think we applicably know math. Reason derives what is mathematical and what doesn't abide by it. Solving x = y + 1 for y is application, not distinction. Even the understanding that there's one distinct thing and another one is application (of pon). What exactly is purely distinctive about this? Of course, we can applicably know that there's discrete experience and that we could label discrete experience as "distinctive knowledge", but all that is application. There's never a point at which we rest and just simply know something without application. Is there?

    In terms of space, I am not completely against the idea of labeling the holistic space as distinctive, but that was also applied. To know that space is apodictically true is application of reason inwardly on itself in an analysis of its own forms of manifestation. I could rightfully distinguish apodictically true forms of reason as "distinctive knowledge" and that which is derived from them as "applicable knowledge", which I think (from my perspective) is what you are essentially doing. But my point is that they are all applied: when do I ever not apply anything?

    In regards to when is something cogent enough to take action, that is a different question from the base epistemology. I supply what is more rational, and that is it.

    My question essentially pertained to when something is considered a "historical fact", considering most historical facts are speculations, and we are simply determining which induction is most cogent. I think you answer it here: it seems you think that it isn't a base concern of the epistemology. I think this is a major concern people will have with it. Everyone is so used to our current scientific, historical, etc. institutions, with their thresholds for when something is validated, that I envision this eroding pretty much society's fundamental picture of how knowledge works. It isn't an issue that it erodes the hitherto fundamentals of "knowledge", but not addressing it is. You don't have to address it now if you don't want to, but feel free to if you want.

    Explicitly, what you are stating is, "I believe Jones could have 5 coins in his pocket." But what is the reasoning of "could have" based on? A probability, possibility, speculation, or irrational induction?

    The point is that it isn't based off of any of them. And it isn't simply using a different epistemology; it is that your epistemology completely lacks the category. The way I see it, "could have" was colloquially "possibility". Now "possibility" is about having experienced it before, which is only half of what possibility used to mean. The other "could have" was not that the person had seen it before; it was that it had potential to occur because they couldn't outright contradict it. This is still a meaningful thing to say in speech: the only affirmation being the affirmation that one cannot contradict the idea outright. However, I think I may be understanding what you are saying now: potentiality isn't really inducing an affirmation. It is more like "I cannot contradict the idea, therefore it may be possible". Maybe it is the possibility of possibility? But that wouldn't really make any sense (in your terminology). For example, mathematics. I could abstractly determine that I could fit that particular 5-foot brick into that particular 100 x 100 foot room, but, as you noted, until I attempt it I won't know. What I am trying to get at is this: if I haven't experienced it before, then it is not possible. If I have no denominator, then it isn't probable. If I can't contradict it, then it is not irrational. I guess it could be called a speculation, but I am not saying that I can fit the brick in the room, just that I can't contradict the idea that it could. In other words, I am thinking of "speculation" as "that brick will fit into that room" (given it is possible, probable, or irrational), but what about "I can't contradict the idea that that brick will fit into that room"? Are they the same? Both speculations?

    "There's a difference between claiming there is colloquially a possibility that something can occur and that you actually believe that it occurred." -- Bob

    Just to ensure the point is clear, both situations exist in the epistemology.

    I'm not sure if they both do. You do have "something can occur" in the sense of experienced before, but is "something can occur due to no contradictions" simply a speculation without affirmation?

    If something did not have potential, this translates to, "Distinctive knowledge that cannot be attempted to be applied to reality." This seems to me to be an inapplicable speculation. Which means that any induction that could attempt to be applied would be considered a "potential", even irrational inductions.

    As I have proposed it, inapplicable speculations do not exist: they have been transformed into irrational inductions. Speculations entail that it is applicable. Therefore, this is not an appropriate antonym to potentiality. The antonym is "that which is contradicted".

    Exactly. So Jones is claiming, "I have an induction but I'm not going to use the hierarchy to break down what type of induction I'm using".

    Leaving the individual voiceless in a perfectly valid context is not purposely not using the epistemology: it is the absence of a meaningful distinction that is causing the issue. There is a meaningful distinction, as you noted, between asserting affirmation, and simply asserting that it isn't contradicted. Or is that simply not within the bounds of your epistemology? Or is it also a speculation? I am having a hard time accurately defining it within your terminology.

    I look forward to hearing from you,
    Bob
  • The Bible: A story to avoid
    Hello @Edward235,

    I am seeing reply posts that are generally inspired towards what I wanted to say, but I would like to provide further explication. Firstly, I am not a Christian myself. No offense meant, but I think that your post is over-generalizing Christianity as a whole into an oddly specific classification (flavor, if you will) called biblical literalism. However, even biblical literalists wouldn't subscribe to most of what you said (I would say). Here are my thoughts:

    The Bible presented among Christian believers, is a collection of stories written by supposed divine inspiration...yet Christians sit here and preach that we must do what the Bible tells us word for word

    Although I may just be misunderstanding you, "divine inspiration" is not equivalent to inerrancy ("word for word"). Some Christians claim that the Bible is divinely inspired with the careful consideration that, due to it being produced by faulty humans, it is not inerrant. Others claim both. To be quite frank, most Christians do not, even if they claim inerrancy, believe that one should obey the Old Testament literally: they typically believe either that it is merely allegorical/metaphorical or/and that the New Testament is a "New Covenant". Biblical literalists typically view it in the latter sense, which means they will not agree with you that, although they do think it literally happened, everyone should slaughter an animal for God to forgive them of their sins. This is of the "Old Covenant", not the "New Covenant" that succeeds Jesus' sacrifice. I've never met a single Christian biblical literalist who genuinely believes that everyone should follow the rules decreed in the Old Testament without the consideration of the New Covenant.

    The stories within the Bible show us scenes of gore, rape, slavery, and so many more violent acts

    As others have pointed out, it can be interpreted metaphorically, allegorically, or as a parable. Sometimes stories are not meant to be analyzed literally. Although I understand your quarrel with the Old Testament (in a literal sense) and share your frustration, as I do concede it has many abhorrent depictions of actions, most biblical literalists hold that, essentially, when God does it, it cannot be morally wrong: He is the standard of good. They typically subscribe to a very absolute objective morality. Now, I'm not advocating that they are right; I am merely attempting to provide you with a bit more exposition into Christianity (specifically biblical literalism). From their perspective, if God outright strikes you down where you stand, He can do so because He created you and you will ascend into heaven for eternity. Imagine that you genuinely believed that if God zapped you dead where you stand right now, you would be freed from this life of suffering and ascend into a paradise forever. That is, from my conversations with many literalists, what they generally claim in a nutshell.

    Imagine if the Bible wasn't written as a prophetic work, but instead a warning from the divine.

    There's also the Gnostics, who believed that the Old Testament God was actually Satan and the New Testament God was the true God who sent his son to fix the damage. My point is not that any of these interpretations are necessarily right, just that there are indeed many interpretations. Personally, I only find value in what is metaphorical or allegorical in the Bible (simply the literary aspect), so I am hesitant to agree with you that the Bible is holistically a warning message from an actual "God" and none of it is prophetic. I guess it depends on what you mean by "prophetic" as well: I don't think there's any truth to any alleged prophecies in the Bible, but prophets are a literary archetype, common amongst the best-known works of Western literature (which I find nothing wrong with at all).

    A warning like Noah. Noah was created to warn the people of the flood, and no one listened. Then, after Noah built the ark, God flooded the world. It doesn't seem like a prophecy, but a warning instead

    The main objective of the Great Flood was for God, in a literal sense, to press the reset button. He was so horrified by the evil that humanity had produced that he decided to wipe it out (quite frankly, and this is my bias coming out, mass genocide). As far as my knowledge goes, it wasn't a prophecy or really a warning at all (sure, there's a bit of dialogue about people laughing at Noah building the ark, but God wasn't really interested, as the story goes, in getting Noah to convince everyone to get on the ark: they weren't welcome): God was simply sparing Noah's life (8-9 people, if I remember correctly). Holistically, it may have been a warning to the reader to hope humanity doesn't get evil enough where God decides to hit that reset button again, but, besides that, I'm not sure I follow you here.

    Every person mentioned in the Bible died, yet God promised they would live forever if they relied on Him.

    If I may be so bold (and no offense meant): this is utterly incorrect. God never once in the Bible promised that anyone would live forever on earth, although the Bible does claim two men never died as a result of their unwavering faith in God: for example, Elijah ascended into heaven in a chariot of fire without ever dying (according to the Bible, of course). God promises in the Bible "everlasting life", which has no correlation to how long one lives on earth: it is an eternity in heaven.

    Maybe, just maybe, the Bible tells us what the men and women of that time were doing was wrong.

    I think I would need you to go a bit more in depth here to properly respond, as I would say most Christians would agree with you to a certain extent. Many verses in the Bible pertain to exactly what people were doing that was considered "evil", and proclamations to not do it. But the Old Testament is not so clear about what I am presuming you are talking about (such as slavery).

    They turned God into an idol.

    I think the terminology is incorrect here: an "idol" is defined as that which deviates one from God. In other words, "idol" only has a meaning relative to what one thinks they should be giving to God. That is why money is considered an idol: it can possess a man into deviating from worshiping God. Therefore, it makes no sense to say God is an idol: that is the same thing as saying that one is deviating from worshiping God to worship God instead.

    Maybe, Christians are misunderstanding the text

    I am honestly not seeing how anything you said supports this claim. What Christians? Which sector? Which flavor? All of them? When you say "They coveted their neighbor's houses", most Christians agree that coveting is a sin.

    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,

    I think we are still misunderstanding each other a tad bit, so let's see if I can resolve some of it by focusing on directly responding to your post.

    They are both obtained in the same way. Knowledge in both cases boils down to "Deductions that are not contradicted by reality." Distinctive knowledge is just an incredibly quick test, because we can instantly know that we discretely experience, so what we discretely experience is known. Applicable knowledge is distinctive knowledge that claims knowledge of something that is apart from immediate discrete experience. Perhaps the word choice of "Application" is poor or confusing, because we are applying to reality in either case. Your discrete experience is just as much a reality as its attempts to claim something beyond them.

    This is why I think it may be, at least in part, a semantical difference: when you refer to "application", you seem to be admitting that it is specifically "application to the external world" (and, subsequently, not the totality of reality). In that case, we are in agreement here, except that I would advocate for more specific terminology (it is confusing to directly imply one is "application" in its entirety, which implies that the other is not, yet claim they are both applications).

    The other issue I would have is the ambiguity of such a binary distinction. When you say "Applicable knowledge is distinctive knowledge that claims knowledge of something that is apart from immediate discrete experience", fundamental aspects of the "external world" are necessarily aspects of our experience (as you note later on). This is different (seemingly) from things that solely arise in the mind. My imagination of a unicorn is distinctive knowledge (pertaining to whatever I imagined), but so is the distinction between the cup and the table (which isn't considered solely a part of the mind--it is object). It blends together, which is why certain aspects cross over into the external world from the mind. But more on that later.

    Likewise, when you state "Your discrete experience is just as much a reality as its attempts to claim something beyond them": the subject cannot rationally claim anything beyond discrete experience, that is all they have. I cannot claim that the table is a thing-in-itself, nor can I claim it is purely the product of the mind: both are equally inapplicable. However, if what you mean by "attempts to claim something beyond them" is simply inductions that pertain to the discrete experience of objects, then I have no quarrel.

    It is why I avoided the inevitable comparison to apriori and aposteriori. Apriori claims there are innate things we know that are formed without analysis. This is incorrect. All knowledge requires analysis. You can have beliefs that are concurrent with what could be known, but it doesn't mean you actually know them until you reason through them.

    This is not how I understood Kant's a priori vs a posteriori distinction: it is not blindly asserted. It is analyzed via reason by means of recursively examining reason upon itself, to extrapolate the apodictic forms it possesses. This is applied and, to an extent, true. The a priori actually salvaged the empiricist worldview, as even Hume noted that empiricism is predicated on causality (which is a problem if one is asserting everything must be applied to the external world to know it). Kant, generically speaking, simply provided (although he was against empiricism) what is logically demonstrably true of the form of reason itself (of subjectivity in a sense). We applicably know, via reason alone, that we are within an inescapable spatial & temporal reference. We are constrained to the principles of noncontradiction and sufficient reason, and, with the combination of the aforementioned, presuppose causality in any external application. We cannot empirically verify causality itself: it is impossible. Nor pon, etc. I do have to somewhat agree with you that Kant extrapolates much further than that, and claims things about the a priori that cannot possibly be known (like the non-spatial, non-temporal, etc), but within the logical constraints that are apodictically true for the subject's reason, it logically follows from the usage of such that there are certain principles that must exist for any observation to occur in the first place. Obviously there's the issue that we can't escape the apodictic rules of our reason, which are being utilized reflexively to even postulate this in the first place, and therefore it is only something that logically follows. But this applies to literally everything. To say that it makes the most sense (by a long shot) that we are derived from a brain is only something that logically follows (and also does not escape our apodictic rules of reason).

    Distinctive awareness - Our discrete experiences themselves are things we know.
    Contextual logical awareness - The construction of our discrete experiences into a logical set of rules and regulations.

    To clarify, our discrete experiences themselves are things we know by application via reason. Our awareness of the distinctions is also known by the same sort of application: reason. If that is what you are stating here, then I agree: I just do not find this sentence very clear about what you are trying to state. It could be that you are claiming they are essentially given, but I don't think you are stating that, which means it logically follows that they stem from reason. Moreover, I think the problem here is that both are constructions of logical rules and regulations: distinctive awareness is derived from reason, and reason is, upon reflexive examination, regulated by necessitous rules, whereas the "logical set of rules" you reference in "contextual logical awareness" is rules that, I think you are claiming, are not necessitous (as in a diversity of contexts can be produced, but it is important to remember that they are derived from those necessitous rules in which reason, apodictically, manifests itself).

    We distinctively know both of these contexts. Within our specially made contexts, if Gandolf is a good person, he WILL do X. The only reason Gandolf would not save the hobbit if it was an easy victory for him, is if he wasn't a good person. Here I have a perfectly logical and irrefutable context in my head. And yet, I can change the definitions, and a different logic will form. I can hold two different contexts of Gandolf, two sets of contextual logic, and distinctively know them both with contextual awareness.

    This is all fine, with the emphasis that this is applicably known via reason. IF conditionals are an apodictic instantiation of our reason: one of the logical regulations, upon recursive reflection, of reason itself. Depending on how you are defining those two conditional claims, it may solely pertain to reason or it may also pertain to the form of objects. If you mean in this example to define logically that a "good Gandolf" directly necessitates him doing X and, logically, that if he doesn't do X, then he isn't "good", then it is not only known in the mind (via reason pertaining solely to what lies in the mind), but also for all objects (all discrete experience of "objects"). You know, without application to the external world, that the logical defining of person P as "good" if they do X and not good if not X will hold for all experience (including that which pertains to the external world). This is "applicably known" and "distinctively known" (as you would define it) without "applying" to the external world, due to it relating to the necessary logical form of discrete experience.

    Of course, I could create something illogical as well. "Gandolf is a good person, therefore he would kill all good hobbits in the world." Do I distinctively know this? Yes. But I really don't have contextual logical awareness. I am not using the "context of logic".

    It depends on what you mean by "logic". If you are referring to an adopted logical system (such as classical logic), which I should emphasize is based on reason (which everyone has), then you are right. But you did still have a context of "logic" in the sense of the apodictic, necessitous forms of the instantiation of reason. Firstly, if you define "good person" in a way that is contradictory (as previously defined) to killing what is defined as "good hobbits", then you do not know that sentence distinctively--you know the exact contrary (the statement is false). However, one can hold such a contradiction if it is reasoned, no matter how irrational, to no longer be a strict contradiction. Maybe I decide that the end justifies the means: now that sentence is perfectly coherent. However, I could very well accept that sentence as "true", although I know it is contradictory, solely based off of "it makes me sleep better at night thinking it is true": this is still a reason. I could claim to hold it as a lie to annoy you, or just because I like lying: these are all reasons (not rational, but reasons). But my main point is that a person cannot conceive of whatever they want: they cannot hold that they are seeing a circle and a square (pertaining to the same object) at the same time. They can lie, for whatever reason, about it, but I know that they also do not distinctively "know" this. They may distinctively "know" that they want to lie about it for whatever reason, but they do not distinctively actually "know" that they are seeing two completely contradictory things. Likewise, even in the realm strictly pertaining to the mind, they cannot distinctively know a circle as both a circle and a rectangle. They can lie about it, or convince themselves it is somehow possible, but they cannot actually distinctively know this (this is not merely my contextual interpretation--unless they are not human).

    The rationale behind thinking logically is that, when you apply logical thinking to reality, you have a better chance of surviving.

    In a general sense, I agree that my survival is more likely if I abide by a coherent logical system (such as classical logic or something), but "survival" alone doesn't get you to any sort of altruism.

    You can see plenty of people who hold contexts that do not follow logic

    It doesn't follow a logical system that we have derived from our ability to reason. Everyone reasons. Not everyone is rational. There are apodictically true regulations of reason (which are obtained by analysis of a recursive use of reason on reason).

    and when they are shown it is not logical, they insist on believing that context regardless. This is the context they distinctively know.

    They do not necessarily distinctively "know" the content of the entirety of the context they hold. Again, they cannot hold they imagined a circle that was also a rectangle that was also a triangle.

    It doesn't work in application to reality, but that is not as important to them as holding the context for their own personal emotional gratification

    I agree, but what you mean by "application to reality" is "application specifically to the external world".

    1. Some things they can know in the mind which is not known in the external world.
    2. Some things they cannot know in the mind nor the external world.
    3. Some things they can know in the mind and the external world (by means of what is known in the mind).
    4. Some things they can know by means of application to the external world.

    I think you are trying to reduce it to simply 2 options: application to the mind, or application to the external world.

    So to clarify again, one can hold a distinctive logical or illogical context in their head. They distinctively know whatever those contexts are. It does not mean that those contexts can be applied beyond what is in their mind to reality without contradiction. We can strongly convince ourselves that it "must" be so, but we will never applicably know, until we apply it.

    You are right in the sense that we cannot claim that my imagination of a unicorn entails there is a unicorn in the external world, but that doesn't negate that discrete experience itself is the external world. Therefore, certain forms are apodictically true of the mind and the external world by proxy of the mind. A great example is causality.

    No, that is what our context of the world depends on. The world does not differentiate like we do. The world does not discretely experience. Matter and energy are all composed of electrons, which are composed of things we can break down further. Reality is not aware of this. This is a context of distinctive knowledge that we have applied to reality without contradiction. It is not the reverse.

    Again, discrete experience is the world. We cannot claim that an electron exists as a thing-in-itself (apart from the subject), nor can we claim that it doesn't exist as a thing-in-itself (completely contingent on the subject). We can claim that certain aspects of objects, which are a part of discrete experience, are contingent on particular objects that we deem to have obtained our sensations and produced our perceptions (i.e. color is not an aspect of my keyboard; it is a matter of light wavelengths directed through my eyes which are then interpreted by my brain--all of these are objects that are a part of discrete experience). All of it logically follows, but that is just it: it logically follows via reason. Without such--which is the consideration of the absence of reason by reason itself--we can only hold indeterminacy. The "external world", object, is simply that which reason has deemed out of its direct control, but those deemed "objects" follow necessary forms (discrete experience) that stem from reason.

    I've noted before that math is the logical consequence of being able to discretely experience. 1, is the concept of "a discrete experience." That is entirely of our own making. It is not that the external world is contingent on math, it is that our ability to understand the world, is contingent on our ability to discretely experience, and logically think about what that entails.

    I think, given that discrete experience is the world, that you agree with me (at least partially here). Nothing you said here is incorrect; your positing of an external world that is a thing-in-itself is where you went wrong. Just as someone could equally go wrong by positing the exact opposite.

    Does this mean that reality is contingent on our observation? Not at all. It means our understanding of the world, our application of our distinctive knowledge to reality, is contingent on our distinctive knowledge.

    Again, we cannot claim either. We have reason, and from it stems all else: this doesn't mean that there are no things-in-themselves, or that there are. Only that we discretely experience things, which are deemed objects, and all of those objects abide by mathematics because, as you said, discrete experience is what derives multiplicity in the first place. Therefore, certain aspects of the external world are known by reason alone, because certain aspects of the external world necessarily abide by those regulated forms of reason. This is not to say that you are entirely wrong either, as we can claim "objects", which are out of our control, but with the necessary understanding that mathematics is true of all objects (because it is discrete experience).

    Exactly. If you use a logical context that you distinctively know, there are certain results that must follow from it. But just because it fits in your head, does not mean you can applicably know that your logical context can be known in application to reality, until you apply it to reality by adding two potatoes together. To clarify, I mean the totality of the act, not an abstract.

    I am having a rough time understanding what you mean here.

    When I add these two potatoes together, what happens if one breaks in half? Do I have two potatoes at that point? No, so it turns out I wasn't able to add "these" two potatoes.

    I feel like you aren't referring to mathematical addition, but to combination. Are you trying to get at the idea that two potatoes aren't necessarily combinable? Like meshing two potatoes together? That's not mathematical addition (or at least not what I am thinking of). We know that one potato and another potato make up two potatoes. Even if one breaks in half, one half + one half + one whole entails two. Combining two potatoes won't give you two distinct potatoes; it will give you one big potato (assuming that were even possible) or two potatoes' worth of smashed potatoes. If that is what you are referring to, then I would say you are talking about what must be empirically verified about the cohesion of "potatoes" in the external world, which definitely requires an empirical test to "know" it. However, the mathematical addition of one potato to another, where two distinct potatoes are the result, is known about the external world by means of the mind via reason.

    But do you applicably know that you can fit this square and circle I give you in that way before you attempt it? No. You measure the square, you measure the circle. Everything points that it should fit perfectly. But applicably unknown to you, I made them magnetized to where they will always repel. As such, they will never actually fit due to the repulsion that you would not applicably know about, until you tried to put them together.

    This is 100% correct. It pertains directly to objects themselves, which requires empirical observation. However, that does not negate my claim that the ability to fit a circle in a square is known in the mind. Shape itself is a form of all discrete experience, and therefore can pertain to the external world through reason alone. I know that rectangular shapes take a specific form, and that pertains not only to what I imagine but necessarily to objects as well. Think of it this way: I can also "know" what cannot occur in the external world, without ever empirically testing it, based on shapes--which encompass the external world, as it is discrete experience. Can you fit a square of 5 x 5 inches in a circle of radius 0.5 inches? No. Now, I think what you are trying to get at is that I will not know this about a particular circle and square in the external world until I attempt it--as my calculations (dimensions) may be off, and the shapes can fit because they are not of the aforementioned dimensions. However, this does not negate the fact that I cannot, in the external world, fit shapes of those dimensions into one another as specified. I know this of the external world as well as of the mind, without application to the external world. Moreover, if the same ruler is used in both readings, then I do not even need to attempt to fit them together in the external world, because I do know it will not happen. Firstly, if "inches" is consistent (which is implied by using the same ruler), then it doesn't matter whether my measured "in" actually is what we would define as an "in". Secondly, the significant digits are a vital consideration which determines whether one actually has to attempt fitting them together to "know" if they can fit. In this case, the significant digits can be determined, with reason alone, not to allow a margin of error large enough to let the square fit. A square of 5 "whatevers" (inches) by 5 "whatevers" will not fit in a circle of radius 0.5 "whatevers".
Even the estimated digits, which would make the measurements 5.X and 0.5X (where X is the estimated digit), will not allow for any sort of variance that would let either of us claim a margin of error large enough to presume we need to physically test it. If instead the square were 1.X "whatevers" by 1.X "whatevers" and the circle had a radius of 1.Y "whatevers" (where Y is estimated smaller than X), then we could reason that we might be wrong.
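    For what it's worth, this kind of dimensional reasoning can be sketched as a quick abstract check, assuming (as a hypothetical criterion) that a square fits inside a circle exactly when its diagonal does not exceed the circle's diameter, with a tolerance parameter standing in for the margin of error:

```python
import math

def square_fits_in_circle(side, radius, tolerance=0.0):
    """A square fits inside a circle only if its diagonal
    (side * sqrt(2)) does not exceed the circle's diameter;
    `tolerance` stands in for the measurement's margin of error."""
    return side * math.sqrt(2) <= 2 * radius + tolerance

# A 5 x 5 square can never fit in a circle of radius 0.5,
# even granting a generous margin of error:
print(square_fits_in_circle(5, 0.5))         # False
print(square_fits_in_circle(5, 0.5, 1.0))    # False

# A 1 x 1 square in a circle of radius ~0.707 is borderline,
# so there the estimated digits genuinely matter:
print(square_fits_in_circle(1, 0.707))       # False
print(square_fits_in_circle(1, 0.707, 0.01)) # True
```

    No empirical trial is needed in the first case; only the borderline case would call for one.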

    Now, I like your example of magnets to show that I still wouldn't know, even if I knew the dimensions checked out, that they would fit. However, I can "know":

    1. That dimensions that cannot mathematically fit, with the margin of error considered ineffective, cannot fit in the external world (this is a consideration of reason within the mind which necessarily translates to the external world, as it is simply discrete experience).

    2. That a square can fit in a circle (this is a sole consideration of the mind, but it also translates into what cannot happen in the external world). I know, if that is true, that nothing pertaining to the shape of an object will necessitate that an object of shape "square" cannot fit into a shape of "circle". As you noted earlier, it is true that knowing two particular shapes fit into one another in the external world requires empirical observation, but I still nevertheless know that circularity and squareness, in shape, do not necessitate that they cannot be fit together: this is true of the external world as much as of my mind.

    I understand. But your inability to conceive of anything else is because that is the distinctive context you have chosen. There are people who conceive of different things. I can make a context of space where gravity does not apply. I can conceive of space as something that can allow warp travel or teleportation.

    This is not the uniform, holistic, spatial reference I am referring to. Yes, people can conceive of spatial frameworks, under the holistic spatial reference, that do not abide by the same principles as those which we discover of the external world. My inability to conceive of something else is not a distinctive context I have chosen. Yes, I could choose to envision a spatial framework, under space, where I can fly, and, yes, this would be a distinctive context. However, distinctive contexts themselves depend on a regulated, inescapable form, which is space, and which cannot be contradicted: it is not chosen, it is always demonstrably true. Even imagined spatial frameworks abide by space itself. This is not to be confused with abiding by "outer space" or "string theory" or "my made-up gravity-free world". A necessary rule of the manifestations of reason is that they are spatially referenced (inevitably). Does that make sense?

    To hammer home, that is because of our application. When you define a logical context of space that cannot be applied and contradicts the very moment of your occupation of space, it is immediately contradicted by reality.

    Again, you are right, but this is not relevant to what I am trying to say. I am not referring to being able to attempt an application of my gravity-free spatial framework to the external world, only to be met with gravity. I am referring to that which is discovered, projected, and conceivable--holistically, all experience. You don't apply the holistic reference of space to anything (you cannot); it is that which is necessarily always utilized by reason, in its manifestations (like thoughts), to apply anything in the mind or in the external world. With respect to what you were getting at (or at least what I understand you to be saying), you are right.

    I think you misunderstood what I was trying to state. I was not stating a scientific theory. I was stating a theory. A scientific theory is combination of applicable knowledge for the parts of the theory that have been tested. Any "theories" on scientific theories are speculations based on a hierarchy of logic and inductions.

    I am not following what you are trying to say here. I was under the impression we were discussing science and the theories therein: those are all scientific theories. When you say "I was stating a theory", what do you mean? Colloquially a "theory"? What else is there in science that is a theory besides scientific theories? My point was that we do not simply accept that which is most cogent; it must pass a threshold of cogency in terms of the vast majority of institutions that are in place for developing knowledge. At what point is it cogent enough for me to base my actions on it? How cogent an induction do global warming and climate change have to be for me to change my lifestyle? How cogent does evolution need to be for me to base biology on it? Just simply the most cogent? Scientific theories require much more than that, no?

    If they are using knowledge correctly, then yes. But with this epistemology, we can re-examine certain knowledge claims about history and determine if they are applicably known, or if they are simply the most cogent inductions we can conclude. Sometimes there are things outside of what can be applicably known. In that case, we only have the best cogent inductions to go on. We may not like that there are things outside of applicable knowledge, or like the idea that many of our constructions of the past are cogent inductions, but our like or dislike of that has nothing to do with the soundness of this epistemological theory.

    I think I am following what you are saying now. We don't ever, under this epistemology, really state "historical facts" other than those which are deduced. Everything else is simply a hierarchy of inductions, of which we should always simply hold the most cogent one. The problem is that there's never a suspension of judgement: we also claim a belief towards whatever is most cogent. Again, when is it cogent enough for me to take action based on it?

    No, that is not "truth" as I defined it. That is simply applicable knowledge. And applicable knowledge, is not truth. Truth is an inapplicable plausibility. It is the combination of all possible contexts applied to all of reality without a contradiction. It is an impossibility to obtain. It is an extremely common mistake to equate knowledge with truth; as I've noted, I've done it myself.

    Again, this isn't true. "Truth" being the "combination of all possible contexts applied to all of reality without contradiction" is the definition of that which is apodictically true for the subject. Again, take space, or causality, or the principle of noncontradiction: these are true of all reality, because I am not just talking about the external world; I am referring to everything, which is discrete experience (as you put it). The world is reason. This doesn't mean that we can obtain "truth" of anything sans reason; rather, we must understand that we can't even conceive of such a question: "without (sans) reason" is itself considered via reason and its necessary forms (i.e. "without" is a spatial reference, and the entire question is posed via reason).

    To explain, I am limited by my distinctive context. I can take all the possible distinctive contexts I have, and apply them to reality. Whatever is left without contradiction is what I applicably know. But because my distinctive contexts are limited, it cannot encompass all possible distinctive contexts that could be. Not to mention I'm limited in my applicable context as well. I will never applicably know the world as a massive Tyrannosaurus Rex. I will never applicably know the world as someone who is incapable of visualizing in their mind. As such, truth is an applicably unobtainable definition.

    I think you are positing an objective world that is a thing-in-itself, where "truth" is if we were essentially omniscient with respect to the understanding of an object via all contexts. In that sense, I agree. But I don't think you can posit such.

    The problem here is in your sentence, "he speculates it could be the case". This is just redundancy. "Speculation" means "I believe X to be the case despite not having any experience of applicable knowledge prior". "It could be the case" means, "I believe it to be the case", but you haven't added any reasoning why it could be the case. Is it the case because of applicable knowledge, probability, possibility, etc? I could just as easily state, "He speculates that it's probable", or "He speculates that it's possible".

    I don't think that really addresses the issue. I used the terminology "speculates it could" because you used it previously, and I was trying to expose that it is the same thing as possibility (in a colloquial sense). It is redundant: to say "it could" is to say "it is possible" (in the old sense of the term). And, no, "it could be the case" is not equivalent to "I believe it to be the case". If I claim "Jones could have 5 coins in his pocket", I am not stating that I believe he does have 5 coins in his pocket. I am saying nothing contradicts the idea that he has 5 coins in his pocket (e.g. no dimensions dictate otherwise, etc.). My reasoning for why "it could be the case" is abstract, but it has nothing to do with reasons why he does have 5 coins in his pocket (or why I believe he does). In my scenario, he can't claim it is probable or possible. There's a difference between claiming there is colloquially a possibility that something can occur and actually believing that it occurred. Does that make sense? The dilemma is that the former is non-existent in your epistemology. Smith, in the sense that he isn't claiming to believe there are 5 coins in Jones' pocket, is forced to say nothing at all.

    It is a claim of belief, without the clarification of what leads to holding that belief.

    Potentiality is very clear (actually more clear, I would say, than possibility): that which is not contradicted in the abstract which allows that it could occur. Now, I don't like using "could" because it is utilized in colloquial speech in the sense of possibility and potentiality (possibility as something we could colloquially claim has been proven to occur and potentiality being that which simply hasn't been contradicted yet).

    I felt I did use your example and successfully point out times we can claim probability and speculation, but that's because I fleshed out the scenario to clarify the specifics. If you do not give the specifics of what the underlying induction is based on, then it is simply an unexamined induction, and at best, a guess.

    I felt like I made it clear. Smith is not claiming it is probable: there's no denominator there. He isn't claiming possibility: he has not seen 5 coins in Jones' pocket before. He isn't going to claim an irrational induction, because he hasn't found any contradictions. He is not claiming a speculation that Jones has five coins in his pocket: he is claiming that Jones could potentially have five coins in his pocket. So what does he claim? As you agreed, saying he "speculates that it could happen" is redundant: either he is claiming that it "could" happen in the sense of possibility (as in, he has experienced it once before), which he is not in this case, or he is claiming that he can't contradict the idea that Jones potentially has five coins in his pocket. He isn't asserting that Jones does, just that it could be the case (given his current understanding).

    I look forward to hearing from you,
    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,

    First of all, an apology is due: I misunderstood (slash completely forgot) that you are claiming that abstract reasoning is knowledge (as you define it, “distinctive knowledge”). Our dispute actually lies, contrary to what I previously claimed, in whether both types of knowledge are applied.

    For starters, this may very well merely be a semantic dispute: only time will tell.

    When you state:
    What you have been trying to do, is state that distinctive knowledge can be applicable knowledge without the act of application.

    I think you are simply semantically defining your way into an obvious contradiction. As you are probably already well aware, if it is true that there is no act of application, then it logically follows that there is no application. I am claiming the contrary: distinctive knowledge is applied. However, using that terminology (distinctive) may be causing some issues (I’m not sure), so let me try to explicate my position more proficiently. First of all, I don’t think we are using the term “reality” in the same way: you seem to be referring to what I would deem “the external world” (to be more precise: “that which is object”--which includes the body to an extent), whereas I refer to “reality” as holistically the totality of existence (which includes the subject and object). Therefore, when you state “applicable knowledge”, I interpret that as “that which refers to the external world and is thus applied to it for validity”. When you state “distinctive knowledge”, I am implicitly interpreting it as “that which refers to the mind, or that which resides in it, and thus is applied to it for validity”. Please note that I am using “subject” and “object” incredibly, purposely loosely: simply for the purpose of explicating two major distinctions I think you are making. So when you talk about how what I reason in my mind doesn’t grant me knowledge about how that thing truly is in “reality” (i.e. your hydrogen + oxygen example), I proclaim “that is true!”. But why is this? It is because, I would say, the reasoning pertains to objects specifically. Therefore, the application necessarily cannot be merely from the mind. There are three types I would like to expose hereinafter:

    1. That which is in relation to a specific object
    2. That which is in relation to an object, but pertains to the general form of all or some objects
    3. That which is in relation purely to the subject

    Everything is derived from reason (or at least that is the position I take) and, consequently, the distinction between the external world and the internal world (so to speak, very loosely) is blended together (into those three aforementioned types). Certain aspects that do not directly pertain to an object can, and potentially must, be derived purely from reason. For example, when you say:

    What I am saying is you can distinctively know that if you have an identity of 1, and an identity of 1, that it will make an identity of two. But if you've never added two potatos before, you don't applicably know if you can

    The deductive assertion of “two potatoes” (as conceptualized without refurbishment from the standard definitions) necessitates the operation of addition: regardless of whether (1) the operation has been applied in the external world or (2) potatoes even exist in the external world. If we are utilizing distinction (which is implied with “potatoes” in “two potatoes”, as well as multiplicity in terms of “two”), then pure reason can derive knowledge that “one” potato + “one” potato = “two” potatoes. This is, as you are already inferring, simply the exact same thing as your first sentence (in the quote): 1 + 1 = 2. As far as I understand your example here, you are referring specifically to the addition operation and not the existence of potatoes (“you’ve never added two potatos before, you don’t applicably know if you can”): but, as I’m hopefully demonstrating, you definitely can know that. In simpler terms, math applies before any application to the empirical world because it is what the external world is contingent on: differentiation. This application, although it can be better understood with the use of objects, can be solely derived from reason (1 thought + 1 thought = 2 thoughts; this abstractly applies to everything). Therefore, if I distinctively define a potato in a particular way where it implies “multiplicity” and “quantity”, then the operation of addition must follow. The only way I can fathom that this could be negated is if the universality of mathematics is denied: which would entail the rejection of differentiation (“discrete experience” itself). Notice that 1 + 1 = 2 is of type #3, but, due to the intertwining nature of subject-object, is also utilized in #2 without any application to the external world. The object presupposes mathematics (differentiation): without it, there is no object in the first place.
Differentiation is a universal (necessarily so) form of experience: thus, mathematical operations (to go back to your example) applied in the abstract are thereby also applied (if you will) to the external world.
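    Put differently, the claim that addition rides on differentiation alone can be illustrated with a toy counting sketch (the function name is hypothetical; the only capacity assumed is the ability to distinguish one item from another):

```python
def count(discrete_experiences):
    """Counting is iterated differentiation: each newly
    distinguished item increments the tally by one."""
    tally = 0
    for _ in discrete_experiences:
        tally += 1
    return tally

# Whatever the discrete items happen to be (potatoes, thoughts),
# distinguishing one and then another always yields two:
print(count(["potato", "potato"]))            # 2
print(count(["thought", "thought"]))          # 2
print(count(["potato"]) + count(["potato"]))  # 1 + 1 = 2
```

    Nothing about the items themselves enters the computation; only the distinctions do.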

    Likewise, consider shapes: these are universal forms (not in the sense of Universals in philosophy; they can be, and I think they are, particulars in that sense) of experience derived solely from reason. I can know, in the abstract, that a circle can fit in a square. I do not need to physically see (empirically observe) a circle inscribed in a square to know this. Not only can I know this, applied via reason in the abstract, in relation to subject, but, since it abides by type #2, also in relation to object. Again, the universality of differentiation would have to be refuted for this not to hold for both subject (that which is conjured) and object (that which isn’t).

    Moreover, consider mathematical equations. If I have x + y = 1, I can, purely with reason, solve for x to see what x = ? is. Prior to this abstract application of the process of thoughts, I did not “know” what x = ? entails; afterwards, without any external application, I figured it out: this was abstractly obtained, not given.
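    That purely abstract solving step can be mirrored symbolically; a minimal sketch, assuming the third-party sympy library is available:

```python
from sympy import Eq, solve, symbols

x, y = symbols("x y")

# Solving x + y = 1 for x requires no empirical input at all;
# the answer is obtained by manipulation, not given:
solution = solve(Eq(x + y, 1), x)
print(solution)  # [1 - y]
```

    The whole derivation happens in the abstract, yet its result constrains anything in the external world that the equation is taken to describe.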

    Now, the consideration of whether a “potato” exists in the external world, just like your hydrogen-oxygen example, requires empirical observation and, therefore, pertains to type #1 only. The mere form of the instantiation of objects will not get you to knowledge about a particular object you have the ability to imagine. But this does not negate the fact that we are able to apply in the abstract. I would also like to note that this also entails that you do know, by application, that your best guess, from reasoning abstractly, is whatever you deemed your best guess.

    To quickly cover #3, the knowledge that I did imagine a unicorn in my head, regardless of whether it is or isn’t instantiated in the external world, was applied strictly by reason (no empirical observation). It was not given, it was obtained. No matter how swift the conclusion was, I had to reason my way, which is the application of the principle of noncontradiction (along with other principles), into knowing such.

    With this in mind, I am not referring to objects when I assert space is purely apodictically true. Nor is it in relation to other spatial frameworks we can hold within the uniform spatial plane (like string theory, etc): I am referring to that which reason will always apodictically find true of all of its thoughts and, subsequently, all of its experience holistically—the inevitable spatial reference. Yes, we can conceive of multiple spatial frameworks, but they are necessarily within space. Nothing I can conceive of nor can I claim will ever not be within a spatial reference. Although this is slightly off topic, this is why I reject the notion of non-spatial claims: it is merely the fusion of absence (as noted under the spatial reference), linguistic capability (we can combine the words together to make the claim), and the holistic spatial reference (i.e. “non-” + “spatial”). This is, in my eyes, no different than saying “square circle”. So when you say:

    No, space in application, is not proven by distinctive knowledge alone. I can imagine a whole set of rules and regulations about something called space in my head, that within this abstract context, are perfectly rational and valid. But, when I take my theory and apply it to a square inch cube of reality, I find a contradiction. I can distinctively have a theory in my head that I know, but one that I cannot apply to reality.

    I am not referring to what we induce is under our inevitable spatial references (such as the makeup of “outer space” or the mereological composition of the space), but, rather, the holistic, unescapable, spatial captivity we are both subjected to: we cannot conceive of anything else. Does that make sense?

    The layman already misuses the idea of knowledge, and there is no rational or objective measure to counter them. But I can. I can teach a layperson. I can have a consistent and logical foundation that can be shown to be useful. People's decision to misuse or reject something simply because they can, is not an argument against the functionality and usefulness of the tool. A person can use a hammer for a screw, and that's their choice, not an argument for the ineffectiveness of a hammer as a tool for a nail!

    Fair enough.

    I want to emphasize again, the epistemology I am proposing is not saying knowledge is truth. That is very important. A common mistake people make in approaching epistemology (I have done the same) is conflating truth with knowledge. I have defined earlier what "truth" would be in this epistemology, and it is outside of being able to be applicably known. I can distinctively know it, but I cannot applicably know it.

    Completely understandable. I would also like to add that even “truth” in terms of distinctively known is merely in relation to the subject: it is still not absolute “truth”--only absolute, paradoxically, relative to the subject.

    To note it again, distinctive and applicable truth would be the application of all possible contexts to a situation, and what would remain without contradiction after it was over.

    I am a bit confused by this quote: you stated that “distinctive [and] applicable truth would be the application of all possible contexts to a situation”, which concedes that it is applied. I am presuming this is not what you meant.

    1. Inductions are evaluated by hierarchies.
    2. Inductions also have a chain of reasoning, and that chain also follows the hierarchy.
    3. Hierarchies can only be related to by the conclusions they reach about a subject. Comparing the inductions about two completely different subjects is useless.

    I am still hesitant about #3, but I will refrain for now (and let you respond to the rest first).

    So, I can first know that the hierarchy is used in one subject. For example, we take the subject of evolution. We do not compare inductions about evolution, to the inductions about Saturn. That would be like comparing our knowledge of an apple to the knowledge of a horse, and saying that the knowledge of a horse should have any impact on the knowledge of this apple we are currently eating.

    I think for now, I will swap my initial analogy for your other one (because I think mine was deviating from the main purpose):

    So we pick evolution. I speculate that because certain dinosaurs had a particular bone structure, had feathers, and DNA structure, that birds evolved from those dinosaurs. This is based on our previously known possibilities in how DNA evolves, and how bone structure relates to other creatures. To make this simple, this plausibility is based on other possibilities.

    I have another theory. Space aliens zapped plants with a ray gun that evolved certain plants into birds. The problem is, this is not based on any applicable knowledge, much less possibilities. It is also a speculation, but its chain of reasoning is far less cogent than the first theory, so it is more rational to pursue the first.

    This is more in line with the main point I am trying to convey: theories are not what is most cogent, they are what has passed a threshold. Whether either of us likes it or not, we do not grant the title "theory", scientifically, to the most cogent induction out of what we know: that is a hypothesis at best. Even in relation to the same exact claim (so forget comparing Saturn to a horse for now—although we can definitely talk about that too), we hold uncertainty in most fields of study until a claim is considered worthy of the title "theory" or "true" or "fact" (etc.). It isn't necessarily bad that your epistemology erodes this aspect, if, and only if, it addresses it properly (I would say). As another example, historians do not deem what is historically known based on what is the most cogent induction (currently); it has to pass a threshold. We don't take one reference to a guy named "bob" and go with the best speculation we can rationally come up with. As of now, your epistemology doesn't seem to account for this. We do not accept all contextually "most cogent" inductive beliefs; we are typically selective. Are you claiming we should just accept all of the most cogent beliefs (with respect to each hierarchical context)?
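    The contrast being drawn here, accepting whatever is most cogent versus accepting only what clears a threshold, can be made concrete with a toy sketch (the claims and credence numbers below are invented purely for illustration):

```python
# Toy illustration: always-accept-the-most-cogent vs. threshold acceptance.
# The claims and credence values are invented for illustration only.
candidates = {
    "birds evolved from dinosaurs": 0.72,
    "aliens zapped plants into birds": 0.01,
}

# Rule 1: always accept the most cogent induction. Some belief is always
# accepted, no matter how weak the whole field of candidates is.
most_cogent = max(candidates, key=candidates.get)

# Rule 2: threshold acceptance (as with scientific "theory" or historical
# "fact"): nothing is accepted unless its credence clears a bar.
THRESHOLD = 0.95
accepted = [claim for claim, credence in candidates.items() if credence >= THRESHOLD]

print(most_cogent)  # the strongest candidate, however weak
print(accepted)     # empty list: nothing has passed the threshold
```

    The point of the sketch is only that the two rules come apart: the first rule always crowns a winner, while the second can refuse to crown anything.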

    Within the context you set up, you may be correct. But in another context, he can claim it is possible or probable. For example, Smith sees Jones slip five coins into his pocket. Smith leaves the room for five minutes and comes back. Is it possible Jones could fit five coins in his pocket? Yes. Is it possible that Jones did not remove those five coins in the five minutes he was gone? Yes. We know Jones left those coins in his pocket for a while, therefore it is possible that Jones could continue to leave those coins in his pocket.

    I don’t think you really address my issue (I probably just didn’t explicate it properly). In my scenario with Smith, he isn’t speculating that Jones has 5 coins in his pocket: he is claiming it has the potential to occur. The dilemma is this:

    1. He can’t claim possibility (in my scenario)
    2. He can’t claim probability (in my scenario)
    3. He can’t claim irrationality
    4. He can’t claim speculation

    So what does he claim in your terminology? They are all exhausted. If he claims that he speculates it could be the case that Jones has 5 coins in his pocket, then he is literally claiming the colloquial use of the term possibility. I am salvaging this with "could" referring to potentiality. I am not quite following how you reconcile this dilemma.

    Again, to keep this relatively short, I will address the rationality vs reason parts later. I would just like to point out that I agree, but you were referring to rationality, not reason. But more on that at a later time (I think we need to resolve the previous disputes first).

    I think you're getting the idea of contexts now. The next step is to realize that your contexts that you defined are abstractions, or distinctive knowledge rules in your own head. If we can apply those contexts to reality without contradiction, then they can be applicably known, and useful to us. But there is no one "Temporal context". There is your personal context of "Temporal". I could make my own. We could agree on a context together. In another society, perhaps they have no idea of time, just change.

    Time is change. What you are referring to is our abstraction of time into clocks (I am presuming), which is most definitely correct. However, assuming I can converse with them (or communicate somehow somewhat properly), they will not be able to contradict the notion of space and time. You are right that they may reject any further extrapolation of mereological structures beyond what they immediately see, but that would not have any effect on my definition of "context", since any mereological consideration would thereby be omitted anyways. I'm not quite following how you can create a different "temporal context" than mine, other than by semantically refurbishing the underlying meaning. You can surely deny abstract clocks, but not causality.

    To answer your next question, "What is useful", is when we create a context that can be applied to reality, and it helps us live, be healthy, or live an optimal life. Of course, that's what I consider useful. Perhaps someone considers what is useful is, "What makes me feel like I'm correct in what I believe." Religions for example. There are people who will sacrifice their life, health, etc for a particular context.

    Convincing others to change their contexts was not part of the original paper. That is a daunting enough challenge as its own topic. In passing, as a very loose starting point, I believe we must appeal to what a person feels adds value to their lives, and demonstrate how an alternative context serves that better than their current context. This of course changes for every individual. A context of extreme rationality may appeal to certain people, but if it does not serve other people's values, they will reject it for others.

    This feels like "context" is truly ambiguous. The term "context" needs to have some sort of reasoning behind it that people abide by; otherwise it is pure chaos. I think the main focus of epistemology is to provide a clear derivation of what "knowledge" is and how to obtain it (in our case, including inductive beliefs). Therefore, I don't think we can, without contradiction, define things purposely ambiguously.

    My inability to apply something, is the application to reality. When I try to apply what I distinctively know cannot be applied to reality, reality contradicts my attempt at application

    This is an application in the abstract. You didn't observe any contradiction with respect to objects; you reasoned that, in this case, the term "non-" + "material" + "being" cannot exist in what is deemed a "material" + "world". This is a contradiction that did not get applied to any objects.

    If I were to apply what I distinctively know cannot be applied to reality, and yet reality showed I could apply it to reality, then my distinctive knowledge would be wrong in application.

    In your example, specifically as you outlined it, this is impossible. You defined your way into a contradiction, which means you are abiding by type #3: pure reason. Saying there is a non-material unicorn in a strictly material world is just like the consideration of a square circle. Now, to claim that a material unicorn, as imagined, cannot exist in the material world would be something that abides by the quote here (that you said), because there's no pure reason that can be applied (at least not without further context): empirical observation is required.


    No, it at best proves the possibility that the Earth is round. If you take small spherical objects and show that shadows will function a particular way, then demonstrate the Earth's shadows also function that way, then it is possible the Earth is spherical. But until you actually measure the Earth, you cannot applicably know if it is spherical. Again, perhaps there was some other shape in reality that had its shadows function like a sphere? For example, a sphere cut in half. Wouldn't the shadows on a very small portion of the rounded sphere act the same as a full sphere? If you are to state reality is a particular way, it must be applied without contradiction to applicably know it.

    It is true that it does not prove that the earth is completely a sphere, but it does prove it is spherical (round and not flat). It isn't merely a possibility: it cannot, even under what you described, be a flat plane. Sure, it could even be 3/4ths of a sphere, but it is nevertheless spherically shaped. Maybe that is what you were getting at; in that case we agree.

    Science does not deal in truth. Science deals in falsification. When a theory is proposed, its affirmation is not what is tested. It is the attempt at its negation that is tested. Once it withstands all attempts at its negation, then it is considered viable to use for now. But nothing in science is ever considered certain, and it is always open to be challenged.

    This is not true. What you have described is a really vigorous form of the appeal to ignorance fallacy. Science does not deal solely with falsification; however, it does holistically deal with falsifiability (and the two are not equivalent). It is necessary that a claim be falsifiable, but we do not assert as "theories" whatever has not yet been falsified in tests. We not only try to falsify the hypothesis, we also verify that the results are what should be expected. We confirm, not simply by saying that no piece of evidence directly contradicts the idea. It pertains to "truth" relative to objects, which are relative to subjects.

    I look forward to hearing from you,
    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,

    I think the notion of something abstract is it is a concept of the mind. Math is abstract thinking, and we discussed earlier how "1" represents "an identity". We really can't apply an abstract to reality without greater specifics. I need to apply 1 brick, or 1 stone. The idea of applying 1 is simply discretely experiencing a one.

    I am not sure what you mean by applying distinctive knowledge in the abstract. All this seems to be doing is sorting out the different ideas within my head to be consistent with what I know. Math again is the perfect example. I know that 1 + 1 make 2. Could I add another 1 to that 2 and get 3? Yes. But when it's time to apply that to reality, what specifically is the 1, the 1, and the 2?

    So I think I have identified our fundamental difference: you seem to be only allowing what is empirically known to be what can be "known", whereas I am allowing for knowledge that can, along with what is empirical, arise from the mind. I think that the flaw in taking your approach here, assuming I have accurately depicted your position, is that certain aspects of knowledge precede empirical observation. For example, try applying without contradiction (in the sense that you seem to be using it--empirically) the principle of noncontradiction. I don't think you can: it is apodictically true by means of reason alone. Likewise, try to empirically prove the principle of sufficient reason (which can be posited equally as "causation") by applying it to reality without contradiction (in the sense you are using it): I don't think you can. The principle of sufficient reason and causality are both presupposed in any empirical observation. Furthermore, try proving space empirically: I don't think you can. Space, in one stroke, is proven apodictically (by means of the principle of noncontradiction) with reason alone. Moreover, try to prove time without appealing to causation (which in turn cannot be empirically proven) or to reason. Maybe we are just using the term "reality" differently? I mean the totality of existence: not just the "external" world. Again, just as another example, try creating a logical system, which is utilized by everyone (whether they realize it or not) every day, without appealing strictly to reason.

    To take your example of mathematics, there are two completely separate propositions that I think you are combining into one. The abstract consideration of mathematics, regardless of whether it is instantiated in the "external" world, is still known (which I think you admit just fine): this is an abstract consideration (meaning within the mind). I find your example a bit confusing, as I think you are agreeing with me yet arguing against me. If you say that "I know that 1 + 1 make 2", you seem to be agreeing that you can know things without "applying them to reality" (as you use that term), and yet you then use a (completely valid, I must say) argument for why abstract numbers don't necessarily map to real quantities in the external world to prove we must apply things without contradiction to reality to "know" them. If we have a mathematical formula, we can "know" it will work in relation to the "external" world regardless of whether it actually is instantiated in it. As we have previously discussed, mathematical inductions aren't really inductions; they are true with an if condition: but that if condition doesn't mean I can't claim to know that N + M abides by certain rules regardless. This is done with reason, which is what I mean by abstract consideration.

    That leads me to what I think is our second fundamental disagreement: whether inductions are knowledge or not. Initially, I was inclined to adamantly claim they are, but upon further contemplation I actually really enjoy the idea of degrading inductions to beliefs with different credence levels (and not knowledge). However, I think there may be dangers in this kind of approach: without some means of determining something "known", in terms of inductions, versus what is merely a belief, I am not sure how practical this will be for the layman--I can envision everyone shouting "everything is just a belief!". Likewise, it isn't just about what is more cogent; it is about what we claim to have passed a threshold to be considered "true". Although I'm not particularly fond of that, it is an obvious distinction between a rigorously tested scientific theory and any other speculation.

    Plausibilities are not deductions though. They are inductions. And inductions, are not knowledge. Now can we further study inductions now that we have a basis of knowledge to work with, and possibly refine and come up with new outlooks? Sure! You have to realize, that without a solid foundation of what knowledge is, the study and breakdown of inductions has been largely a failure. I wouldn't say that not yet going into a deep dive of a particular induction is a weakness of the epistemology, it just hasn't gotten there yet.

    With the aforementioned in mind: when I stated that your epistemology hasn't quite addressed the pressing matters, I was claiming that without the full understanding that, on your view, inductions are not knowledge; with that understanding, your epistemology does cover what "knowledge" is holistically. However, I don't think this fully addresses the issue, as it can be posited just the same now in terms of "belief". I find myself in the same dilemma where the theory of evolution and there being a teapot floating around Jupiter are both speculations. What bothers me is not that they both are speculations but, rather, that there is no distinction made between them: this is what I mean by the epistemology not quite addressing the most pressing matters (most people will agree on that which they immediately see--even if they don't know what a deduction is--but the real disputes arise around inductions). This isn't meant as a devastating blow to your epistemology; it is just an observation that much needs to be addressed before I can confidently state that it is a functional theory (no offense meant). I think we agree on this, in terms of the underlying meaning we are both trying to convey.

    Correct. And I see nothing wrong with that. Once he slides the coins into a pocket, then he'll know its possible for 5 coins to fit in a pocket of that size.

    Although I understand what you are saying, and kind of like it, I think this is much more problematic than you realize. Firstly, he most likely won't know the size of Jones' pockets. Even if he did take the time to measure them, then even with the consideration that he has witnessed 5 coins in Jones' pocket of size L * W * D, he cannot claim it is possible for those 5 coins to fit in a pocket of (> L) * (> W) * (> D). He could abstractly reason that if he experienced 5 coins in a pocket of some size, then, considering mathematics in the abstract, it is possible for 5 coins to fit in a pocket that is greater than that size (assuming the pocket is empty): but he didn't experience it for the greater-sized pocket. To me, it seems wrong to think that I cannot reason conditionally that, regardless of whether the pocket of greater size is instantiated in the external world, it is possible to fit 5 coins into that greater-sized pocket. Likewise, if I have experienced 1 coin, know the dimensions of that coin, and know the dimensions of Jones' pocket, I can claim it is possible to fit 5 coins in Jones' pocket with the consideration of math in the abstract. The only way I can fathom countering this is to deny the universality of mathematics, which seems obviously wrong to me.
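    The conditional reasoning described here, that if 5 coins fit a pocket of some size then they fit any strictly larger empty pocket, can be sketched abstractly. The dimensions below are hypothetical, and comparing volumes is of course a simplification of real coin-packing:

```python
# Abstract (non-empirical) reasoning about pocket sizes.
# All dimensions are hypothetical; we compare volumes only, which is a
# simplification of how coins actually pack.

def coins_fit(coin_volume: float, n_coins: int, pocket_dims: tuple) -> bool:
    """Return True if n coins of the given volume fit within the pocket's volume."""
    length, width, depth = pocket_dims
    return n_coins * coin_volume <= length * width * depth

experienced_pocket = (8.0, 7.0, 1.5)   # a pocket in which 5 coins were seen to fit
larger_pocket = (9.0, 8.0, 2.0)        # an unmeasured pocket, larger in every dimension
coin_volume = 1.2

# If 5 coins fit the experienced pocket, pure arithmetic guarantees they fit
# any pocket that is larger in every dimension: no new experience is required.
assert coins_fit(coin_volume, 5, experienced_pocket)
assert coins_fit(coin_volume, 5, larger_pocket)
print("5 coins fit both pockets")
```

    The monotonicity here is the "universality of mathematics" being appealed to: the conclusion about the larger pocket follows from reason alone, not from having observed that pocket.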

    Again, I'm not seeing how we need the word potential when stating, "Smith speculates that Jones has 5 coins in his pocket."

    Firstly, claiming "Smith speculates that Jones has 5 coins in his pocket" is completely different from claiming "Smith thinks it is possible for 5 coins to be in Jones' pocket". One is claiming there actually are 5 coins, whereas the other is claiming merely that 5 coins could be in his pocket. These are not the same claims. But notice that, within your terminology, Smith cannot claim it is "possible", "probable", or "irrational". Therefore, by process of elimination he is forced to use "speculation"; however, as I just explained, this does not represent what he is trying to claim: he is not necessarily claiming Jones actually has 5 coins in his pocket. Likewise, stating it as "Smith speculates that there could be 5 coins in Jones' pocket" is just to claim "possibility" in wordier terminology. Speculations are not just claims about "what could", as "could" is purely abstract consideration: speculations pertain to positive or negative claims with respect to what actually is (not what could be). That is why potentiality is a prerequisite to speculation: you must not be able to contradict, in the abstract, your claim about what is, as that would negate it, but, thereafter, you are necessarily making a claim about "reality".

    We have to clarify the claim a bit. Does Smith know that Jones' pocket is the correct size to fit five coins?

    Again, empirically speaking, he cannot claim "possibility" based off of a pure abstract consideration of sizes unless that pocket is the exact same size as that which has been experienced before.

    Is he saying he knows Jones' pocket is big enough to where it is possible to fit 5 coins?

    Again, this is only considered possible if pocket size X = Jones' pocket size, not if pocket size X > Jones' pocket size. But clearly (I think) we can still claim it is possible (just not under your terminology, therefore it has the potential).

    The epistemology is not telling Smith to do what he wants. The epistemology recognizes the reality that Smith can do whatever he wants.

    He can only do whatever he wants insofar as he doesn't contradict himself. If I can provide an argument that leads Smith to realize he is holding a contradiction, then he will not be able to hold it unless he uncontradicts it with some other reasoning. Therefore, if we can come up with a logical definition of "contexts", then I think we ought to. This is really the root of the problem with possibility and contexts: they are not clearly defined (as in, the subject gets to do whatever they want).

    We can somewhat resolve this if we consider "possibility", in the sense of "experiencing it once before", as "a deductively defined concept, with consideration to solely its essential properties, that has been experienced at least once before". That way, it is logically pinned to the essential properties of that concept. I may have the choice of deductively deciding concepts (terms), but I will not have as much free reign to choose what I've experienced before. To counter this would require the subject to come up with an alternative method that identifies equivalent objects in time (which cannot be logically done unless they consider the essential properties).
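    The definition proposed here, possibility as a deductively defined concept whose essential properties match at least one prior experience, can be given a toy formalization (the property names and sets below are hypothetical, purely for illustration):

```python
# Toy formalization of the proposed definition of possibility: a claim is
# "possible" iff some prior experience exhibits ALL of the concept's
# essential properties. Property names are hypothetical.

def is_possible(essential_properties: set, experiences: list) -> bool:
    """Possible iff at least one prior experience matches every essential property."""
    return any(essential_properties <= experience for experience in experiences)

experiences = [
    {"coin", "in-pocket"},   # once saw a coin in a pocket
    {"coin", "on-table"},    # once saw a coin on a table
]

# Experienced before, so "possible" under the proposed definition:
print(is_possible({"coin", "in-pocket"}, experiences))          # True
# Never experienced five coins in a pocket, so not "possible" (only potential):
print(is_possible({"coin", "in-pocket", "five"}, experiences))  # False
```

    The sketch makes the pinning explicit: once the essential properties are fixed, whether something counts as "possible" is no longer the subject's free choice; only the choice of essential properties remains open.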

    Although I am not entirely certain about contexts yet, I think I have distinguished two types: mereological context and temporal context. The former is what the subject typically deciphers as contextual structures of objects, whereas the latter is the summation of time up to the present. Therefore, in terms of temporal contexts, I can claim that I am in a particular context now, which is the summation of my knowledge up to the present moment, which influences my judgements. Therefrom, I can also posit the charity of considering temporal contexts in relation to people (including myself). For example, this is my justification for claiming I may contradict what was considered "true" today by new knowledge that is acquired tomorrow (and, likewise, for people who came historically before me).

    In terms of mereological contexts, there is an aspect of contexts that has no relation to temporal frameworks: the structures of objects. I can equally claim that what is known now in terms of an object, in relation to what is immediately seen, does not in any way contradict that which is supposed in terms of an underlying structure of that thing now (i.e. it can be a table and be much less distinctly a table at the atomic level). In summary, I can claim that contradictions do not arise in terms of time as well as structural levels. These are the only two aspects of contexts and, therefore, as of now, this is what I consider "context" to be. It is important to emphasize that I am not merely trying to advocate for my own interpretation of "context": I am trying to derive that which cannot be contradicted in terms of "context"--that which all subjects would be obliged to (in terms of underlying meaning; of course they could semantically refurbish it).

    The problem isn't the reality that anyone can choose any context they want.

    I think they can do whatever they want as long as they are not aware of a contradiction. Therefore, if I propose "context" as relating to temporal and mereological contexts, then they either are obliged to it or must be able to contradict my notion. My goal is to make it incredibly hard, assuming they grasp the argument, to deny it (if not impossible). Obviously they could simply not grasp it properly, but that doesn't negate the strength of the argument itself.

    The problem is that certain contexts aren't very helpful. Thus I think the problem is demonstrating how certain contexts aren't very useful.

    I agree: but what in the epistemology explicates "usefulness"?

    If Smith isn't claiming that Jones has 5 coins in his pocket, then he's speculating Jones could, or could not have 5 coins in his pocket.

    To say "speculate could" is to say it is "possible" in the colloquial sense of the term. Therefore, if we are using it that way, you have only semantically eradicated the ambiguity from "possibility". Otherwise, speculation cannot refer to "could", but only to what is.

    The purpose of the original paper was simply to establish how knowledge worked.

    Again, since you are defining "knowledge" strictly in the deductive sense (which I partly think is correct), then technically you have achieved your goal here. But, for the reader, I don't think it is quite accurate to say that the epistemology holistically covers all it should: we've merely semantically shifted the concern from "speculative knowledge" to "speculative beliefs".

    When you think of something in your head that you distinctively know is not able to be applied. For example, if I invent a unicorn that is not a material being. The definition has been formulated in such a manner that it can never be applied, because we can never interact with it.

    But you can apply the fact that you distinctively know that it cannot be applied without ever empirically applying it (nor could you). So you aren't wrong here, but that's not holistically what I mean by "apply to reality".

    In your opinion you do, but can you disagree in application? Based purely on this experiment, it's plausible that the Earth is round, and it's plausible that the distance calculated is the size of the Earth. The actual reality of the diameter of the Earth must be measured to applicably know it. You have to applicably show how the experiment shows the Earth is round and that exact size. The experiment was close, but it was not the actual size of the Earth once it was measured.

    I think you are conflating two completely separate claims: the spherical nature of the earth and the size of the earth. The stick-and-shadow experiment does not prove the size of the earth; it proves the spherical shape of the earth. You do not need to travel the whole planet to know the earth is spherical: the fact that sticks of the same length can cast different shadows at the same moment contradicts the notion that the earth is flat. It cannot be the case that the earth is flat given that.
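    The stick-and-shadow reasoning can be illustrated with a rough sketch. The numbers approximate Eratosthenes' classic setup and are illustrative rather than measured: under parallel sunlight, a flat plane predicts equal shadow angles for equal sticks, so differing angles contradict flatness, and the angle difference even scales a known distance up to a circumference estimate:

```python
# Eratosthenes-style reasoning with illustrative (not measured) numbers.
# Sunlight arrives in effectively parallel rays, so on a FLAT plane two equal
# vertical sticks would cast shadows at the SAME angle everywhere. Observing
# different shadow angles at the same moment contradicts flatness.
angle_syene = 0.0        # degrees: sun directly overhead, the stick casts no shadow
angle_alexandria = 7.2   # degrees: shadow angle at the same moment, farther north

assert angle_alexandria != angle_syene  # the flat-plane prediction fails

# On a sphere, the angle difference equals the arc between the two sites, so a
# known ground distance scales up to a full circumference.
distance_km = 800.0      # rough Syene-to-Alexandria distance (hypothetical here)
arc_fraction = (angle_alexandria - angle_syene) / 360.0
circumference_km = distance_km / arc_fraction
print(round(circumference_km))  # 40000
```

    Note the two claims come apart exactly as argued: the inequality of angles alone refutes flatness, while the circumference figure additionally depends on the measured ground distance.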

    It only undermines them if there are other alternatives in the hierarchy. If for example a scientific experiment speculates something that is not possible, it is more rational to continue to hold what is possible. That doesn't mean you can't explore the speculation to see if it does revoke what is currently known to be possible. It just means until you've seen the speculation through to its end, holding to the inductions of what is possible is more rational.

    I sort of agree, but am hesitant to say the least. Scientific theories are not simply that which is the most cogent; they are that which has been rigorously tested and has thereby passed a certain threshold to be considered "true". I think there is a difference (a vital one).

    No, you can distinctively know that a logically unobtainable idea is irrational to hold. A logic puzzle must be reasoned before it can be distinctively known. Only applying the rules in a logical manner gets you a result.

    I disagree. You do not need to empirically apply rules in a logical manner to get a result. I obtain knowledge that never leaves my head: the principle of noncontradiction, the principle of sufficient reason, the consideration of mathematics, space, time, causality, logical systems (such as classical logic), etc. What I think you are referring to is claims about what actually is vs. what actually can be: both are obtained knowledge. Likewise, not all "is" claims are proved empirically. Again, try to prove space without presupposing it in an empirical application.

    While we could invent a result in our heads to be anything

    This is not true; we are subjected to certain rules which are apodictically true for us. However, I do see your point that we don't "know" what is by what can be. Also, some things aren't just determined abstractly to be something that "can be"; we also determine things as necessary. I abstractly conclude the concept of space itself from its apodictic nature: this is not something that can be empirically tested--"tests" presuppose such.

    it fails when the rules of the logic puzzle are applied

    I agree in the sense that what is applied to the external world may end up exposing contradictions that we hadn't thought of, but this doesn't negate the fact that there is such a thing as non-empirically verified knowledge (abstractly determined knowledge).

    Can I clarify that I agree, but that people have varying capacities to reason?

    I agree, but when you say:

    Some people aren't very good at reasoning.

    I don't think we are using the term in the same sense. I don't mean what is rational, which is what we define inter-subjectively as a coherent form of reasoning. I am referring to that which necessarily occurs in all subjects, lest they not be a subject in any way related to me. To put it in a sentence (admittedly from Kant, although I don't holistically agree with him at all): I can believe whatever I want as long as I don't contradict myself. This is the grounding I am trying to subject epistemology to (to the best of my ability). You are absolutely right that people aren't very good at rationalizing, but when I refer to reason: we all have it.

    But it cannot convince a person who does not want to reason, or is swayed by emotion.

    The ability to act on emotion must first be decided by reason. Not to say it is rational, but it is always necessarily rooted in a reasoning. I agree with you, though, in terms of underlying meaning, but I am trying to emphasize that, once it is realized we are all reasoning beings, there is at least something to work with: something to ground in. That's all I am trying to say. But I think we are in agreement.

    Also, no worries! Enjoy your weekend!

    I look forward to hearing from you,
    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,

    Fantastic points! To keep this condensed into one response, I am going to try and address your points more generally (but let me know if there's anything I didn't properly address). Just as a side note, this is entirely my fault, as I was the one who double posted (:

    Firstly, I think some exposition into "potentiality" is probably necessary. In general, although I may just be misunderstanding you, I think that some of your concerns are perfectly warranted (and thus I will be trying to resolve them) and some are simply misunderstandings of what I mean by "potentiality". First off, potentiality is an abstract consideration. You seemed to be trying to apply potentiality distinctively and applicably (and finding issues with it): abstract considerations are always applications to reality. I don't think that "application to reality" is limited to empirical verifications: abstract considerations are perfectly reasonable (I think). For example:

    For example, if it is possible that a person who wakes up every day at 8 am could potentially wake up tomorrow at 8 am, that's a distinctive potential. But if unknown to us, they died five minutes prior to our prediction, there is no applicable potential anymore.

    I think this is a misunderstanding of potentiality. Firstly, what do you mean by distinctive potential? Anything that "isn't contradicted in the abstract" (assuming it isn't directly experienced as the contrary) is something that got applied to reality without contradiction. I might just be misremembering what "distinctive knowledge" is, but I am thinking of the differentiation within my head (my thoughts which haven't been applied yet to see if the contents hold). If that is the case, then potentiality can never be distinctive knowledge; it is the application of that distinctive knowledge in the abstract. If I have a belief that unicorns exist, I can abstractly verify whether it is "true" that I have a belief that unicorns exist. If I can't contradict the idea that I am having a belief that unicorns exist, then that is applicable knowledge (because I applied it to reality without contradiction).

    Secondly, the objection you are voicing also applies to possibility. If I have experienced person X get up at 8 am before, then I can say it is "possible" for X to get up at 8 am tomorrow morning. However, unbeknownst to me, they actually died today: therefore it isn't possible for them to get up at 8 am tomorrow. I don't see this as a flaw in potentiality or possibility, because it is not about what you don't know: it is about, contextually, what you do know. Let's take the same situation, for possibility and potentiality, but add you to the mix. Let's say that I don't know X died today, but you do. For me, it is most cogent to hold that X can "possibly" (and "potentially") wake up at 8 am tomorrow. For you, it is the exact contrary. The way I interpreted "no applicable potential anymore" is as something objective, which isn't what I am getting at with potentiality or possibility.

    However, I think you are right that potentiality seems to be consumed by other terms, but I'll get into that later on (I think we need to hash out some other, more fundamental things first). I've realized that, although your epistemology is great so far, it doesn't really address the bulk of what epistemologies address. This is because your epistemology, thus far, has addressed some glasses of water (possibility, probability, and irrational inductions), yet simply defined the whole ocean as "plausibility". Even with a separation of "inapplicable" and "applicable", I find that this still doesn't address a vast majority of "knowledge". So I don't think keeping a concise, one-word description of "speculations" is productive unless we dive into the subparts of that gigantic ocean.

    Now, with that in mind, I want to really explicate how narrow "possibility" truly is. I think it is, as of now, not clearly defined. Let's recall that possibility is "that which has been experienced at least once before". Now, let's dive into the example you gave about the coins:

    "Smith thinks Jones potentially has 5 coins in his pocket, but we the audience knows, that he does not (thus this is not an applicable potential)."

    Again, as a side note, the audience would claim it has no potential and Smith would claim it does (no contradiction here). But at a deeper level, imagine Smith has never experienced 5 coins in a pocket, but he has experienced coins before. Therefore, Smith cannot claim that it is "possible" for there to be 5 coins in Jones' pocket. He can speculate based on the possibility of coins and the abstract consideration that he can't contradict the idea that 5 coins could be in Jones' pocket. Therefore, his position is a possibility (coins) -> speculation (5 coins in pocket). What would he say? He can't say it is "possible". Normally, Smith would, in colloquial speech, have deemed this abstract speculation a "possibility", but now it seems as though he has been stripped of his words. Therefore, I introduced something back from the old word "possibility": the abstract consideration. He can claim "it is potentially the case that Jones has 5 coins in his pocket".

    But this can get weirder. Imagine Smith has experienced 5 coins in his own pocket, but not 5 coins in Jones' pocket: then he hasn't experienced it before. Therefore, it is still not a possibility; it just has the potential to occur. Now, I think we are both inclined to try and reconcile this with something along the lines of "contexts, bob, contexts". But what are "contexts"? If we allow Smith to decide what a context is, then it seems as though the epistemology is simply telling him to do whatever he wants (as long as he doesn't contradict himself). But then we could make this much, much weirder. Imagine Smith experienced 5 coins in Jones' pocket yesterday, but he hasn't today. Well, if the context revolves around time, then Smith still can't claim it is possible; it is only potentially the case. Likewise, Garry could have a location-based contextual system, where he's experienced 5 coins in Jones' pocket in location X, but Jones is now in location Y.

    Garry and Smith would agree that it is not "possible" (not to be confused with "impossibility") that Jones has 5 coins in his pocket--but for completely different reasons. Moreover, as you can imagine, without a clearly defined meaning of "context", Smith could claim it is "possible" while Garry claims it isn't. To take "experienced it at least once before" literally is to make possibility incredibly narrow; to take it loosely is to create a superficial boundary with no clear meaning (as of yet).

    Also, I would like to point out that it wouldn't really make sense for Smith, although it is a speculation, to merely answer the question with "I speculate he has 5 coins in his pocket", because Smith isn't necessarily claiming that Jones does have 5 coins; he is merely assessing the potentiality. Again, at a bare minimum, he would have to have experienced 5 coins in Jones' pocket before in order to claim it is possible. Most of the time we don't have that kind of oddly specific knowledge; therefore potentiality was born: it is a less strong form of possibility. It is to apply a concept to reality, in the abstract, without contradiction. Likewise, imagine Smith has experienced 4 coins in Jones' pocket, but not five. Then it also wouldn't be a possibility that Jones has 5 coins in his pocket: it would be an abstract consideration that is not contradicted.

    Furthermore, I would like to revisit the 8 am dead person example: just because they are dead, it isn't necessarily the case that it is impossible either. Let's say I heard from a trusted friend that they died today: I didn't experience their death. This would be an abstract consideration. Do I trust them? If I do, what logically follows? It logically follows that there's no potential for them to wake up tomorrow at 8 am. But notice that in doing so, I've necessarily revoked any "possibility" as well, though not on the basis of "impossibility".

    To sum it up, I think we need to clearly and concisely define "context", "possibility", "impossibility", and "potentiality". If I can make up whatever I want for "context", I could be so literally specific that there is no such thing as a repetitive context, or I could be so ambiguous that everything is possible. Then we are relying on "meaningfulness", or some other principle not described in your epistemology, to deter them from this. If so, then why not include it clearly in the epistemology?

    I had inapplicable plausibility defined as "that which we are unable to apply to reality at this time."

    I think that, in this sense, I agree. But originally it encompassed two senses: that which can't be applied right now, and that which never will be. The latter is irrational. The former may be rational in the sense that it isn't an irrational induction, but it isn't necessarily the case that it should be pursued either. It would merely be a speculative potential: specifically, given no further context, an incredible speculative potential. Which leads me to my next question: when you say "unable to apply", what do you mean? I think that if nothing can be applied at all, then it isn't worth pursuing. If you can't find any evidence for that concept or idea at all, why pursue it? The great inventors of the past, although they invented "crazy", "impossible" things, had some sort of evidence backing their speculations. They didn't tell themselves: "I am trying to discover a teapot 100 billion light years away in another galaxy, of which I have no evidence to support it is there, but I am going to incessantly keep trying anyways".

    For example, let us say that a man uses a stick and shadows to determine the Earth is round, and calculate the approximate circumference. The only way to applicably know, is to travel the world and measure your journey.

    I disagree. The journey across the world is not the only way to verify the spherical nature of the earth. The stick and shadows are just the beginning. One can find many more forms of scientific evidence (that don't require a round trip): it would be, given the kind of evidence it has, a "credible speculative potential".

    However, I do have my worries, like you, about even calling them "speculations": a lot of enormously backed scientific theories would be "credible speculative potentials", which seems to undermine them quite significantly. This is honestly the main issue with "plausibilities": it is really where epistemology mainly lies. It may be in our best interest to just dedicate more terminology, more explanations, towards speculations: there have to be further hierarchies within them. This is why, upon further reflection, although it is great so far, I don't think your epistemology really gets into any of the pressing dilemmas an epistemology is supposed to address. Now we must determine the thresholds of evidence that would constitute a scientific theory as significantly more reliable than, let's say, simply a man speculating with a stick and shadows (both of which could potentially be considered "credible speculative potentials"). Don't get me wrong, your epistemology does a splendid job at the fundamentals, especially in terms of inductions, but there's a lot of work that needs to be hashed out in terms of speculations.

    I believe irrational inductions should remain a contradiction with what is applicably known

    I disagree, if what you mean by "application" is empirical evidence. I am claiming potentiality is applicably known (always). I can applicably know, in the abstract, that a logically unobtainable idea is irrational to hold. For example, take an undetectable unicorn:

    1. A truth-apt claim is a claim that has the ability to be falsifiable (true or false).
    2. An undetectable unicorn is unfalsifiable.
    3. Therefore, an undetectable unicorn is not truth-apt (from 1 and 2).
    4. The pursual of a claim implies it is truth-apt.
    5. Therefore, an undetectable unicorn is not pursuable (from 3 and 4).
    6. Therefore, to attempt to pursue the idea of an undetectable unicorn leads to a contradiction: the pursual implies its truth-aptness, yet the claim itself is not truth-apt.
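    Just to show the logical skeleton is airtight, here is a minimal sketch of the argument in Lean. The names `TruthApt`, `Falsifiable`, and `Pursuable` are hypothetical placeholder propositions of my own, standing in for the claims above:

```lean
-- Placeholder propositions (hypothetical names, for illustration only)
-- standing in for the claims about the undetectable unicorn.
variable (TruthApt Falsifiable Pursuable : Prop)

-- Premise 1: a claim is truth-apt iff it is falsifiable.
-- Premise 2: the undetectable-unicorn claim is not falsifiable.
-- Premise 4: pursuing a claim implies it is truth-apt.
-- Conclusion (steps 5-6): the claim is not pursuable.
example
    (h1 : TruthApt ↔ Falsifiable)
    (h2 : ¬ Falsifiable)
    (h4 : Pursuable → TruthApt) :
    ¬ Pursuable :=
  fun hp => h2 (h1.mp (h4 hp))
```

    In other words, assuming pursuit (`hp`) yields truth-aptness via premise 4, hence falsifiability via premise 1, contradicting premise 2.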

    I have tried to avoid using the word "objective" within contextual differences, because I think there is something core to the idea of "objective" being something apart from the subject, or in this case, subjects. As you have noticed, there is a dissatisfaction if a person re-appropriates a word that is too far from our common vernacular. I believe a way to avoid this is to try to find the essential properties of the word that society has, and avoid adjusting those too much. In this case, I think objective should avoid anything that deals with the subject, as I believe that counters one of the essential properties that society considers in its current use of the word.

    Although you are right that I am refurbishing the term "objective", I think it is a step in the right direction. I think this is actually what people implicitly are doing when they say something is "objective": it is something they've deemed to be out of their control (an object). Some people will go a step further and claim there's actually an absolute something out there, which is separate from all subjects: this is a speculation that lacks potential. A color blind person, I think, will be more than happy to accept that what is objective for them isn't objective for other people. So, although I agree and you are right, I think society needs to stop making such bold, unnecessary claims that there's some sort of absolute instantiation of objects. It is something that is unfalsifiable.

    That the person decides to be rational. You can never force a person to be rational. You can persuade them, pressure them, and give them the opportunity to be, but you can never force them to be. Knowledge is a tool. Someone can always decide not to use a tool

    This is true. But I want to be careful with the term "rationality": I find too many people using it in an ambiguous way to justify their reasoning (without actually justifying it). For me, "rationality" is an inter-subjectively defined concept. Therefore, we are not all rational beings (like Kant thought), but we are all reasoning beings. My goal, in terms of epistemology, is to attempt to make the arguments based on reasoning, so as to make it virtually impossible for someone to deny them (if they have the capacity to understand the arguments). I agree that people don't have to be rational, but they are "reasonable" (just meaning "reasoning").

    I look forward to hearing from you,
    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,

    I apologize for the double post here, but I've had more time to think and wanted to share a bit more with you (so you can mull it over in your head).

    1. "Accidental Properties" should be "Unessential Properties". If I am remembering correctly, your epistemology utilizes the terms "essential" and "accidental" to refer to properties. However, although I understand the underlying meaning, I don't think "accidental" properly captures what is trying to be conveyed. The way I am thinking about it, there's nothing "accidental" about properties that may be deemed removable from the term. I would say those properties are "unessential", and they are predefined. If an "essential property" turns out to be something I deem unworthy of such a title, then that term is being fundamentally altered to mean something different (and not merely refurbished).

    For example, let's say I am defining "monitor" with the essential properties of ["displays things on a screen"] (where [] denotes a set). I think I am logically constrained to the following with consideration to object O:

    IF O lacks the potential to have had the essential properties necessary to be a monitor, then it is not a monitor. (i.e. in the abstract, if O lacks the necessary components, even in the sense of dysfunctional components, that would produce the essential property of displaying things on a screen, then it is not a "monitor")

    IF O has the potential to have had the essential properties necessary to be a monitor, then it is a monitor--a "dysfunctional monitor". (i.e. in the abstract, I can consider that O, given just a slightly torn wire or a completely empty wire port, would, if it were intact, have produced the essential property of displaying things).

    IF O has the essential properties, then it is a "monitor" ("functional monitor").
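    The three conditionals above can be sketched as a decision procedure, assuming we can represent "has the essential properties now" and "has the potential to have had them" as boolean judgments already made by the subject (the function and parameter names are my own hypothetical illustration):

```python
# A sketch of the three-way monitor classification described above.
def classify_monitor(has_essential_properties: bool, has_potential: bool) -> str:
    if has_essential_properties:
        # IF O has the essential properties, it is a functional monitor.
        return "functional monitor"
    if has_potential:
        # IF O merely has the potential to have had them (e.g. a torn
        # wire), it is still a monitor, just a dysfunctional one.
        return "dysfunctional monitor"
    # Otherwise O lacks even the potential, so it is not a monitor.
    return "not a monitor"
```

    Note that the first check takes priority: actually having the essential properties subsumes having had the potential for them.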

    The reason why this is of particular importance to me is that I was encountering essentially the issue of the Ship of Theseus again, but with doors. What makes a door a door? It doesn't seem like there is really, in colloquial speech, a clear line that is drawn (no real essential properties). Is it that it has a knob? No, doors need not have knobs. Is it that it has a rectangular shape? No. Does it need to open? No. Does it need to close? No. Does it need hinges? No. But then I realized, and I'm pretty sure you probably meant this when we previously discussed the ship paradox, that essential properties are exactly the same, in terms of arbitrariness, as unessential properties, except that they are determined to be the fundamental aspects of the term. Therefore, if an essential property turns out to not be essential, then what is actually happening is that the subject is completely disbanding that term and creating a whole new one (it is not a refurbishment; that can only occur with unessential properties).

    Therefore, I think each term must have at least one essential property, and that is the anchor, so to speak, of the term. So, for example, if I define a "door" as "that which can open", then nothing else matters (such as the shape, texture, color, material, etc). And if I decide that, actually, that essential property is no more, then so is the term "door". Now, there are two important things to note here: (1) I can most definitely still, after disbanding the term "door", define "door" again with different essential properties (it is just that it is no longer the same concept) and (2) the essential property, as previously defined, is constrained to potentiality (so even if a "door" won't open, that doesn't mean it hasn't qualified as "that which can open").

    Further, quite frequently when we say "that is a door", "that is a fake door", or the like, what we really are referring to is "likeness", which I consider to be only useful for anticipation purposes (strictly hypotheticals), and which is not actually assigned to the term "door". For example, given my previous essential property qualifier for "door", if I see an object that resembles all the unessential, stereotypical properties of a "door", I may be inclined to treat it as such--or, in the case that treating it as such produces no meaningful results, I may be inclined to define it as a "fake door". But my emphasis is that that which does not contain all the essential properties is not included in that term. So I would be inclined to say "it is like a door" when there is an object that lacks any potential to open but yet resembles a door.

    2. I think it is finally time to address "plausibilities". "Plausibility" typically means "Seemingly or apparently valid, likely, or acceptable; credible". I don't think this even remotely resembles what you are trying to convey in the epistemology and, although we could legitimately rebrand the term, I think it is in our best interest (or at least my best interest) to use more pertinent terminology. I hereby propose terminology more resembling "speculative potentials", which directly eliminates "credibility" and "likelihood" from the terminology (as I don't think either should be attributed to a "plausible induction"). Therefore, I think "plausibilities" are actually "speculative potentials". A "speculation" is "Reasoning based on inconclusive evidence; conjecture or supposition" and "potentiality" refers to "that which is not contradicted in the abstract". To say something is "plausible" is not, as you are probably well aware, to claim something based only on its having potential (it is weightier than that).

    Moreover, since "inapplicable plausibilities" have no potentiality (because they can be contradicted in the abstract: namely that they are not truth-apt, which contradicts the investigation of the claim in the first place), they will be hereby moved to "irrational inductions" and, most importantly, the terminology would now reflect that concisely and clearly ("speculative potential" directly explicates that it necessarily involves potentiality).

    Likewise, there needs to be some subcategories of "speculative potentials", for they are all not equal claims (potentiality is quite a low bar to pass). I hereby propose we separate it as follows:

    Divide "speculative potentials" into two subgroups: "considerable speculative potentials" and "inconsiderable speculative potentials". "Considerable" being defined as that which is worthy of consideration, which would be constituted by "a speculation, that has potential, that provides some form of negative and/or positive evidence beyond its mere potentiality". "Inconsiderable" is simply "that which has not provided anything beyond its potentiality as a basis of evidence".

    Now, it will have to probably be voiced in greater depth in a subsequent post, but I would like to briefly point out that I would like to also refrain from accepting "inconsiderable speculative potentials".

    Within "considerable speculative potentials", we can split it further into two subcategories: "credible speculative potentials" and "incredible speculative potentials". "Credible" being defined as "that which, upon consideration, (1) passes a threshold as defined in an axiomatic contract, (2) abides by a well defined and coherent logical system, or/and (3) directly abides by the principle of noncontradiction". Anything that doesn't constitute as "credible" is thereby "incredible".
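    To make sure I am being clear, the whole proposed hierarchy can be sketched as a decision procedure, assuming the three criteria can be represented as boolean judgments (the function and parameter names are my own hypothetical illustration of the definitions above):

```python
# A sketch of the proposed hierarchy of speculative potentials.
def classify_induction(has_potential: bool,
                       evidence_beyond_potential: bool,
                       passes_credibility_criteria: bool) -> str:
    if not has_potential:
        # Contradicted in the abstract: moved to irrational inductions.
        return "irrational induction"
    if not evidence_beyond_potential:
        # Nothing beyond mere potentiality as a basis of evidence.
        return "inconsiderable speculative potential"
    if passes_credibility_criteria:
        # Passes an axiomatic-contract threshold, abides by a coherent
        # logical system, and/or the principle of noncontradiction.
        return "credible speculative potential"
    return "incredible speculative potential"
```

    The ordering of the checks mirrors the hierarchy: potentiality is the entry bar, considerability the second, and credibility the third.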

    3. I am still not sure if I am right in trying to logically tie the subject down to avoid deadlocks (as discussed in the previous post), but I have thought of a starting point. Firstly, in order for something to be a "societal context", there must be some sort of inter-subjective or inter-objective agreement. If not, then it is not a "societal context"--and is thereby a "personal context". This cannot be contradicted, as it is a deduced term. Secondly, the subject can hold a subjective claim and its inter-subjective converse without contradiction. Likewise, the subject can hold an objective claim and its inter-objective converse without contradiction.

    My initial flaw, I think, in my contemplation of societal context deadlocks was my fundamentally viewing it all as "objective". However, I think we can split it into two meaningful terms: "objective" and "inter-objective". "Objectivity" is "that which the subject considers object in relation to itself", whereas "inter-objectivity" is "that which is agreed upon, by a collective of subjects, as the object in relation to themselves as a shared experience". For example, when a red-green colorblind and a non-colorblind person fundamentally disagree (thus seemingly at a deadlock), they are actually disagreeing "objectively", but not necessarily "inter-objectively". The colorblind person could very well hold that it is "objectively" "true" that they are seeing green, while also holding that it is an "inter-objective" fact that what they are seeing is red--meaning they accept that it would be a contradiction for them to claim that it is green for the majority of people, but, nevertheless, it is not a contradiction to apply it to reality for themselves.

    To keep it brief, I think that "inter-objectivity", just like "inter-subjectivity", is a complicated subject that isn't merely "the majority deem what is inter-objective". No, I think it pertains more to a power dynamic, which, in more representational government systems, tends to end up being the majority deeming it so. But that is for a later discussion. My main point here is that someone could reject someone else's claim at the "objective" or "subjective" level, but not be able to do so with respect to the inter-levels. I can apply to reality without contradiction that I value this particular loaf of bread at $100,000 (or pounds or pesos, whatever you fancy), but I cannot apply without contradiction the claim that that loaf of bread is valued inter-subjectively at $100,000 (it's probably not).

    Now, none of the aforementioned completely solves anything, but I thought I would get it on your radar so you can mull it over too.

    I look forward to hearing from you,
    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,

    Don't worry, I am enjoying myself in these conversations. That being said, if you tire of them, feel free to let me know without any guilt or worry. I would like you to enjoy them as well, and not feel forced or pressured to continue.

    Likewise, I thoroughly enjoy our conversations! I have a lot of respect for how well thought out your positions are! I don't think enough people on this forum give you the credit you are due! I just wanted to make sure that you are just as intrigued by this conversation as me (:

    Moreover, I agree: I think our different outlooks on the "fundamental" is trivial enough, to say the least. I think it is time to continue to different aspects of the epistemology.

    The main objection, or more like issue, that I am internally thinking about pertains to the ambiguity, or almost incredibly limited scope, of what is covered in the epistemology as is. Again, as always, I may just be misunderstanding (you tell me!), but, although the epistemology is rock solid hitherto, it doesn't really provide a concrete structure for societal contexts (I would say--or at least that's how my internally raised dilemma goes). In light of your Chinese poem example (which is a great find, by the way!), I don't think I need to go too in depth about what I mean by ambiguity with respect to defining (more like creating) terminology. Just as a quick example, in the abstract, I can legitimately assign essential properties X, Y, Z and (distinctly different) essential properties A and B to the same term. So when I refer to that term, it could be in relation to either one of those two essential property sets (so to speak), and there is no contradiction here to be found: ambiguity is not a contradiction (in the form of A is A and not A).

    Although I think we both agree that the definitions that provide the most clarity should prevail, my dilemma is: "what justification do I have for that?". What in the epistemology restricts the other person from simply disagreeing? I found nothing stopping them from doing so. That is a worry for me, as it seems like, if I follow the trajectory of the epistemology in this manner, we end up with incomprehensible amounts of deadlocks (stalemates).

    I actually think I have come up with a solution to this. I think that the subscription to the pon actually provides more rigidity than I originally thought. I think that we can clearly argue that ambiguity is actually wrong (or, more specifically, that the clearest definition is right) if the individual subscribes to pon. They cannot hold both. The argument, loosely, is as follows:

    1. Ambiguity does not represent experience in the most clarifying manner.
    2. Every "thought" the subject has is motivated towards acquiring an explanation.
    3. The explanation that provides the most clarity for the subject becomes the explanation they accept. ("most clarity" being what they cognitively decipher as such, I'm not saying it is with respect to other subjects)
    4. Defining ambiguously contradicts providing the most clarity.
    5. Therefore, if a less ambiguous definition is provided (that they also consider less ambiguous), it must be accepted by the subject.

    In my thinking, very premature I do admit, I think that even to provide a counter to this would be an attempt to provide a better clarifying explanation (conclusion); therefore it is self-defeating to reject this given pon. But, to dive in deeper:

    #1 This is based off of pon: "ambiguity" is defined as the contrary to that which provides clarity. Therefore, to reject this, I think one would be obligated to reject pon.

    #2 This is also based off of pon: I don't think this can be contradicted. Conclusions of any kind are an explanation. The sole purpose of a question is a "thought" driven towards the goal of explanation. Even to say "it just is", or anything like that which provides no real explanation, is still an explanation--in a generic sense. A statement, blunt and without a question, is still an explanation. I don't mean "explanation" in the academic sense of "sufficient". Therefore, I think they would be obliged to reject pon in order to reject this.

    #3 Any attempt to counter this is implicitly trying to provide a better explanation than my proposition here, so even in the case they reject this, their rejection is quite literally them accepting the explanation (counter) they deemed to provide better clarity. Therefore, this cannot be contradicted.

    #4 This is honestly just a reiteration of #1. I'm not sure if it is even needed.

    #5 Again, even if they reject this, they would be acting it out implicitly, therefore it cannot be contradicted.

    Therefore, I think this argument conforms to a specific protocol, so to speak, which is simply use of pon. The only thing they must accept is pon to be obligated to accept that ambiguity is actually wrong. I can actually tell that person they are wrong even within their own context IFF they accept pon. That is our common ground.

    According to this kind of pon argument anchoring (where they must choose to either wholeheartedly accept or reject pon), I think we could most definitely add principles like these (as long as they conform properly) to the epistemology and, thereby, provide a stronger, more structured system for people to abide by.

    Likewise, I was wondering: "couldn't the other person just reject possibility (or some other induction hierarchy) as more cogent than plausibility (or some other induction)?". I think, as is, although you argue just fine for it, they could. They could utilize the most basic discrete and applicable knowledge principles in your epistemology to reject the hierarchy without contradiction. However, I think I can provide yet another pon anchored argument that forces them to either accept or reject the pon:

    1. Anything you experience requires a conclusion.
    2. Therefore, in order to concede objects, the subject is required.
    3. Therefore, that which is closer to immediate experience the subject can be more sure of.
    4. Therefore, possibility (as defined in epistemology) is more cogent than plausibility (ditto) because it is closer to the subject's immediate experience.

    This is just a raw rough draft, and it definitely could use some better terminology, but I think you get the general idea:

    #1 This cannot be contradicted. It would require a conclusion.
    #2 Just a specific elaboration of #1
    #3 Must reject 1 in order to reject this, which cannot be contradicted.
    #4 This logically follows. They would have to reject pon in #1.

    I think this kind of pon anchoring could really expand the epistemology with respect to a lot of other principles the subject would be bound to (unless they reject pon). Let me know what you think.

    The second idea I have been thinking of, to state it briefly, is what I call "axiomatic contracts". What I mean is that, in the case that something isn't strictly (rigidly) pon anchored, two subjects could still anchor it to pon with respect to an agreed upon axiom. For example, although my previous argument is much stronger (I would say), we could also legitimately ban ambiguity IFF the other subject agrees to the axiom that they want to convey their meaning to me. With that axiom in mind, thereby signing an "axiomatic contract", they would be obligated to provide as much clarity as possible, otherwise they would be violating that "axiomatic contract" by means of violating the pon. In other words, they would be contradicting the agreed upon axiom, which would, in turn, violate the contract. Just some food for thought!

    I look forward to hearing from you,
    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,

    Let me start off with a concession: you are 100% correct here. I apologize for the confusion; I am currently slapping myself upon the head!

    I most definitely have to utilize the principle of non-contradiction (pon for short) to claim anything. To claim that there was a manifestation, or that I am currently having one, requires pon. Therefore, I would argue that it all starts with pon.

    However, most notably, I don't think pon and "discrete experience" are synonymous with one another: the latter is formed from the former. I think there is actually something in between the two, so to speak: the realization of the manifestations themselves. Let me put it forth bluntly:

    1. Starts with pon
    2. Utilizes pon to immediately realize the "thoughts" themselves.
    3. Utilizes those thoughts to realize that I "discretely experience"

    Therefore, my only adjustment is the insertion of #2. Again, I apologize for the confusion, the missing piece for me was realizing I am utilizing pon to get to 2, so I was just starting with 2.

    I would also like to respond to your elaboration on "fundamental":

    Using a higher level concept to discover a lower level concept does not mean the higher level concept is more fundamental than the lower level concept.

    The converse is exactly what I am saying. This is why I was using the egg and chicken analogy so much. This is also why I identified two types of chronological derivation (two types of derivations in terms of fundamentals).

    We only discovered atoms because of science that was not based upon understanding atoms. Does that mean that the science that discovered atoms, is more fundamental than the atoms themselves?

    Yes. This is exactly what I am trying to point out. There are two kinds of derivation, and I find people typically focusing on only one of the two. One sense of "fundamental" is the highest level thing from which all else is derived; the other sense is what that highest level thing concludes to be the lower levels. Likewise, it is only the "highest level" thing in relation to the latter form of "fundamental", where it concluded there are lower level things, so to speak, than itself or than what it is discretely experiencing. Moreover, it is the lowest level in relation to the former kind of "fundamental": everything is contingent on it. But that doesn't mean it controls everything, or that it isn't a fair point to conclude other contingencies in terms of other objects. It is fair to conclude tables are contingent on atoms, but both are contingent on the subject insofar as they may or may not be there absent the subject. Likewise, the atom is more fundamental than the table in the one sense, but less fundamental in the other.

    The microscope used to see a germ is more fundamental than the germ itself insofar as the germ is necessarily contingent on such a tool for its discovery. It may very well be, 1000 years from now, that a much better tool we come up with renders our previous view of germs obsolete (not saying it definitely will, but it is possible). That microscope, which you can immediately see for yourself, is a much more concrete, sure fact than anything it produces for you to see. Likewise, although this may never happen either, we may, in 2000 years, determine that our view of atoms was completely off. The "atom", conceptually, is contingent on more "fundamental", "higher level" objects we use to discover it, and those could very well "undiscover" it, if you will. Furthermore, this isn't to say we don't consider the "atom", conceptually, as more "fundamental" than the table; it is just with the careful consideration that they are both fundamentally contingent on one another in two different regards. Does that make sense?

    I do not mean a fundamental as a means of chronological use. I mean its smallest constituent parts.

    Firstly, you are 100% correct in your inference that I am using "fundamental" in a totally different way, as previously described. However, with respect to "discrete experience", I don't see how you are using "fundamental" in the sense of "smallest constituent parts". "Differentiation" is not the smallest part. Just as it was posited that the scientific tool utilized to discover atoms is not more fundamental than the atom itself, differentiation is not more fundamental than the atoms that are discovered therefrom. I am probably just misunderstanding you, but if the goal is to use the smallest constituent parts, then you would have to derive back to a quark or something. Differentiation is fundamental in my sense of the term: it is the scientific tool used to discover the item (analogously, of course, not literally a scientific tool). In that case, it is pon.

    But there is one assertion which cannot be countered. There is discrete experience. I am a discrete experiencer.

    I would like to agree, but also emphasize pon -> thoughts -> discrete experiencer. You first must be convinced of the thoughts themselves to then conclude you are a discrete experiencer.

    It doesn't matter that I used thoughts, language, and my brain to discover that I discretely experience.

    I am hesitant here. There's a difference between the thoughts themselves, as immediately known via pon, and those thoughts concluding that they are being produced by a brain. Same with language. I am not trying to restrict this to internal monologue. You must necessarily "know" your thoughts, via pon, before you can conclude you discretely experience. I am not referring to any inferences as to where the thoughts themselves, or the use of pon, are coming from. I would say the fundamental is pon (after further contemplation and a couple slaps to the face).

    As I definitely overcomplicated this into a much longer discussion than it needed to be, although I am more than happy to continue the conversation, I don't want to squander any of your time. So, I will leave it up to you if you would like to terminate our conversation now, or continue the quest. I have much more to say pertaining to the ambiguity that worries me within your epistemology. It seems as though I really can define whatever I want, because "meaningfulness" is nowhere to be concretely found in your epistemology. There's a lot one can do without violating pon. Likewise, I can quite literally define two unique sets of essential properties under the same name without contradiction: there's nothing in your epistemology stopping me from doing this. But, again, I will only continue down that road if you would still like to continue our conversation.

    I look forward to hearing from you,
    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,

    I am also sorry that I did not tackle a few of your points within the envelope arguments that I think had merit. It is just that, in doing so, I think it would have presented confusion because of the flawed premises within the envelope argument they were tied with.

    No problem! I do think that you aren't quite following what I am trying to convey. So I am going to keep this response incredibly brief so you can fire back with your thoughts (without having to decipher a long reply).

    I think that, although we both have good intentions, we are mostly talking over each other. I feel like I followed everything you stated in your last post, and mostly agree, but I must be missing something as well. When you state:

    The question is, "Can you come up with something more fundamental that you can distinctively and applicably know, prior to being able to discretely experience?"

    I think this is missing my point, as it is framed in a way where it is impossible for me to do so: "distinctively and applicably know" is within the discrete experience "framework", so to speak. And, as far as I am understanding you, this coincides quite nicely with your view of discrete experience as something which I cannot possibly counter with a more fundamental.

    For all intents and purposes, I am going to simplify my "conceptualization" to "thoughts". I think, as far as I understand your point of view, you are viewing it like this:

    "discrete experience" -> "subject" -> *

    Where '*' is just a placeholder that can be filled with nothing or something else (objects). Whereas I am viewing it like:

    "discrete experience" <- "subject" -> *

    When you state that it starts (at the fundamental) with "discrete experience", I am thinking, from my point of view, that that is a "thought". You are "thinking" that everything requires "differentiation", "a discrete", which is where, I would argue, it starts. Even when you state (rightfully so) that "thinking" is a process of discrete experience: that is a "thought". So even if we go with:

    "discrete experience" -> "thought" -> *

    I am viewing it as:

    "discrete experience" <- "thought" -> *

    Obviously, there are many problematic issues with substituting "thought" with "subject", but I am just trying to convey the bare bone difference between us (stripping away everything else). From my view, you cannot claim "discrete experience is the fundamental" without "thinking it", where "thought" is the fundamental. This is why I think we are deriving in two completely different senses of the term. This is the challenge: you are not starting with a "discrete experience", you are starting with a "thought". The "thought" which states that thoughts itself are "discrete experiences", etc.

    I will let you fire back with what you think. I think this is the bare bone difference between us.
    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,

    I think we are both struggling here to convey each other's intentions.

    I agree. Furthermore, I really appreciate your elaboration as I now understand better where exactly you are coming from. Likewise, I will do my best to keep this response concise and aimed at conveying my point of view.

    Now imagine that everything you do, thoughts, feelings, light, sound, etc, are the light that streams in from a lens. You don't comprehend anything but the light. The sea of existence. But then, you do. You are able to separate that "light" into sound and sight.

    I am understanding this as what is typically, scientifically speaking, considered "sensations". Am I correct?

    Technically, this is the brain. If you had no brain, all the pulses from your eardrums and the light hitting the back of your eyes would mean nothing. The brain takes that mess of light, and creates difference within it.

    This seems, generally speaking, to be "sensations" vs "perceptions"--the former being the raw input and the latter being the interpretation (in your case, I think you are stating "differences" rather than "interpretations", but I think they essentially convey the same thing here). My point is that, although you are right in everything you have said, this is all obtained knowledge pertaining to how you derived yourself (or how you, thereafter, derived someone else in relation to themselves). This is the chicken figuring out it came from an egg (or the chicken concluding another chicken must have come from an egg).

    Maybe instead of calling it "extrapolated chronological precedence" we could call it simply "that which was obtained or determined to be true regarding what must precede itself" (itself being the "subject"). This is contrary to "just chronological precedence", which maybe we could call simply "that which is deriving, or that which is required for the consideration in the first place". The chicken derives that it came from an egg: that derivation requires the chicken in the first place. It could very well be that, even granting it makes the most logical sense (or may even be considered necessary) that the chicken came from the egg, this is all a formulation of that chicken. What if this "truth", that it must come from the egg, is simply a product of cognition? What if it is a product of the chicken's ability to differentiate things (and not "objectively" known, or absolutely a part of the "universe")? What if it is only necessary in relation to itself?

    When we analyze a brain, it is an interpretation of a brain via a brain. Therefore, you will only know as much as is allowed via your brain's interpretation of the brain it is analyzing. Although I don't like putting it in those terms (because I am utilizing what I am criticizing to even put this forth), maybe that will make more sense (I'm not sure). Do you think it must necessarily be the case that it comes from the brain, or that it must necessarily be the case in relation to itself? I agree with the science you are invoking here (no problem), but hopefully my proposition here is making a bit of sense.

    What are the essential properties of a manifestation? If its not a discrete experience, can you explain what makes it different?

    Although your definitions are all splendid, I don't think your use of "manifestation" quite fits what I am trying to convey. I was using "manifestation" and "conceptualization" interchangeably (I apologize for the confusion there). For all intents and purposes right now, I am going to try to explain it in terms of "conceptualization" by means of a poor analogy.

    Imagine that an envelope (mail) pops into existence out of thin air into your hands periodically. You don't know the contents of the mail initially or where they came from or how they came to be, but you open them. In each envelope, which occur in succession to one another (and only once you read the one currently in your hand), there is a message that you read. Imagine you (1) necessarily always participate in this periodic reading of the contents of envelopes and (2) that you are always immediately convinced of the contents that you read. This is essentially how I view you (for all intents and purposes right now: "thought"). Let's take it for a spin.

    Let's take your pink elephant example. When you say you had a basic discrete experience of a pink elephant, I am going to map that to an envelope, of which you have no clue where it came from or how it came to be, which you necessarily opened and read--immediately convinced of its contents: the discrete experience of the pink elephant. Now, as the envelopes are in succession of one another, you are unable to continue until the next envelope pops into your hands and you can be convinced of its contents. Therefore, when you say you could (1) be in doubt that what you manifested was a pink elephant or (2) apply a tool of knowledge to determine whether you did in fact manifest it, these both (whichever occurs) would be the next envelope's contents (or an envelope simply after that envelope). So, for example:

    **discrete experience of "pink elephant"**
    envelope 1: I just discretely experienced a pink elephant [convinced]
    envelope 2: did "I" really just discretely experience a "pink elephant" [convinced you are doubting envelope 1]
    envelope 3: "I know I" discretely experienced a "pink elephant" because I can apply it without contradiction to reality [convinced]

    Notice the succession of envelopes and how you cannot (in this hypothetical) be convinced without reading an envelope. Now, this would mean that even if 600 million years or 2 seconds go by between manifestations of these envelopes (between when you get convinced via reading one), for you that time would never have occurred. If envelope 2 was read 600 million years after envelope 1, it would be no different for you than reading it 10 seconds after the other. Notice that the **discrete experience** is not "known" (or maybe "recognized" is a better word?) until envelope 1, not at the point of discrete experience. If envelope 1 never occurred, then the discrete experience "would have never occurred". Envelope 1 is what enabled you to even logically consider the discrete experience in the first place. When I say "it never would have occurred", I mean in the sense that even if, hypothetically, it is still objectively (or absolutely) occurring without the envelopes, it would be completely unverifiable and thereby meaningless to the subject.

    The conceptualization I am referring to is (more or less) the envelopes: a concept, manifested in the same essential manner, of which one is immediately convinced. Even if I read an envelope, get convinced of it, and then immediately in the next envelope am unconvinced of the previous one, this process still occurred. Also notice that the correspondence, so to speak, of each envelope is necessarily offset by one: envelope > n can pertain to envelope n, but n cannot pertain to n. For example:

    envelope 1: I just discretely experienced a pink elephant
    envelope 2: I was convinced I was discretely experiencing a pink elephant when I read envelope 1

    Notice that the convincement that one was convinced during envelope 1 occurs, at a minimum, at n + 1 and cannot occur at n (at 1 in this case). Likewise:

    envelope 1: I am convinced of this very sentence right now as I say it

    I have not solidified, so to speak, that very assertion until an envelope > 1 pops up with a message pertaining to it. In other, more confusing, words: I am immediately convinced of "I am convinced of this very sentence right now as I say it", but necessarily not immediately convinced of my convincement of that very statement until (if at all) envelope > n.

    Last thing to briefly elaborate on: if this is the case, then how would the reader get convinced of the envelope process? Wouldn't this also be an extrapolation of some sort--the 'egg' of my analogy, so to speak? I think not. Although envelopes can only pertain to each other in chronological order (> n pertains to n), and therefore I would be using the contents of those envelopes to verify the process itself in a logical manner (which is an extrapolation of some sort: a use of the logic to derive the logic), I am not basing the argument (or at least not trying to) on the process of the envelopes as extrapolated, but on the form of the envelopes themselves. In other words, by means of the contents of the envelopes, all of these previous and continually manifesting envelopes assume the same form--that is, something of which I am immediately convinced (and of which I can equally become unconvinced later on). The form is the concept in a pure sense (I like your definition as well, but notice that yours, as you rightly point out, is within discrete experience, whereas the convincement of these envelopes, I would argue, is not). It is also important to note that the "convincement" I am referring to is not necessarily in terms of an envelope that explicitly contains "I am convinced of X", for that very statement is immediately convincing you of X and not of itself. So when I "prove" conceptualization, I am merely reading the contents of envelopes, and I assert that I was immediately convinced of the content of envelope n by means of another envelope > n--and, in turn, everything in this general sense is a conceptualization (an envelope). I cannot break this immediate conceptualization loop that seems to occur ad infinitum.

    Likewise, when we talk about differentiation, I agree with your definition, but when we provide any logic or illogic, rationale or irrationality, absurdity, etc., we are doing so in the manner of reading envelopes that we are immediately convinced of, and can most definitely become unconvinced of later. Therefore, what you said pertaining to discrete experience is true, but the whole argument, including differentiation in the sense of experience being discrete, is a succession of envelopes. Actually, I would rephrase it as "is a succession of envelopes without conceding a succession of envelopes beyond the reading of the envelopes themselves". But I think that may be confusing (not sure).

    The manifestation itself is not contradicted by reality.

    So, to keep this as fundamental as I think possible: the idea of "contradiction" is read via an envelope. However, the important aspect that makes it "special", so to speak, is that the necessity of the principle of noncontradiction can later be the contents of another envelope and, most importantly, every envelope that manifests pertaining to it will assert the very same thing. This is why it is an axiom: you cannot apply the principle of noncontradiction to itself, because that always leads to the use of it in the assertion.

    I can also differentiate the pink elephant manifestation from a grey elephant manifestation. "This" is not "that". Finally, I can start conceptualizing that I will call both "elephants" and one is "pink" while the other is "grey".

    What I am trying to get at is more fundamental than this: the differentiation of "this" is not "that", and the conceptualizing (in your use of the term) of "elephants" and "pink" and "grey", are both contents of an envelope (or several). They take the necessary form of an idea popping into existence, so to speak--manifesting--of which one is immediately convinced. I think your use of the terms, within discrete experience, is fine though.

    But your introduction of more identities does not introduce the idea of "implicit knowledge". One cannot have knowledge, without following the process of knowledge. If one follows the process of knowledge without knowing they are, that is accidental knowledge, not implicit.

    So there are two aspects needing to be addressed here. One aspect, which was my initial intention for the term "implicit", is simply the acknowledgment that, once we say we "know" something, we may induce that the thing we now know was occurring the whole time prior to us knowing it (in light of us knowing it). This isn't to say that, prior to us knowing it, we knew it. Just that, for example, when we do say "we know that differentiation necessarily occurs", we extrapolate it as occurring prior to when we even knew that. It is "implicit", with respect to this first aspect, in the sense that we are claiming differentiation was occurring, implicitly without our recognition, the whole while prior to our recognition of it. I think my concatenation of "implicit" with "knowledge" was confusing and wrong, so I apologize. My point was that we don't "know" it until we conceptualize it (until it pops up in an envelope). If we had never conceptualized it, it would have been as if it never existed (it very well could have never existed). I think, now in hindsight, this is more or less what you meant by "accidental", but this leads me to the second aspect: the conversation ended up, somewhere along the way, morphing into whether one can "know" something without applying a tool to it (this is separate from my initial intention for the term "implicit"). In this sense, although I don't think "implicit" is the best word, I meant that the envelope itself is a given, without conceding a giver, in the sense that any derivation of a giver would, well, be a derivation, which is derived from the content of the envelopes. This aspect, admittedly, isn't really "implicit"; it is "manifested", or "given", or something. For all intents and purposes, this:

    "how do I know of my previous envelope I read?" -> "because I remember reading it"

    and this:

    "how do I know of my previous envelope I read?" -> "because I can apply that belief to reality without contradiction"

    Are of the same form. This form, this conceptualization, is the most fundamental in terms of "that which is deriving, or that which is required for the consideration in the first place". On a separate note, I would even argue (and the argument itself was read from envelopes) that there is a difference between applying A to B within "reality" without contradiction, applying A to A within "reality" without contradiction, and applying "reality" to "reality" without contradiction. I think your use of "without contradiction" utilizes the latter (with respect to immediately "known" things). Technically you are right: I can't contradict that I read previous envelope n, but how could I contradict it? How do I apply reality to reality? How do I pass a test through itself to see if it passes? My point is that it is impossible. Imagine you forgot that you read envelope n; then you wouldn't be applying anything in the first place: it would not become a consideration until an envelope > n pops up with a manifestation of that consideration. If an envelope pops up with a manifestation about whether a previous envelope occurred, and it is followed by another envelope that concludes you did read it, then you did. Likewise, if we were to postulate that an envelope manifests asserting you were on drugs while envelope n's contents were being read, thereby questioning whether envelope n is "objectively true", the fact that envelope n occurred is necessarily solidified as true regardless of whether it is "objectively true".

    By example of yet another poor analogy, imagine our tool for determining motion was based off of a specific train, T, which is continually moving at a constant speed. Everything we characterize as "moving" or "not moving" (or any consideration of "how fast" or "how slow") is relative to T. I am having a hard time understanding how we aren't, when trying to apply an envelope succession to itself "without contradiction", trying to determine whether T is moving. T is the standard; it is that which springs the very notion of "movement". When we apply A to B, or even A to A, relative to reality ("to reality"), we can determine whether there is a contradiction; however, when we apply "reality to reality" I don't see how we are actually performing any "applying", just like trying to "apply" T to T relative to T to see if T is moving.

    Perhaps the ant follows a process with its manifestations to know that sugar is edible, while dirt is not. And perhaps that process, is the process of knowledge put forth. But can the ant "know that it has knowledge"? With our current understanding of ant intellect, no.

    I'm thinking now in terms of "accidental knowledge", as you put it. My point is that the "accidental" or "unaccidental" knowledge we deem the ant to have has no bearing on what it has in relation to itself. It may be the most logical thing for us to deem, but that has no impact on whether it knows anything. So I think you are right, but with the careful consideration that this is in relation to ourselves, not in relation to the ant itself.

    How do you know that what is manifested is knowledge? Without a process of knowledge, you don't.

    To keep this brief, there are two ways of looking at this. One is that the envelope succession is a loop the subject cannot break. The other is that the envelope succession is what manifests any tool of knowledge we can come up with, and the process of acting out that tool, which necessarily means we must "know" those manifestations are "true" (i.e., be immediately convinced of them) in order to do either of the two aforementioned.

    Likewise, I would argue certain aspects of this envelope process are necessary in the sense that the contents of the envelopes always conform to a specific convincement, such as the principle of noncontradiction.

    But building off of that, in terms of a tool of knowledge, I think we can also prove we have "implicit" knowledge in the sense of exactly what you were depicting with light and the brain: your brain, or to be more specific you as the subject, conforms to specific motivations which require "knowledge" in that sense. But this would be getting into the "why" of discrete experience--nevertheless, this is an innate form of knowledge that we obtain via the tool of knowledge (i.e. your "tool of knowledge" would never have been created in the first place if you didn't have some sort of motive to differentiate). I don't think we are in any disagreement here, as the claim that there is "implicit", innate knowledge in the first place would itself have to be obtained via a tool of knowledge.

    I will stop here for now: hopefully this exposes a bit better what I am trying to say.

    I look forward to hearing from you,
    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,

    A pine tree and an oak tree are different trees. But they are still trees. Discrete experience is a tree. Differentiation an oak tree. Conceptualization is a pine tree. At the end of the day, they are both trees.

    If I am understanding your analogy correctly, then I would say (1) you are agreeing with me that discrete experience is not synonymous with differentiation (the oak tree is derived from a tree) and (2), with respect to what I am attempting to convey, conceptualization would be a tree (not a pine tree). With respect to 2, this leads me to agree that we are essentially in agreement with one another; however, I am hesitant to completely agree, as 1 directly entails to me that the fundamental is not differentiating "this" from "that", which I generally think your epistemology begins with (that and the principle of noncontradiction). When you say "tree" in this analogy, I am arguing it is specifically not differentiation: it is the point of manifestation (and I think there is a difference). When I read your essays, "discrete" in "discrete experience" tended to imply that differentiation is the tree: maybe I just misunderstood you.

    For a certain context, identifying types of trees is not important.

    I agree, but I wouldn't construe conceptualization as a pine tree; it would be the tree. Most notably, it would not be anything directly pertaining to a "discrete" anything.

    And this is what I'm noting with differentiation and conceptualization. They are both still at their core, discrete experiences.

    Again, I am interpreting this as (1) agreeing that discrete experience is not differentiation directly (in that case, why use the term "discrete" if not to imply differentiation as a part of the fundamental?) and (2) I think the ability, or act, of discretely (in the sense of differentiation) experiencing things comes after the experience itself. You first have a manifestation, an interpretation, and only after can it be concluded that one necessarily discretely experiences. I think you may be attempting to use the term "discrete experience" synonymously with my attempted use of "conceptualization"; however, I find "discrete experience" to have confusing, almost contradictory, implications (no offense).

    If conceptualization is useful as a word, then simply follow the process. Discretely experience the word in your mind. Make it have essential properties that are non-synonymous, or distinct enough from another word as to be useful so that it is distinctive knowledge. Then, apply it to reality without contradiction. If you can do it once, then you have applicable knowledge that such a word is useful in reality.

    I think this is another big difference between us both: I don't think you can apply a tool of knowledge to that which is immediately known. I think you are attempting to acquire, holistically, all the knowledge you can claim to have via a tool: I don't think it makes sense to claim you can "know" something via a tool, yet "do not know" the manifestations that were required for the tool of knowledge in the first place. Now here's where it gets a bit complicated (and you are right to point out my confusing terminology), because there's a difference between the manifestations and anything built off of those manifestations. For example, when I state that you "do not know" the manifestations that were required for the tool of knowledge in the first place, I am not referring to anything concluded to precede that tool of knowledge; in other words, a concluded manifestor by means of the manifestations. I think this is what you meant by the "I" and how it doesn't constitute knowledge: a concept of a manifestor must be subject to the tool of knowledge. We are in no disagreement there. However, the manifestations themselves are necessarily not subject to the tool of knowledge: they are the point of absolutely no movement (metaphorically speaking: the point of manifestation). It is the point of neither deduction nor induction; it is given. However, to emphasize this a bit more, when I state "it is given", I am doing so without conceding a giver. A giver would (metaphorically speaking) require movement and, therefore, would be subject to the tool of knowledge. I wanted to make clear, first and foremost, that the division I am seeking is that of no movement vs movement, all of which is "knowledge"--but the former is given (with restraint from conceding a giver) while the latter is obtained (via a tool of knowledge).
    So, with that in mind, I think you are not addressing my point here (and it is not your fault, I am doing a poor job of explaining it), which is evident to me from the fact that you are attempting to apply it. Anything applied is subject to the tool of knowledge. Conceptualization is not subject to such: it is absolutely no movement.

    From discrete experience, I define thoughts, sensations, and memory. Then I apply them to reality.

    Again, I think we agree that we can't apply discrete experience to reality because it is reality. However, if that is the case, I don't see how we could logically attribute something acquired via it as knowledge without conceding it is itself knowledge. Also, although you can apply thoughts, sensations, and memory to reality, you don't thereby obtain knowledge of them themselves and, thusly, cannot (not just do not) apply them to anything. It is like this: you can apply a belief to reality to see if it stands, but you necessarily "know" you have a belief without any application. I'm not sure if we are in agreement here or not. In other words, there are two aspects to those terms (thoughts, sensations, and memory); you are right with respect to one aspect, but I think you are disregarding the other.

    The issue with your current definition of conceptualization is that it isn't clear enough to show how it is separate enough from other useful words that can be applied to reality, and I'm not sure you've successfully applied it to reality yet without contradiction.

    I think, again, the confusion lies in the fact that I will never attempt, nor can I, to apply it to reality.

    There does seem to be something different from the act of first identifying "this" from "that", then adding a concept to it.

    For clarification purposes, I am not married to the term "conceptualization"; it is just the best term I've come up with so far. But conceptualization is the identification that is the point of no movement I am trying to convey. "Something different from the act of first identifying 'this' from 'that'" requires movement. I'm not really trying to address it in the sense of "well, I have this discrete experience, let me induce/deduce a useful concept out of it". I am more trying to address it in the sense of the actual manifestation in the first place (without initially conceding a "manifestor", and without ever initially conceding a differentiation of "this" from "that"). I think you are more arguing that this cannot be done, namely without conceding the differentiation of "this" from "that", and that is where I think we mainly differ.

    So please do not take my notes as discouragement. Continue please. I just think the clarity isn't quite there yet on the definition, so let's keep trying!

    I completely understand: fair enough! I've been definitely making things more confusing, and I apologize, I'm trying to make it simpler, but haven't quite gotten there yet.

    It is why I note we do not need to know why we discretely experience, it is simply an undeniable fundamental that we do.

    This is fair and true. However, I would like to emphasize that it is "an undeniable fundamental" (as in one of many, which are not the fundamental in terms of the point of all manifestation) and thus is derived from the manifestations themselves: the point of no movement, the point of manifestation, without conceding a manifestor, interpreter, etc. (as those would be subject to the tool of knowledge to obtain).

    This is simply a discrete experience as I describe it. "This" is not "that" is known by fact, because it is not contradicted.

    I would ask: how are you able to state it is not contradicted? Because the point of manifestation is cognitive in a sense; in other words, this is derived from a point of manifestation. "This" is not "that" is known because it is immediately given; you seem to still be claiming that you are applying it without contradiction, and that this is how you have obtained it as known. I would say you necessarily cannot apply it: it is what you apply to. I think your use of the principle of noncontradiction is simply assumed, but I think it actually exposes the true point of no movement. You are first utilizing something that necessarily derives everything else: this is not the differentiation of "this" from "that"; it is what allows for "this" is not "that" (without conceding an allower).

    Are the desk and keyboard in front of you both 100% separate and 100% not separate? If this were the case, you could not discretely experience them. At best, you can make a new word that describes both concepts together.

    I agree, but I would argue you are using the fundamental, point of manifestation, which dictates (without conceding a dictator) the necessity of the principle of noncontradiction. It isn't differentiation itself, nor the ability to "discretely experience" (in terms of the use of "discrete").

    The question after you realize you discretely experience is, "How do I know I discretely experience?" You try to contradict it. And as I've noted before, you cannot.

    Again, the question itself, the act of attempting to contradict it, and the realization is all "the tree", it is the point of manifestation, the point of zero movement. That is the fundamental of everything. This is why I would argue you can't actually even try to contradict it, it's just the fact that nothing happens that makes us feel like we successfully passed it through a test, but the manifestation of the "test" itself is what we were trying to pass through. It cannot be done. It is no different than trying to justify the principle of noncontradiction by trying to contradict it, it literally cannot occur (even as an attempt).

    With this, you can discretely experience whatever you like as long as it follows a few rules. It must be a distinct discrete experience that is in some way different from other discrete experiences in your head to avoid being a synonym, and it must not be contradicted by other discrete experiences you hold in your head.

    I agree, but these rules themselves require movement, which is derived from the point of no movement. They are manifestations which require a point of manifestation, without conceding a mover or manifestor initially, as that would be subject to the tool of knowledge to be either rejected or obtained. It is essentially a thing recursively exploring itself, using its own manifestations and rules to determine that it has manifestations and rules.

    And of course we've covered inductions in depth. The reason why I wanted to go over your definitions is that underlying those concepts are my concepts. Let's not even say underlying; concurrently is probably better. My context and definitions serve a particular purpose, while yours serve another. The question is, while your definitions can be distinctively known, can they be applicably known? I am not saying they cannot, they just haven't really been put to the test yet.

    I am hesitant to say we mean the same exact thing, or that I am implicitly, holistically using your epistemology yet, because I think you are still determining knowledge to be, holistically, that which must be tested. I am never going to test what is immediately known. And, likewise, I would consider it just as much knowledge as that of any tool of knowledge we can conceive of. Although I may just be misunderstanding you, I am not attempting to apply your tool of knowledge to the point of no movement, the point of manifestations. Also, I am only in agreement with you on "applying to reality without contradiction" if we are using "reality" in the sense of holistically all experience (which I think you are arguing for, but I just wanted to clarify). Your thoughts are enough to create mathematics (in a general sense; obviously not for the derivation of math equations that pertain to things that must be seen in order to make sense of them, but math, as discrete logic, requires nothing but differentiation--I don't need to see "this" from "that").

    Again, I would be hesitant to state we are concurrent, because I am only agreeing with you in the sense of the tool of knowledge, which is not, I would argue, holistically knowledge. You seem to be attempting to apply even our terms to a test, whereas I am saying there is such a thing as an untestable piece of knowledge (specifically one: the test itself, not that which tests--again, not conceding a tester, just the test itself, so to speak). I don't think we are in agreement about that.

    Why did I separate the act of discrete experience from knowledge? Because as you agree, knowledge is a tool. A tool is an invention that we build from other things that allows us to manipulate and reason about the world in a better way. Discrete experience is a natural part of our existence. Knowledge is a tool built from that natural part of our existence. It is the fundamental which helps to explain what knowledge is.

    Hopefully I've demonstrated that I do not think this is holistically the case. When I say "knowledge as a tool", I mean it as one subtype out of two distinct types. I don't see how someone could logically claim to "know" something by means of obtaining it from a tool, yet equally claim they "do not know" that which it is built off of (again, not an interpreter, but the mere interpretation itself). I also find it wrong to claim to "know" you discretely experience by means of applying it. Likewise, that you know you hold a belief (not pertaining to the truth of the content of such), or that you know that immediate perception, thought, emotion, etc. It seems like either you aren't granting these as known, or you are attempting to pass them through the tool to obtain them as knowledge (of which I think you are incapable; we are incapable of such).

    How do you know it's knowledge?

    My point is that you are immediately given, granted, the knowledge that you "know" that you are questioning how you know it's knowledge. I am in agreement with you that a tool would be required to evaluate the truth of the content, so to speak, of the question itself, but not the question as immediately manifested.

    It is no longer a tool, but the source itself.

    Again, I want to be careful with "source itself". In terms of movement, anything concluded, such as a source in the sense of an interpreter or manifestor, is subject to the tool. I am in agreement with you on that. However, the "source" as the immediate manifestations themselves--this is just known. And I don't think it would make logical sense to claim we can know something if the latter definition of "source" isn't known. So in a sense you are right, and in another sense you are wrong (it is the "source" and the tool, but not in the sense of any sort of movement).

    How then do I separate knowledge from a belief? If I can have knowledge that is a tool, and knowledge that is not a tool, isn't that an essential enough property for separating the concepts into two different concepts?

    Again, determining the truth of the content, or proposition, of a belief requires a tool. But you immediately know that you are having a belief, as it was immediately manifested as such. In other words, the belief that there is a red squirrel in my room would require a tool of knowledge to determine whether it is true or false, but the belief itself (as a belief) is necessarily known immediately. This doesn't erode the distinction between knowledge and the content of a belief: you can have a proposition you don't immediately know while still knowing that the very manifestation of the proposition itself is true (i.e., I don't immediately know if there is a red squirrel, but it is true that the belief--the proposition--has occurred to me). Likewise, I would say that the propositions in our thoughts, also called beliefs, are distinguished from knowledge; however, the thought itself is necessarily a true fact (and thus known). Not that the proposition is true, but that the fact that there is a proposition is necessarily true.

    Does the definition you use increase clarity, or cause confusion?

    It most definitely creates more confusion--fair enough! And so, if the objective is to portray as much as possible to the masses, then it may very well be useful to start simply with the fact that we differentiate. However, I don't think that, in terms of philosophy, our goal should be to just simplify positions, as that sometimes becomes an oversimplification--a lot of philosophical principles and achievements necessarily required at least some complex elaboration. I'm trying to say that starting with differentiation may be a necessarily false presupposition that can nevertheless be used to better portray the epistemology as a whole to the masses.

    Too detailed, and it can quickly dwell on details that aren't important to the overall concept. Too broad, and it can be misapplied.

    Absolutely fair enough! It is definitely a trade-off, but I am approaching this more from what is fundamental than from how to convey the most to the most people. I think both are worthy considerations.

    What you are doing right now is seeking that refinement. But I do not think at this point that there is any disagreement with the overall structure. The basic methodology is still applied to the terms you propose.

    Again, I am hesitant here to agree. For these reasons:

    1. You seem to be deriving from differentiation, not the point of manifestation.
    2. You seem to be claiming knowledge is strictly obtained and never given, without conceding a giver.

    I don't think I can really say I subscribe to your epistemology with such fundamental differences. I think you are more speaking in terms of once we are discussing the tool of knowledge, differentiation, and the principle of noncontradiction, then we generally agree and, thereby, I am subscribed in that sense.

    I would argue that it is both. It is necessary that atoms exist for the ruler to exist, whether you know it or not.

    I would like to be careful here as well: it seems to imply an "objective" reality that is an absolute reality (that which is not contingent on the subject). When I state "objective" reality, it is still in relation to the subject, and thus contingent on it to a degree. It is necessary that atoms exist for the ruler to exist within the constraints of what has been manifested for you as the subject. We cannot claim beyond that.

    I believe this is a conclusion of applicable knowledge, not simply distinctive knowledge or merely discrete experience.

    This is true, but not in relation to an absolute "objective" reality. However, as you probably agree, it is not strictly applicable knowledge either: it is a combination, as it all stems from those rules and the point of manifestation (what you would call discrete experience, which, I would argue, doesn't yet sound synonymous to me).

    As I mentioned before, we cannot discretely experience a contradiction. Because experiencing a contradiction, in the very real sense of experiencing something as both 100% identical and 100% not identical to another concept, is something we cannot experience.

    Again, this isn't because we applied the principle of noncontradiction and found no contradiction, thereby obtaining such knowledge; we simply "know" it because it is manifested necessarily that way. It is no different, I would say, from trying to legitimately apply the principle of noncontradiction to itself. I don't think it makes sense to constitute knowledge as strictly what has been applied (which implies strictly that which can be applied). Don't get me wrong, there is a very real sense in which you are right: we can make up plausibilities that are inapplicable (which I would argue are irrational inductions), and those will never constitute knowledge. But there is a difference between something we moved to in our reasoning that cannot be applied and the reasoning itself, which cannot be applied. These, in my head, are not the same "cannot be applied".

    You can discretely experience whatever you want. You know you can, because you have deduced it logically without contradiction.

    Although I understand what you are stating, and I agree in a sense, those two statements contradict each other. Also, it exposes the fact of the manifestations and the seemingly necessary contingency (which is also a manifestation) of the principle of noncontradiction.

    This leads me to another point: "reality" isn't just object, it is also subject. The thoughts themselves are a part of reality. When you "apply" your thoughts, strictly in the abstract, you are "applying to reality" without contradiction because the principle of noncontradiction is ingrained in us.

    Another thing to consider is that your terms are causing you to construct sentences whose meaning is difficult to grasp (not that I am not guilty of this too!): "The concept of the manifestation of the consideration". This seems verbose, and I'm having difficulty seeing the words as clearly defined identities that help me understand what is trying to be stated here. I can replace that entire sentence with, "However, the discrete experience of whether I hold a particular belief is not induced, nor deduced, nor applied; it is immediately acquired." It is something we simply do.

    Fair enough! However, I would say that your insertion of "discrete experience" necessarily erodes some of the meaning away, albeit my definitions aren't very good at all.

    "You can't even claim to know something if you haven't, to some degree or another, conceptualized (my adjustment: discretely experienced) that something."

    Yes, this is exactly the point I've been making.

    If you are claiming "discrete experience" is the point of manifestation--not directly differentiation, then we agree. If not, then I don't think you can perform that substitution there. — Bob Ross


    No, I am not using the terms manifestation or conceptualization. I'm not saying you can't. Those are your terms, and if you have contradictions or issues with them, it is for you to sort out. All I am saying is if a being can't part and parcel the sea of existence, it lacks a fundamental capability required to form knowledge.

    I am honestly not quite following your response here. It seems like you didn't really answer the question but, instead, referred it back to me. Either you agree that "discrete experience" is synonymous with "the point of manifestation and not directly differentiation", or you don't. I am simply trying to understand whether you are attempting the same thing I am with the term "discrete experience" or not. Again, when you say "a being can't part and parcel the sea of existence", you are implying that "differentiation" is "discrete experience", which is not what I am trying to convey. Also, the "sea of existence" seems to me to imply, again, an absolute reality which is considered "objective". In other words, the subject is parsing the "sea of existence". It isn't that I am arguing the subject is the sea of existence, or that the sea of existence doesn't exist, but that we only view it as the sea of existence from what actually is existence: the manifestations themselves. We only induce that there is a sea of existence from, not that which induces, but the manifestations of those inductions themselves. I think there is a big difference.

    I think, in terms of your circular-logic rebuttal, you are right if you are talking about the actual fundamental, but I don't think you are. I think you are taking a tiny step by means of the manifestations to prove differentiation, but then proving manifestations with differentiation (which I think is an IFF contingency; however, I do see your point).

    I look forward to hearing from you,
    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,

    To sum up, I think you are under the impression that differentiation and conceptualization are separate identities. I am not disagreeing that you can propose such differentiation. What I am noting is that they are subsumed by both being discrete experiences, and I am unsure where differentiation leaves off and conceptualization begins. Even if it is the case, you still need differentiation before conceptualization. One cannot conceptualize before one can differentiate.

    I think we may, after all, be attempting to convey the same underlying meaning with "conceptualization" and "discrete experience"; however, I find myself only in partial agreement with what you stated. I think it would be beneficial for me to define all the terms, their relation to one another, and give an elaboration on "knowledge" in general.

    Firstly, here's my interpretation of some of the definitions:

    discrete - individually separate and distinct. (as depicted in your last post)
    differentiation - the act of differentiating (I consider this synonymous with "the act of discretely experiencing"--as something being "discrete" is an instance of differentiation)
    discretely experiencing - the act of differentiating.

    Therefore, given those definitions, I think that your separation of "differentiation" and "conceptualization" as parts of "discrete experience" in your most recent post leads me to believe you may be attempting the same thing I am trying to convey with "conceptualization". You seem to be using "discrete experience" as something more fundamental than "differentiation", but--and here is where the confusion lies--at the same time you also seem to be using them synonymously.

    Once again, I cannot conceptualize without first being able to tell a difference. Or maybe, they are one and the same. Perhaps differentiation at even the lowest level is some type of conceptualization.

    The first sentence seems to imply that you require differentiation in order to do anything else, which, in my head, directly implies differentiation is discrete experience. However, thereafter, you seem to be claiming that "conceptualization" and "differentiation" may be synonymous, and that they are a part of a more fundamental "discrete experience":

    The point is, these are words that describe acts of discrete experience. Conceptualization about a discrete experience, is a discrete experience that describes another discrete experience. Discrete experience is a fundamental that underlies all of our capabilities to believe and know.

    And likewise:

    Differentiation, is the act of discretely experiencing. Within the sea of your experience, you are able to say, "This" is not "that".

    So I am a bit confused if you are arguing for "differentiation" as "discrete experience", or whether that "discrete experience" is more fundamental than "differentiation".

    I think this is a perfect time to elaborate on a couple more terms:

    Concept - A general idea or understanding of something: synonym: idea.
    Conceptualization - The act of manifesting a concept.
    Point of manifestation - the grounds of everything in terms of just chronological precedence (contrary to extrapolated chronological precedence).

    The reason I chose "concept" is that it is a purposely vague manifestation of an idea, which is (I think) the best term I could come up with for conveying a fundamental, rudimentary point of manifestation. It is like a "thought", but not completely analogous: it isn't truly thinking of itself, for that is a recursively obtained concept that one thinks--which is not necessary for a concept to manifest. Likewise, it isn't thinking in itself, because thinking of itself is required for such. Therefore, I call it "conceptualization": the act of manifesting a concept (or concepts). When I use the term "concept", I don't mean high-level discernment of things: everything is a concept, and concepts can be built off of one another. Everything is manifested as a concept, including "differentiation" itself. This may just be me using the term wrong, but I wanted to clarify my use of it.

    If what you mean by "discretely experience" is "the point of manifestation of everything, including everything itself", then I think we mean the same thing. However, my worry is any implication derived from "discretely" in "discretely experience": any extrapolation that differentiation is the point of manifestation. Notice that my definition here completely lacks any reference to "differentiation" (which, I think, includes "discrete", since it is also the separation of "this" from "that"), as I think it is manifested conceptually by means of the point of manifestation. If this is what you mean by "discretely experience", then we agree (however, I think the use of the term "discrete" in "discretely experience" then has unwanted implications).

    I want to point out the definition of discrete, and why I chose it. "discrete - individually separate and distinct." I was looking for a fundamental. Something that could describe a situation as a base.

    I am fine with your definition of "discrete"; however, when you say "I was looking for a fundamental", are you implying a fundamental that we must conceptualize to deem it so, or the point of manifestation required for that conceptualization in the first place? (The former I would call extrapolated chronological precedence, and the latter just chronological precedence.) I think this is a perfect segue into "knowledge". I don't think there is only induced or deduced (or distinctive and applicable) knowledge: there is immediately acquired knowledge, mediated deductive knowledge, and mediated inductive knowledge. So when I was previously (in a subsequent post) asking about it in the sense of "whether we must extrapolate differentiation, or whether it is the point of manifestation", I think I may have misled you with the term extrapolation; I am not implying that we induce differentiation. I am trying to imply that, once we conceptualize differentiation, we know it not as deduced nor induced but, rather, as immediately acquired knowledge. Let me explain a bit more about those three types of knowledge:

    Of manifestation vs from manifestation of itself - First I need to distinguish these two concepts (which I previously stated as "of itself" vs "in itself", but to resolve some confusion I think these other terms are better). "of manifestation" is as it is presented (its manifestation), whereas "from manifestation" is a form of knowledge either induced or deduced based off of "of manifestation" (that which was presented).

    Immediately acquired knowledge - that which is directly manifested (as a concept, I would argue) and, thereby, is immediately known. I think, generally, these are the principles of rudimentary logic (so to speak); perception, thought, and emotion of manifestations of themselves; and, more importantly, any conceptualizations of manifestations of themselves that may stem from any of the aforementioned. I don't need a tool of knowledge, i.e. an epistemology, to "know" that I differentiate, require a sufficient answer to everything ('sufficient' can vary, though), perceive, think, feel, or any form therein (within emotion, I don't need an epistemology to "know" "pain", generally, from "pleasure" of manifestations of themselves).

    Mediated deductive knowledge - that which is deduced based off of immediately acquired knowledge. It is distinguished from immediately acquired knowledge by being from manifestations of themselves, in terms of perception, thought, emotion, and any form therein. For example, I have immediately acquired knowledge of "emotion" in terms of manifestation of itself, but the conclusion of the concept of "emotion", holistically, required the use of the individual concepts of feeling (such as pain and pleasure) to deduce it (this is "emotion" from manifestation of itself--it is the deduced knowledge which was deduced from the manifestations of itself). I call it mediated because, although "emotion" of manifestation and from manifestation of itself are both conceptualized (manifested as a concept), one concept is clearly mediated by the immediate forms of knowledge while the other is, well, immediately known.

    Mediated inductive knowledge - that which is induced based off of immediately acquired knowledge and mediated deductive knowledge. It is essentially the realm of hierarchical inductions. For example, I know "emotion" of manifestations of itself and from manifestations of itself so far, and I can induce why I have "emotion" in the first place (in terms of evolution or biology, for example).

    It is important to note that I am claiming that conceptualization is occurring in all three forms of knowledge: these are all manifestations of concepts. However, there is nevertheless a meaningful distinction to be made, because they are all conceptualized in this necessary hierarchy. For example, mediated knowledge (both forms) adheres to and obeys the immediately acquired form. Differentiation and the principle of sufficient reason are two great examples of immediately acquired knowledge that is necessarily imposed on all mediated knowledge. The reason why this is the case is, as you mentioned, not the subject of our conversation (as of yet); merely that it is. They are necessarily imposed because all concepts that conform to the mediated type are always conceptualized--manifested via concept by the point of manifestation--as obeying such. Also, it is important to note that these are in relation to after it is conceptualized. So I am not claiming that you immediately know differentiation is occurring, only that, once it is conceptualized, it necessarily is known and requires no deducing nor inducing.

    So, with this in mind, when you stated:

    I'm not sure there is implicit knowledge. Knowledge is a process that must be followed to have it.

    There is no inherent knowledge. You can practice knowledge without knowing that you are doing it. You can have distinctive knowledge. You can even have applicable knowledge. But it is obtained because you are following the steps outlined in the epistemology. You can be blissfully unaware that it is what you are doing, and still have distinctive and applicable knowledge.

    I think you are 100% right in terms of knowledge as a tool, which I would say is mediated knowledge (it is, therefore, what one can learn). What one can't learn, what one cannot rationalize or reason one's way away from or towards, is the immediately acquired knowledge. Although I want to emphasize this is all the act of manifesting concepts (i.e. conceptualization), the immediately acquired knowledge isn't conceptualized in terms of deduction or induction (in other words, it is not a concept manifested in relation to other, more primitive or fundamental, concepts) but, rather, it is manifested as the basis--the ultimate bedrock. What I meant by implicit and explicit is more in terms of some concept being induced or deduced to have been occurring prior to its manifestation. For example, when we conceptualize that we must differentiate, that becomes something that must have been implicitly occurring all the while (i.e. the immediately acquired knowledge of manifestation of itself is utilized to induce--therefore mediated inductive knowledge--that it was occurring all the while; it is not to say that you knew it at the time). I don't think this is what you were referring to with your example of the runner: in terms of knowledge as a tool, thus the mediated forms, I would agree that the runner luckily, or "accidentally", followed the rules of the epistemology. However, and this may be a fundamental disagreement between us, I would state that "knowledge as a tool" (and, thus, knowledge that can be learned) is a mediated form of knowledge, not all knowledge.

    It's more like accidental vs explicit. I could find a ruler on the street and not know what cm means. But I do notice there are some lines. I measure something and say it's 4 ruler lines. I can safely say, within that context, that I have measured length with a ruler. But I don't know it's a ruler, or how it was made, or what any of the other symbols and lines mean, like inch. Within your first few paragraphs, if you replace "implicit" with "accidental" I think you'll see what I'm trying to point out.

    This is with reference to knowledge as a tool and, therefore, mediated forms of knowledge (I'm fine with that). But my point was that you can't induce or deduce any concept as to have been occurring implicitly all the while without it first being explicitly known.

    You can discretely experience without a theory of knowledge. I am noting that to explicitly know what knowledge is, the first thing you must come to know, is discrete experience.

    Your first sentence seems to align somewhat with my view of immediately acquired forms of knowledge; I think you just aren't categorizing it as "knowledge". I agree with the second sentence if you are defining "discrete experience" as "the point of manifestation of everything, including everything itself". I would then also add that the next step after "discrete experience", in order to know explicitly what knowledge is, is to know what "differentiation" is (and things pertaining to it, like the principle of noncontradiction). I think this is generally what you are arguing for, but I think "discrete experience" and "differentiation" are used both synonymously and not synonymously in your statements.

    With this, you can build a theory of knowledge. You don't have to know why you discretely experience. Just as I don't have to know the atomic make up of the ruler I am using. I just have to know what consistent spacing is. Of course, that doesn't mean there aren't atoms that make up that ruler. It also doesn't negate the fact that without atoms, there could be no ruler. But the knowledge of atoms is entirely irrelevant to the invention and use of a ruler. So with knowledge.

    This is true. But I would like to emphasize that even if it is necessarily the case that it is made up of atoms, this is all a part of extrapolated chronological precedence and not just chronological precedence. Yes, I am made of atoms, so in that sense I am derived (one way or another) from those atoms, which necessarily precede me (as a subject, a reflexive self, that is). However, all of this, including that previously rationalized statement, is derived from the point of manifestation, which manifests certain concepts as necessarily the case (such as our immediately acquired knowledge of differentiation and the principle of noncontradiction). So I would state that, with respect to conceptualization, it necessarily follows that I am preceded by atoms. Notice that the conceptualization is required, and is the spring of life (so to speak) of that very extrapolated truth.

    Basically, you are claiming (I think) that discrete experience cannot be contradicted because that contradiction also requires discrete experience. — Bob Ross

    Yes! I think you have it.

    If you agree with me here, then I would like to ask you how you or I derived this? I would say: from a manifestation of a concept that is immediately known and is revealed, so to speak, as absolutely, necessarily true. To be clear, I'm not asking you to explain why we discretely experience, only how you or I came up with that very claim. Did we just discretely experience it?

    If you conceptualized (discretely experienced) a blue ball within your mind that had clear essential properties to you, then you would distinctively know the blue ball.

    The essential properties themselves are concepts. When you have the belief that there is a blue ball, regardless of whether it is true or not, you know you have that belief. Moreover, if you want to take it a step deeper, if I want to determine whether I still hold a belief, then it will have to be applied without contradiction; however, the manifestation of the concept of the consideration of whether I still hold a particular belief is not induced nor deduced nor applied: it is immediately acquired. No process or tool of knowledge is required to know that. Likewise, if you are seeing a ball right in front of you, the belief aspect is the mediated deductive knowledge that it is a "blue ball" or mediated inductive knowledge of anything pertaining to the "blue ball", but the immediately acquired knowledge of the perception of the "blue ball", as the manifestation itself, is not a belief (nor deduced nor induced).

    "You can't even claim to know something if you haven't, to some degree or another, conceptualized (my adjustment: discretely experienced) that something."

    Yes, this is exactly the point I've been making.

    If you are claiming "discrete experience" is the point of manifestation--not directly differentiation, then we agree. If not, then I don't think you can perform that substitution there.

    Once I am able to see "this" is different from "that", I can detail it.

    You are either deducing or inducing this, which is not immediately acquired knowledge and, most importantly, you first must conceptualize it.

    Discrete experience is a cat. Conceptualization may be a tiger, but it's still a cat.

    A cat and a tiger are concepts. Again, I think we may be trying to utilize the same underlying meaning here, but I'm trying to understand if you are saying the fundamental base is differentiation, or if it is a separate, more fundamental, discrete experience.

    If you could try to present your argument that my proposal is circular with an A -> B -> A format, I think I could understand better where you're coming from, and we could settle that issue once and for all.

    Here's my understanding of circular arguments:

    1. Posited inquiry
    2. Justified explanation for 1
    3. Posited inquiry of that justification used in 2
    4. Justified with 1

    So, essentially, it is 1 -> 1 (or A -> A). Let me attempt an example:

    1. Posited inquiry: Is the Bible true?
    2. Justified explanation: Yes, God says so.
    3. Posited inquiry of 2: How do we know God tells the truth?
    4. Justified with 1: The Bible says so.

    1 -> ... -> 4 is actually just 1 -> 1. So I think it is with discrete experience in relation to reasoning:

    1. Posited inquiry: Do we discretely experience?
    2. Justified explanation: Yes, because reasoning deems it so (i.e. I cannot conceive without the use of discrete experience)
    3. Posited inquiry of 2: How do we know reasoning is a valid means of acquiring such knowledge?
    4. Justified with 1: Because we discretely experience, and that is all that is required to begin our epistemic exploration.

    1 -> ... -> 4 is actually just 1 -> 1. I think that if you are using "discrete experience" in the same manner that I am using "conceptualization", as previously defined, then it isn't circular, as it is the basis of reasoning itself, which, I would say, isn't differentiation. I think you may be arguing for this kind of thing with discrete experience but yet still implying differentiation in there a bit.

    I look forward to hearing from you,
    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,

    I agree, I think we are still not quite understanding each other, so I will try to do my best to respond to your statements (very thought-provoking as usual!).

    First, I do not think that discrete experience is the most fundamental thing that explains our existence. I think discrete experience is the most fundamental thing an existence must be able to do to know, and it is a fundamental that can first be defined clearly, and without contradiction

    Fair enough. I don't think that you are arguing that discrete experience is the only thing, or that we can't induce beyond (or before) that, but I am questioning your claim that it is the most fundamental thing an existence must have to be able to know. I am providing a contender: without convincement, discrete experience is useless and cannot be extrapolated in the first place. If you couldn't conclude anything at all, then you wouldn't know you discretely experience.

    Now, I think this leads me to a good point you made: the distinction between knowing something inherently and conceptualizing it. In other words, you don't need to conclude you discretely experience to discretely experience. However, although it is a splendid point, I think there are two different kinds of knowledge that need to be addressed here: implicit and explicit. For example, I can implicitly know that food is necessary for me to survive without explicitly knowing it at all. But once I conceptualize it to whatever degree, then it necessarily becomes explicit knowledge. I like to think of this in terms of knowledge pertaining of itself vs in itself: the former is the conceptualization of the latter (former is explicit, latter is implicit).

    The reason I think this to be incredibly important is that I think you are arguing for discrete experience, at its most fundamental state, as implicit knowledge (that can or cannot be made explicit) (aka discrete experience in itself and not of itself; although the latter is a possibility, the former is a necessity). Correct me if I am wrong here, but that is what I am understanding you to be, generally, claiming. I am trying to propose that implicit knowledge can only be actualized (and thus obtained) once it has been made explicit. In other words, existence (whatever thing we are talking about that exists) doesn't obtain implicit knowledge until after it conceptualizes it and extrapolates the implicit therefrom.
For example, let's say, hypothetically, that I never had realized, explicitly, that I discretely experience, and, upon reading your brilliant essays, now realize it. Then, and only then, would I then know that I implicitly discretely experienced all those times prior to the moment I realized explicitly that I discretely experience. If, for example, I never realized I discretely experience, then I would never know I discretely experience (because implicit knowledge is extrapolated from explicit knowledge). But there's also a need for considering the point of reference: with respect to you, even if I never realize I discretely experience, I may be a discrete experiencer to some degree or another. This entails that, if you've conceptualized me as a discrete experiencer whereas I haven't, you know I discretely experience but I don't. The moment, if at all, that I realize that I discretely experience, which would only be by means of extrapolating the implicit in itself from of itself, is the moment I know and not before that. Likewise, when I am arguing for thinking in itself, I think I was wrong to use that as the bedrock (along with motive) because the conceptualization of thinking (and motive) is required for me to even realize I think (or have a motive) in the first place, therefore thinking of itself (which is explicit knowledge) is required for me to then extrapolate that I was implicitly thinking in itself in the first place (and that it is a necessary extrapolation)(ditto for motive). I think this is the same process (fundamentally) for all knowledge: including this very statement I am making right now. That previous sentence required that I conceptualized such a thing, explicitly, and which I can claim therefrom to have been occurring implicitly before I made it explicit in my knowledge. Basically, you are claiming (I think) that discrete experience cannot be contradicted because that contradiction also requires discrete experience. 
I am claiming, although that is fine, it is an extrapolation that first had to be conceptualized (explicitly) to then, only thereafter, be considered implicitly true prior to its conceptualization. Therefore, the conceptualization is required first and foremost in order to ever claim anything ever was implicit previous to something explicitly being known. To know that you think requires that you conceptualized, to some degree, thought itself and then, therefrom, extrapolated you must have been thinking prior to this realization (i.e. implicitly)--my point is that without that explicit conceptualization, you would have never known that you think. Without the conceptualization that you discretely experience, you wouldn't know that you are implicitly discretely experiencing. However, you may still, even though you don't know you discretely experience, know things that stem from discrete experience. For example, if you conclude that you are seeing a blue ball, even if you don't know you discretely experience, you still know of the blue ball because you have conceptualized the blue ball. Moreover, you could then extrapolate that the blue ball was there prior to you conceptualizing it, but my point is that you wouldn't know that it was there unless you extrapolated it from your conceptualization of the blue ball. If you never would have explicitly known the blue ball, then you would never have known it in the first place. You can't even claim to know something if you haven't, to some degree or another, conceptualized that something.

    I want to be very clear, I do not think there is nothing prior to discrete experience. I also do not think that something that is not a "being" can discretely experience. I believe it is fundamental that there be a "self". One cannot discretely experience without being something.

    Fair enough. I apologize if I portrayed it that way: I never thought you were arguing the contrary.

    But I find that I cannot define the "self" as a fundamental, without first defining discrete experience.

    I agree, but in a slightly different way: the most fundamental, in the sense of what is conceptualized to be the most fundamental, is differentiation. But again, you could make claims pertaining to differentiated things all the while never knowing that you discretely experience, and, more importantly, you wouldn't even know you implicitly discretely experience until you know it explicitly. To even try to prove anything, including discrete experience, you must conceptualize it first (to some degree or another). I am trying to state that knowledge doesn't begin its manifestation with differentiation; it begins when it is conceptualized (made explicit).

    Perhaps you can prove this. Can you know something prior to discrete experience?

    I am not entirely sure that it is a proof, because I partially agree with you here, but to claim that discrete experience is implicitly required for all else requires explicit knowledge of such. So, in my head, when we are conversing about when someone knows something, it isn't the extrapolated implicit discrete experience that grants the right "to know it": it is the conceptualization of that contextual thing (or even of another concept--as to know in the abstract requires the conceptualization of such first and foremost as well). I think to say it truly is discrete experience is to operate with a hindsight bias after the fact that the person claiming it has extrapolated the implicit knowledge from the explicit knowledge.

    Can you know what an "I" is before you are able to differentiate between the totality of experience?

    Well, it depends on what you mean by "I". Technically speaking, the "I" isn't necessarily synonymous with conceptualization. The granting of knowledge is within each context, or avenue. So the "I", whatever you are depicting that as, is possible to be known without ever knowing of discrete experience (again, to say the "I" was implicitly discretely experiencing the whole time requires conceptualization of the implicit into something explicit, which is actually how I am able to claim it is the "implicit into something explicit": because I am extrapolating that the implicit must have come before the explicit in order to make sense of it). I get that it seems like I am using discrete experience to attack discrete experience (which is contradictory), but what I am really using is the conceptualized, explicit knowledge I have to base the claim that conceptualization must be the farthest we can derive without beginning extrapolation (this claim in itself is also a conceptualization).

    I know that you can believe such, but can you know it?

    Honestly, I think your argument is plenty strong enough to even claim that you cannot believe it without discretely experiencing. But this is only known after it has been conceptualized.

    Can you know what eyes are? A mind? The difference between your body and another thing? Conscious and unconscious?

    Although I understand and agree with you, oddly enough, I disagree (:. It is only after you have the conceptual knowledge (explicit knowledge) of discrete experience that you can claim that discrete experience was implicitly happening all the while when you previously conceptualized an eyeball. Prior to that, you did not know it (but yet you knew of an eyeball). I think when you say something along the lines of "try to disprove your discrete experiences without using your discrete experiences", I would like to agree (firstly) and (secondly) append "try to disprove or prove discrete experience without ever first conceptualizing it".

    I can't reasonably see how this is possible without the ability to discretely experience

    Again, I think this is hindsight bias: you have explicit knowledge of discrete experience (because you conceptualized it) and, only thereafter, now extrapolate that it was there implicitly all along. Without conceptualization, you wouldn't ever know anything (even if you implicitly discretely experience, for to know that you would have to explicitly conceptualize it first).

    Again, I do believe there is a "self", but I cannot define or even conceive of a self without first discretely experiencing.

    I understand (fair enough). But, again, you could know of a "self" without ever implicitly or explicitly knowing of discrete experience (discrete experience wouldn't be a part of your knowledge collection, so to speak). Therefore, the real contingency of knowledge is conceptualization, not differentiation. The former is utilized to conceive that the latter is logically necessary for all else. It is also a conceptualization.

    On a side note, I would also like to point out that the antonym of "differentiation" is not "nothing"; it is "oneness" (cohesion). It isn't necessarily true that you wouldn't exist without differentiation; you may exist as one with everything (therefore terms themselves wouldn't exist for you, but you would exist--in a sense). I would agree with you that, if you were oneness, you wouldn't know anything, but this is due to the lack of conceptualization. I think you are right in the sense that me even claiming "conceptualization", or even conceptualization in itself, is "contingent" on differentiation. However, that statement is a conceptualization first and foremost, and so it is with this statement as well. It all is. Does differentiation come first as an extrapolated truth (whereby it can be equally extrapolated to have been an implicit truth all along), or as the actual spark of manifestation? I think the former.

    An ant can discretely experience. Does it know what an "I" is? Does it know it can discretely experience? No, but it can know things, because it discretely experiences.

    No, within reference to itself, it knows nothing. With reference to you, it knows things. This is because it isn't about whether it knows it discretely experiences; it is about whether it conceptualizes to any degree. If it does, to contradict what I previously stated, then it knows. If it doesn't, then it doesn't know. But its knowledge has no direct relation to your knowledge of its knowledge. It could very well be the case that it doesn't conceptualize anything, but yet you, being able to conceptualize, deem that it does based off of your conceptualizations of its actions.

    It knows the sugar in front of it is good compared to the dirt that surrounds it.

    Again, you conceptualized this and, therefrom, deemed that ant to know. This doesn't mean that it actually knows anything (maybe it does, maybe it doesn't). Just because it is the most rational position for you, as a being capable of conceptualizing, to hold with reference to the ant, namely that it knows to some degree or another, doesn't mean that, in reference to itself, it knows anything at all. It would have to be able to conceptualize something. And, yes, again, me claiming "it must conceptualize something" is contingent on differentiation, because all conceptualizations I have had are conceptualized as being contingent on differentiation, therefore I deem it so (and me deeming it so is also a conceptualization). The way I see it, conceptualization is the point of manifestation for everything (including everything); it is the point at which you can quite literally, seemingly, postulate it ad infinitum recursively (reflexively) (although I don't think it is an actual infinite, only a potential one). It is where, in my opinion, you truly hit bedrock: where anything below, above, before, after, outside, or without, and those very concepts themselves, is extrapolated from (conceptualized). In other words, I can conceptualize that I must differentiate, but I hit bedrock (recursive potential infinity) if I try to conceptualize conceptualization, and so forth.

    I think the same thing is true of AI and other beings (to dive a bit into solipsism). To be clear, I am not a solipsist, but I think I may not be one for a different reason than you: I think that the most rational conclusion is that there are other beings like me with reference to my conceptualization of them, but that doesn't mean I've proved that they can conceptualize. It only means that, via my conceptualizations, the most rational position to hold is that of not being a solipsist. Again, just because I deem another person to know, or even if it is solely based off of risk analysis (which is also a conceptualization) (as in: what if I am wrong and choose to be a solipsist vs what if I am wrong and choose to respect other people as actually other people), doesn't mean that, in reference to themselves, they actually know anything.

    While "You" must exist to discretely experience, "You" existing does not give you the fundamentals of an epistemology, it is "You" that can discretely experience that does.

    I agree, but this is conceptualized and, thereby, only implicitly known after it is explicitly known. You are right that me existing does not ground an epistemology: it is the ability to conceptualize that very statement that grounds it (and the ability to conceptualize that, and this, and this, etc).

    I discretely experience, because any proposal that I do not discretely experience, is contradicted.

    Again, I think you can only propose this if you are able to conceptualize. First you must have explicit knowledge of this to then extrapolate it as implicit; therefore, you are extrapolating discrete experience as an implicit truth after you have already gained it as knowledge explicitly (aka via conceptualization). I think this is the point at which knowledge is granted, at least initially, or manifested: when it is conceptualized.

    The simple proof I put forward is that to present any counter argument to discretely experiencing, to even understand what it is you are trying to counter, you must discretely experience

    I think I can use that same argument to prove you are right and that that doesn't mean it is the point at which knowledge manifests. In order to even claim that I can't postulate a counter argument without differentiation, you must have a conceptualization (and same for me). I think that they are both deeply integrated into our existence, but one is the point of manifestation (conceptualization), the other is a product of that manifestation that is manifested as a necessity to all else (differentiation). However, although I think you are using A -> A still, I think that you are actually right: there is a point at which it is circular, and that is fine as long as it is the point of all other manifestation. I think that you think that point is differentiation, I think it is conceptualization.

    I hope this cleared up what I'm trying to prove

    I think I understand and I hope that I demonstrated that in my responses. If I didn't fully grasp your views, please, as always, correct me!

    I look forward to hearing from you,
    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,

    The goal of the knowledge theory was to find just one thing that I could "know", and use that to go from there. I can know that I discretely experience, but I explicitly did not try to determine "why" I discretely experience.

    My question for you is, is there something you feel 'motive' brings to the table that challenges or puts to question the formulation of the epistemology I've put forth so far? If yes, then we'll have to explore it in earnest.

    I think that, although I completely understand that it seems completely unrelated to your epistemology, the first quote pertains to my objection. I think that if you are trying to find one thing that you can "know", then, in terms of derivation, it should be you. Nothing, in terms of just chronological viability, can be derived further than the subject. It doesn't start with discrete experience; it starts with you obtaining the knowledge that you discretely experience by means of thinking in itself: you begin with thought (but, again, in itself and not its characterization of itself). Albeit a very close connection between the two, I do believe you start with the thought that convinces you that you discretely experience, and then go from there. This is why, although I agree with your work, I think your epistemology starts at some mile other than 0 in a 500 mile race: you simply start your endeavor off of an assumption, that is discrete experience, and work your way from there without providing sufficient justification for discrete experience. I may be simply misunderstanding you, but as far as I can tell it seems like your epistemology simply posits discrete experience as a given, and I am trying to get at the fact that the positing itself exposes a more fundamental aspect than differentiation. I truly do think that your argument (1) posits discrete experience as self-evident and (2), in actuality, utilizes the more fundamental aspect that is required to even put forth the argument in the first place, which thereby causes your argument to really be "I think (in itself), therefore I discretely experience. I discretely experience, therefore I think (in itself)" (this is no different than A -> B, B -> A, which really is A -> A, so I do think you are essentially saying "I discretely experience because I discretely experience"--hence #1). 
In other words, I am disputing the grounds of your epistemology, as I don't think your argument in the essays really provides any sufficient response.

    When you say that you don't provide a "why" for discretely experiencing, I think that is fair enough if you are right that discrete experience is the most fundamental thing, which I don't think is true. I think you are agreeing with me, then, that your epistemology starts with an axiom that must be assumed, but it is so effective only due to it being a commonality between humans (it isn't a very hard axiom to adopt). My point is that your argument has a fundamental flaw: you are arguing that discrete experience is the most fundamental, but yet you are using thinking in itself to do that in the first place. I think the fact that you can put forth an argument at all provides direct explication into the fact that discrete experience isn't the most fundamental thing. This may just very well be a point that you don't find particularly useful in terms of what you want to portray in your essays, but it really comes down to which requires the other to be viable, and thereby which is the most fundamental: the motive to differentiate (to think in itself), or the differentiation that occurs as a result of it? That is what I am trying to get at. This is why I think your epistemology fails: not because it is wrong, only because it posits discrete experience as if it actually is the most fundamental and as if that is proven. If your epistemology were to simply concede that it is starting off with the assumption of differentiation, I think everything necessarily follows quite nicely. I'm not sure if that makes sense or not.

    So this is sort of a descriptive order of causality, or why we arrive at the point that we are in our thinking?

    Yes, it is to question our way into deriving what must precede another for it to be viable. It is to determine what is the most fundamental in terms of what, in terms of your experience, is required for all else. However, on the flip side, it can also be analyzed in the sense of extrapolated precedence. So the utilization of whatever was required to even posit the questions in the first place can be utilized to determine what even itself must logically be preceded by in order for itself to be viable. However, my concern with extrapolated forms of derivation is that the subject ends up more sure of whatever they found logically necessarily precedes them than of themselves and then, subsequently, can fall into a trap of actually doing things that they normally wouldn't do as a result. I like to think of both as useful forms of derivation, but the derivation to what must exist for the consideration in the first place must always be a more sure fact than anything that can be necessarily, and logically, extrapolated from it (including its own extrapolation).

    It is not that discrete experience causes the motive to be, but we do need to discretely experience to know what the motive is.

    Yes, I believe this is accurate. However, I would like to emphasize that this in no way implies that we start with the differentiation (the discrete experience): it implies that the discrete experience is concluded to be true, or to exist, based off of the motive. I can posit that "I discretely experience", but the fact that I can posit it explicates what is actually the most fundamental. If it were possible to experience differentiation (discretely experience) without motive (or thinking in itself), although I can't say it is even possible, I would say that, paradoxically, you wouldn't experience at all. If you lacked any motive to be convinced via a set of rules, then you would never know that you experience in the first place. You never would have posited this epistemological theory. We wouldn't know that we are conversing right now. Etc. Now it may be that both motive and its subsequent differentiation are bi-dependent; however, my point is that differentiation is subsequent to motive, or, better yet, thinking in itself. Don't get me wrong, I think your concern is very warranted: would this really be worth prepending to your essays? Wouldn't it just over-complicate things? It may very well be that the best approach to your philosophy is to start with the assumption of discrete experience, even though it isn't the most fundamental, just to provide easier comprehension for the reader (or to keep it on point with what you would like to portray). But my point is that it doesn't seem like your essays really acknowledge this, as they actually, on the contrary, seem to be arguing that it is the most fundamental and that that is proven.

    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,

    I fully understand! It is a constant struggle for me as well. One of the reasons I respect you is you are a participant trying to understand what the underlying meaning of what I am saying is as well. I hope I have been as open and understanding back.

    Thank you! Same to you! I wasn't bringing that up with any implication that you weren't attempting to understand the underlying meaning; just that, since my terminology isn't really on point yet, I wanted to give you, so to speak, a simple acknowledgement that I haven't worked out all the kinks.

    Motive can be used to describe "Why I discretely experience". There is something that compels the mind to do so. What is that compulsion?

    Not quite, I would say. Motive is deeper than that: it is the underlying motivation, along with the most fundamental rules, that must be abided by. Therefore, I consider the statement "I discretely experience" an extrapolation which utilizes this fundamental motive, and subsequently the outlined rules that constrain it, to determine that it is true in the first place. I am trying to convey that it starts, at the most fundamental aspect, with motive, and consequently a set of rules, and not discrete experience.

    The issue I have is that this motive is logic. While a motive can be logic, it is unfortunately not the motive of everyone, nor necessarily a basic function of thought. Many thinking things are not motivated by logic. Survival and emotions seem to be the most basic of motives that compel us to discretely experience, and identify the world a particular way.

    When I used the term "logic", admittedly, this may not be the right word to use. I am not referring to anything that is taught. When you are using the term "logic", I am thinking of "rationality". Anything that could be taught to the subject must abide by the motive, the rules, that subject necessarily has. When I say "rules", I don't mean all the rules they could ever abide by, but, rather, the rules that are necessarily the case for any convincement to occur. In other words, you can't teach them anything without them first having this motive, for that would mean they don't have any set of rules, fundamental rules that is, that they must follow. This doesn't mean that whatever they are convinced of is rational, but it does mean they are convinced of something. This convincement can only occur, I am trying to argue, given a motive, which perpetuates the rudimentary, fundamental rules that must be abided by for a claim to convince them. Therefore, I see no difference with "survival" or "emotions". If someone does something based off of "emotions", they have considered that their claim abides by the most rudimentary rules, set in place by the motive, and thereby, in the heat of the moment, they are convinced of it.

    When you say:
    " Survival and emotions seem to be the most basic of motives that compel us to discretely experience, and identify the world a particular way"

    I think this is an extrapolation that requires the motive in the first place for you to be convinced of it. You are extrapolating claims, for example in terms of survival, based off of empirical evidence in terms of evolution (which is fine, but this is an analysis at a much higher level than what I am trying to convey, because it utilizes motive in the process).

    Logic can be done without training or thought, but it is often something learned

    Again, I may be just misusing "logic", but I would consider your use of "logic" to be "rationality". Everyone, in the sense that I am using it, utilizes "logic". It is simply a rudimentary, most fundamental, set of rules that is perpetuated by a motive. Without it, the subject wouldn't be capable of rationality or irrationality.

    It is a higher order of thinking that one must learn by experience or be taught to consistently think and be motivated in such a manner.

    I would characterize this as "rationality" (or something like that).

    How do I take the fact that I discretely experience, and use it in a logical way?

    In terms of what I am trying to convey, this proposition here is too high level. The most fundamental thing isn't that you discretely experience, it is that you are convinced that it is the case. There was a motivation, with you as a subject, to innately attempt convincement by means of a rudimentary set of rules that must be abided by. In other words, when you claim "you discretely experience" or that "the most fundamental thing is discrete experience", these claims are only possible if something has a motive to try to be convinced of them via a set of necessary rules. This set of rules doesn't have to encompass all of rational thought: the motivation towards the necessary use of the rudimentary rules, along with those very rules, is rational, but not all rational rules are in that set of rudimentary rules (they can be built off of them). I think you are generally correct in the sense that if I were to lump "logic" together with what can be learned, as well as what isn't learned but is built upon, then that is true. However, if I did that, then "logic" would necessarily have to have two sub-types, and motive would be associated with the aspect that isn't learned.

    There is nothing to compel us to think logically, but a logical conclusion itself. A person who rejects logic entirely in favor of survival or emotions will not be able to discretely experience in terms of knowable outcomes, but in more of a selfish and basic survival satisfaction.

    Again, I would say "nothing compels us to think rationally". But if I were to go with the way you are using "logic", then I would split it into two sub-types, and emphasize the aspect that is necessary to learn anything in the first place. Survival and emotions are still formulated from the motive and its rules. If the subject were truly convinced that their decision to follow emotions isn't correct, then they necessarily would not follow their emotions. They may deny certain rational claims, but they necessarily utilize a particular motive, which they don't control, and which perpetuates the rules by which they are convinced of anything at all. Does that make sense?

    How do you convince a person to think logically?

    Again, you would have to convince them, which would require that it abide by the necessary rules, from the motive, that are in place for them (I don't mean "in place" in the sense that they are choosing them--they aren't).

    You've used a term a couple of times here, "chronological viability". What does that mean to you? You've noted two types. Could you flesh them out for me? Thanks for the great input!

    Of course. So, in a nutshell, "chronological viability" is the attempt of the subject to derive the chronological order of what must come first before another thing. I call it "viability" because I see the derivation of things in terms of which order produces the necessary viability that I experience. For example, if I were to think that my discrete experience is derived from the car I see, in a literal sense, then that cannot be true, because the viability of the car's existence as I see it depends on discrete experience in the first place. But to ask "what must come first" can be taken two different ways (I think). It is sort of like the chicken vs the egg: which comes first? I think we can derive something in terms of extrapolated chronological viability and just chronological viability. The former would be more in terms of the egg coming before the chicken (the bedrock of the chicken is the egg, as the egg must come first for the chicken to be there), whereas the latter is in terms of what had to be there for the consideration in the first place (the chicken extrapolated that it came from the egg, therefore the chicken is required for that extrapolation to occur in the first place, therefore it must be more sure of its existence than of the fact that it came from the egg). I think these are both important aspects of derivation, but neither should be taken into account solely without consideration of the other. I think that the motive, and its rules, is required, in the sense of just chronological viability, before the discrete experience. But once the motive is in place, whatever it may be, and consequently its rules, then it necessarily follows that anything I can possibly imagine requires discrete experience--including the attempted derivation of the motive itself and its rules. Does that make sense?

    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,

    Sorry for the wait Bob, busy week, and I wanted to have time to focus and make sure I really covered the answers here.

    No worries! I always appreciate your responses because they are so well thought out!

    I think, before I address your post (which is marvelously well done!), I need to try to convey, in terms of our discussions pertaining to thought, the underlying meaning of what I am attempting to portray. Forgive me, but I am still contemplating it and, consequently refurbishing my ideas on the subject as I go on, so the terminology is not what I would prefer you to focus on (as I try to explicate it hereafter): it is the underlying meaning (because I freely admit that these terms I am about to use may not be the best ones, but, unfortunately, they are the best ones I can think of right now).

    I first would like to explain back to you my understanding of discrete experience and then utilize that to attempt to convey a problem with it being utilized as the most fundamental in terms of chronological viability (derivation of the subject, and consequently everything, in terms of viability). When I, shortly hereafter, explain your concept of discrete experience, please correct me anywhere I am misunderstanding as it is crucial to what I state afterwards!

    Discrete experience is differentiation, that is, the capability of impenetrability and cohesion. Without such, we wouldn't have existence, or, at the very least, it wouldn't be anything like we are now, as there is nothing more fundamental than differentiation (or at least I think that is how your argument goes, but, again, please correct me!). So when you say:

    The question of course is, can you even make an argument against discretely experiencing, if you didn't discretely experience?

    I think this is exactly what you are arguing: differentiation is the derivative of all else. So, no matter what thought manifests in my mind to counter your claim, I must concede, as you are right, that it in itself required "discrete experience"--that is, to be more specific, differentiation.

    However, I think this is wrong and right. Right in the sense of derivation in terms of extrapolated chronological viability, and wrong in the sense of derivation in terms of just chronological viability. Let me try to explain.

    In terms of derivation, we first have thoughts, but, as you rightly pointed out, there is a difference between the concept of thoughts (which is an extrapolated inference of what is typically characterized as the process of thinking) and thought itself. You are absolutely right that a person could never define thought, however, they would still be thinking. This is where, as you also rightly pointed out, a distinction needs to be made: thinking in itself and its own extrapolation of itself into a characterized process. The latter is not required, the former is. Furthermore, this is why I will be disregarding the latter, the characterized process, for now and focusing on the former because I am attempting the derivation of chronological viability of the subject (myself).

    With respect to thinking in itself (not to be conflated with Kant's notorious use of things in themselves; I am making no such noumenon/phenomenon distinction--I just can't think of a better word yet), it, in turn, requires a further derivation: I can question, logically, the very discernment between the thoughts themselves, which is also a thought that relates solely to thinking in itself and not to traditional objects. In other words, I have thought A and thought B, and I can ask, logically, "why was I able to have A and B and not just a blob of thought (meaning the cohesion of all thoughts)?". I think this is the level at which your argument determines that the answer to such a question is the more fundamental "discrete experience" (aka, differentiation is required for the thoughts to occur). You are right! But the derivation does not stop there. Now, in terms of the aforementioned question, I could legitimately answer myself with "differentiation must occur for my thoughts". This is 100% valid. However, now I can ask a further question: "how am I able to be convinced, and why am I convinced that my answer satisfied it?". I think this reveals to the subject that the most fundamental thing, in terms of just chronological viability, is the fact that they are a motive. They are a perpetual motive towards logic, such that any answer (any conclusion) that satisfies logic satisfies the subject. Now I think we are getting more fundamental than simply differentiation. There are rules, identified later as "logic", which the subject, at its most rudimentary form, is perpetually motivated to follow. Without that, differentiation is meaningless. This is because a being could have the capability to differentiate while never being motivated to utilize it within any construct of rules.

    Now, I think you could counter this with "logic requires differentiation to occur in the first place", but my point is that motivation doesn't necessarily require differentiation: it is the thing differentiating, based on that motive. Another problem is that I can answer "what is the motive?" by utilizing that motive, thereby within its motivated constraints, but I cannot really answer, as of yet, "why is there a motive?". To be clear, I don't mean motive in the sense of "I want to do this"; no, that is already very far away in the sense of derivation and, consequently, utilizes the motive and discrete experience to conclude such a "want". This is what I was trying to get at with rudimentary reason, but I am not sure anymore if that is the best term for it.

    I think you are arguing that differentiation is the key to everything (in terms of derivation); I am saying you are right if we are talking about derivation in the sense of extrapolated chronological viability, contrary to just chronological viability. The reason I think this is the case is because it is from this motive that we differentiate, and thereby it is by means of the motive that we conclude we have discrete experience in the first place. In simpler terms, the fact that either of us can argue either way requires a motive we did not choose, for it is our bedrock, which constrains us to logic, which we don't ever have to define to know it is true. That is why we can create a vicious absurdity of questioning where we demand a logical explanation for everything--including the concept of everything. My point is that this demanding requires a motive which precedes differentiation in terms of viability: differentiation requires a motive. However, in terms of extrapolating where that motive came from (the "why is the motive there"), I think the best explanation is what you are arguing for: differentiation is required. This is because I could very well counter this with "well, motive itself is a differentiation of sorts". This is true, but I am trying to convey that that is utilizing the motive to even make that statement in the first place, and therefore everything points back to this motive. But if I ever want to attempt to explain the motive, then I will be bound to its rules, logic, which I must be convinced is being obeyed, and therefore it will require that I extrapolate, within the inevitable use of the motive, that it all requires differentiation--including itself. Notice that this doesn't actually mean that motive depends on differentiation, only that the use of the motive to derive itself will always conclude, due to abiding by its own rules, that it requires differentiation. Does that make any sense?

    So, with that in mind (hopefully I did a good enough job of explaining it for now), when you say:

    Can you disprove that you discretely experience?

    I cannot, because my use of my motive to derive my motive will inevitably be constrained to its logic, which will require that it be convinced that itself is derived from differentiation. This is not the same thing as stating it actually is derived from differentiation. Do you see what I am, in underlying meaning, trying to convey?

    It is the ability to take the entirety of your experience, and divide it into parts.

    Again, this requires a motive.

    If that didn't make sense, please let me know! But if it did, I think it demonstrates quite effectively that, regardless of how we want to define thought, we are logically bound to utilizing thinking in itself, via the motive, to derive differentiation in the first place. Therefore, I still do think you are arguing "I think, therefore I discretely experience, and vice versa", but "think" in the sense of itself, which requires no defining. In other words, I don't think you are arguing in your paper that the concept of thinking, as defined in your paper, is what derives discrete experience, but, rather, that you are implicitly utilizing thinking in itself throughout the entirety of the derivation.

    This is incorrect. Thoughts have nothing to do with the ability to discretely experience. I never say, "First I think, then I discretely experience."

    I think you are talking about the concept of thoughts, and in that sense I think you are right. But not in the sense of thinking in itself.

    I eliminate thoughts, and arrive at the idea that discrete experience is the one thing I cannot eliminate.

    You cannot eliminate motive, subsequently thinking in itself, without utilizing it to attempt to do so. I think you are talking about the concept of thoughts, which is a defining, and you are right in that sense, but I am not trying to argue against that at all.

    Since this is becoming entirely too long, I will address the possibility and "first cause" points you made after I let you have proper time to respond to the aforementioned comments. Again, splendid post! I really enjoy reading your responses as they are incredibly well thought out!

    I look forward to hearing from you,
    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,

    The reason why I haven't yet lumped it into an irrational induction, is there is an essential difference between the two. An inapplicable plausibility is unable to be applied, while an irrational induction is a belief in something, despite the application contradicting the belief. But as you've noted, neither have potential, so I think they can be lumped together into a category.

    Yes, but I don't think an "irrational induction" is "despite the application contradicting the belief" anymore; it is when it is impossible and has no potential. However, I do see your point: I don't think all inapplicable plausibilities are irrational, depending on how we define it. There's a difference between claiming something that cannot be applied now or even in one's lifetime (or 30,000 years from now) and something that can be proven to lack potential (meaning it can be abstractly proven to never be able to be applied). For example, the belief in a magical unicorn that can fly and has invisibility powers isn't necessarily an inapplicable plausibility in terms of the latter; it could be that, as our technology advances, we can actually detect invisible things somehow, or maybe we find one on another planet or something. However, the belief that there's an undetectable unicorn is an example, I would say, of the latter: we can, in the abstract, since it is undetectable, determine it is an irrational induction because it lacks potential. If we define inapplicable plausibilities in the manner of the latter, then I would advocate that all inapplicable plausibilities are actually irrational inductions. However, if the former is also utilized to a certain degree, then further consideration is required.


    I think a more accurate comparison would be "Claiming there is a first cause is the same as claiming there is a smallest particle that can exist." Comparatively, claiming "this thing is a first cause" is the same as claiming "this particle is the smallest particle." Each has different claims of existence and logic behind it. While I believe the most cogent belief is that there is at least one first cause, I find the bar to prove that any one thing is a first cause may be extremely difficult to claim.

    Although I understand the distinction you are making here, it is still an irrational induction based on the same logic. I can abstractly prove that when you say it "may be extremely difficult to claim" that "there is a smallest particle that can exist", such a claim can never be applicably known. Therefore, it lacks potential and, subsequently, is an irrational induction. Stating "there is a smallest particle that can exist" is no different than stating "there is an undetectable unicorn". You can never verify either, nor can you disprove it, because it is actually a form of irrationality (I don't need to disprove it beyond demonstrating it lacks potential). I think the only way to amend this is if you were to accept inapplicable inductions, in the manner where they can never be known, as rational. I would disagree (although, yes, this kind of irrational induction targets potentiality and not impossibility).

    The reason is simple. A first cause has no prior reason for its existence. But there is nothing to prevent it from appearing in such a way, that a person could still interpret that something caused it to exist. If a particle appeared with a velocity, how could we tell the difference between it, and a particle whose velocity was caused by another? We would have to witness the inception of the self-caused particle at the time of its formation. But a historical analysis would make the revelation of certain types of self-caused things impossible.

    I think that you are starting to demonstrate why this has no potential. It can never be applicably known. It is simply a belief within the mind, like an undetectable unicorn.

    1. One must have distinctive knowledge first. Distinctive knowledge is the essential properties you have decided something should be. I can define a "tree" as being a wooden plant that is taller than myself.

    2. Experience something, and state, "That is a tree." To applicably know it is a tree, your essential properties must not be contradicted. It turns out the plant I'm looking at is wooden, and taller than myself. I applicably know it as a tree. Therefore I know it is possible that there are wooden plants taller than myself.

    I have no problem with #1, but #2 is where the ambiguity is introduced: you are clumping "trees" together as if that were a universal; it is a particular. To "experience something, and state 'that is X'" is something someone can do with virtually anything. To say that the only requirement in #2 is that the essential properties are not contradicted is like using potentiality as if it were possibility. Just because the essential properties don't contradict doesn't mean I am justified in claiming X and Y are similar enough for me to constitute them as the same experience on two different occasions. Although I am probably just misunderstanding you, there's no real justification here that gravity as experienced here is similar enough to say it is the same there. Sure, we could say that it has the same essential property, that it falls both times, but that does not mean they are identical enough to constitute the same experience: experiencing it on a mountain isn't the same as in a valley. Can I say, after experiencing it in a valley, that it is possible on a mountain?

    I don't believe this is the case. Circular logic is when a reason, B, is formed from A, and A can only be formed from B. Thus the simple example of, "The bible states God exists. How do we know the bible is true? God says it is."

    Let's break this down in the proper circular logic format as you described:

    A is because of B, B is because of A.
    Bible states God exists, We know it is true because God says so.
    I discretely experience because I concluded it in my thoughts without contradiction. How do I know I think? Because I discretely experience.

    Your argument, as I understand it, is also circular. In order for any of the epistemology to work, you must conclude, which is a thought. So you conclude you have discrete experiences. But then it can be posited, "how do I know I think?", and your answer is: "I discretely experience". They are dependent on one another: this is the exact same thing as A proves B, B proves A. Maybe I am just missing something though.

    My definition of "thoughts" does not prove discrete experience. My definition of thoughts comes from discrete experience.

    Here it is in action (I think): you are saying you don't prove discrete experience with thought, because you simply discretely experience. But the whole thing, including the acknowledgment that you discretely experience, is dependent on your having a conclusory thought. I think your argument is along these lines:

    1. I think, therefore I discretely experience
    2. I discretely experience, therefore I think

    I don't think this is explicitly what you were arguing for, but, nevertheless, I think your argument is implicitly utilizing this kind of circular logic. You use your ability to think to conclude that you discretely experience, and then you simply justify those thoughts with the fact that you discretely experience. This is circular. My original way, "I think, therefore I think", was a bad way of demonstrating this, so I apologize for the confusion; it is more about the relationship between thought and discretely experiencing.

    Thoughts, as defined here, are simply my ability to continue to discretely experience when I stop sensing. I can choose that definition, because I can choose how to discretely experience.

    Again, you are concluding this, which is a thought, so you are using thought to prove discrete experiences, and then vice-versa.

    It is, "I discretely experience, therefore I can define a portion of my experience as 'thoughts'."

    Again, how did you conclude that? You thought (concluded) that you discretely experience, and then you justified the process of thinking (which you used to acknowledge discrete experience) with the fact that you discretely experience. The separation of your experience into perception, emotion, and thought in itself depends on thought (a particular kind I called rudimentary reason). If you couldn't conclude, then you never would have determined that you discretely experience (I would argue that you can't discretely experience without some form of rudimentary reason).

    If you think I do not know that within my self-context, can you disprove it? Can you demonstrate that I do not discretely experience?

    I think this is an appeal to ignorance fallacy: I don't have to disprove it. I am simply analyzing your proof for the conclusion that we discretely experience, and I think it is circular. Even if you completely agreed with me that it is circular, that wouldn't disprove that we discretely experience (and I don't think it has to). I am failing to understand why I would need to disprove it.

    I look forward to your response,
    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,

    Yes, if you're just comparing the fundamental building blocks of different plausibilities, you can determine plausibility A is more cogent than plausibility B. The problem is, if they aren't within the same context, how useful is that analysis?

    I think the comparison is more relevant when you actually have to choose between the two. As a radical example, imagine someone puts a gun up to your head and tells you to bet your life on either plausibility A or B (where both are completely unrelated): I don't think you would just flip a coin, or answer with indifference. I think you would analyze which you are more sure of.

    Your two examples are great. Unlimited infinities are irrational. But some limited infinities may be inapplicable plausibilities. Perhaps there is no limit to space, for example. It's plausible. But it is currently inapplicable.

    Excellent point! You are right: potential infinites, when asserted as if they are actual infinites, are also irrational inductions because they are inapplicable plausibilities. I think you were right in wanting to move inapplicable plausibilities to irrational inductions, because they lack potential. I can never apply the belief that any given infinite, within a limit, is actually infinite. Splendid point!

    Yes. Stating that everything which has a cause must have a cause is an unlimited infinity. It breaks down if you examine it in the argument. All that is left is that there must be a first cause. BUT, this is still either an applicable or inapplicable plausibility at best. It is simply more cogent to believe that there is a first cause than not. Since we do not have any higher induction we can make in regards to a first cause within the context of that argument, it is more cogent to conclude there is a first cause.

    Now that we agree that actual infinites are irrational, you are right: the other option seems to be a first cause. However, claiming there is a first cause would be the same as claiming this particle is actually the smallest particle that can exist: it is an inapplicable plausibility. Inapplicable plausibilities are irrational inductions (because they lack potential). I can, in the abstract, prove that we will never be able to state that "this is the first cause", just like we cannot state "this is actually the smallest thing". They are both irrational inductions. What we could say is that "this is potentially the smallest thing", and that is an applicable plausibility (if no one finds anything smaller, then it is potentially the smallest thing). So, in light of this, I think that, at best, you could only claim, rationally, that this or that thing is potentially the first cause: never that there actually is one. Then I think we would be on the same page, as claiming potentials would restrict us to our true limits of experience, and anything attempting to go beyond that is irrational. This is what I mean by explanatory-collapsibility: restraining oneself from going beyond one's capabilities, where one is susceptible to making actual claims when it is really potential. We are always in a box, and in that box we shall stay.

    I'm not sure if that answered the question, but I felt this was a good example to show the fine line between what can be applicably known, possibility, and plausibility. Feel free to dig in deeper.

    Although I really appreciate the elaboration, I don't think you addressed the most fundamental issue.

    It is when you have concluded applicable knowledge within your context.

    I consider this completely ambiguous, although I understand what you are trying to say. I think, as of now, your epistemology is just leaving it up to the subject to decide what is or isn't possible (because they can make, in the absence of any clear definition, "experienced before" mean anything they want). If we don't draw a line at where something has been experienced before, then I think possibility loses its power, so to speak. Is experiencing that apple enough to justify this apple? Is experiencing gravity on earth enough for the moon? Is my car starting enough to justify another car starting? What if they are the same exact model? What if they are from different manufacturers? We've touched on this a bit before, but, mereologically, where are we drawing the line such that "experience before" is similar enough to "experience now" to the point where I can logically associate them together?

    I do not claim that perception, thoughts, and emotions are valid sources of knowledge.

    If you are saying that you aren't claiming your knowledge to necessarily be true, then I agree.

    I claim they are things we know, due to the basis of proving, and thus knowing, that I can discretely experience.

    My point is that it isn't a proof: it is a vicious circle. As far as I understand it, you are stating that "I think, therefore I think", "I perceive, therefore I perceive", and "I feel, therefore I feel". These are not proofs; these are the definition of circular logic.

    The discrete experience you have, the separation of the sea of existence into parts and parcels, is not an assumption, or a belief. It is your direct experience, your distinctive knowledge. I form the discrete experience of thoughts as a very low set of essential properties in the beginning, so that I can get to the basic idea of the theory.

    I am having a hard time of understanding how this isn't "I discretely experience because I discretely experience".

    You create an idea of a thought, and you confirm it without contradiction immediately, because it is a discrete experience.

    Again, how is this not "I think, therefore I think"? This boils down to: "I know that I experience discretely, because I do". This is the definition of circular logic.

    If only I could ever get the idea out there in the philosophical community at large. I have tried publication to no avail. Honestly, I don't even care about credit. Perhaps someone on these forums will read it, understand it, and be able to do what I was unable to. Or perhaps someone will come along and finally disprove it. Either way, it would make me happy to have some resolution for it.

    I am truly sorry that people aren't taking your epistemology seriously: it deserves the credit it is due! I think your biggest adversaries are the rationalists. They will put a priori knowledge at a higher priority than the a posteriori, the egg before the chicken, whereas I think your epistemology does the reverse (although you don't subscribe to such a distinction, that's typically how they will view it).

    Thanks again Bob. It has been very gratifying to have someone seriously read and understand the theory up to this point. Whether the theory continues to hold, or crashes and burns, this has been enough.

    Of course, thank you for such a lovely conversation! I thoroughly enjoyed understanding your epistemology.

    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,

    Yes, I think this works nicely! I think potentiality nicely describes the process of creating the useful distinctive knowledge we come up with. Anything which we come up with in our minds that contradicts our other distinctive knowledge could be said to lack "potential".

    Yes, I think we are on the same page now!

    So if you conclude that an induction is built up of two essential properties, one having a direct grounds off of applicable knowledge, while the other has grounds on plausibilities, you can rationally reject the second essential property, but keep the first.

    I agree. But I suspect that you are only referring to the comparison of plausibilities that relate to one another, so I would like to explicitly state that I am claiming that one can compare all plausibilities to one another in this manner. When comparing completely unrelated plausibilities, it isn't a matter of choosing which one you should hold: it is a matter of which one is stronger, which one we can be more sure of. I am not entirely sure if you would agree with me on that.

    In light of our recent agreements, I think it is safe for me to move on and explicate some of my other thoughts on your epistemology:

    Actual Infinities Are Irrational

    I think that, in light of our agreement on potentiality, we can finally prove that actual infinities are irrational inductions. To keep it brief, we can abstractly prove that actual infinities contradict logic: a great example of this is the infinite hotel problem (thought experiment). Therefore, since they contradict logic in the abstract, they lack potentiality. If they lack potentiality, then they are an irrational induction. Therefore, if any induction invokes such a principle, it must be an irrational induction unless it can safely separate itself from any actual-infinite claims it is actively utilizing. For example, if I say it is possible for an apple to exist in all of time and space, I am holding a legitimate possibility induction because I am utilizing a potential infinite, which has limits (in this case, the limits are space/time itself). However, if I say it is possible for an apple to exist within everything (where everything has no limits), then I am holding an irrational induction because actual infinities have no potential. Therefore, I think that your epistemology quite nicely dictates that our inductions can only be rational, in any sense of the term, if they utilize limits (which encompasses potential infinites). I think, as you may already be inferring, that this actually has heavy implications with respect to your idea of a "first cause", but I will refrain from continuing down that alleyway unless you want me to.
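    The infinite hotel thought experiment mentioned above can be sketched very simply. This is my own minimal illustration (not from the thread): the "shift every guest up one room" move that lets a completely full infinite hotel absorb a new guest, and why the same move fails for any finite truncation.

    ```python
    # Hilbert's "infinite hotel": every room n is occupied, yet a new guest
    # can be accommodated by moving each guest from room n to room n + 1.
    # The shift map is a bijection from the naturals onto the naturals minus
    # room 1, which is what lets a "full" hotel absorb another guest -- a
    # property no finite hotel (and no finite intuition) can have.

    def shift(n: int) -> int:
        """Guest in room n moves to room n + 1, freeing room 1."""
        return n + 1

    # For any finite truncation the same move just pushes the last guest
    # out of the building, which is why the paradox only arises for an
    # *actual* infinity of rooms.
    rooms = list(range(1, 6))                 # a finite 5-room hotel
    new_assignments = [shift(n) for n in rooms]
    print(new_assignments)                    # [2, 3, 4, 5, 6] -- room 6 doesn't exist
    ```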

    Mathematical Inductions Are Possibilities

    I know we had a lot of disputes about mathematical inductions, and so I wanted to briefly continue that conversation with the idea that mathematical inductions do not require another term, contrary to what I was claiming, because they are possibilities. If I say that F(N) works for all integers N, I am utilizing my distinctive knowledge to claim that it will hold again. This is no different than gravity: I have experienced it, therefore I say it is possible for it to happen again. At its most fundamental level, with math, I am claiming that my experience is differentiated all the time, therefore that differentiation should hold, theoretically, everywhere and every time. In other words, math is possible. I also see now that you were right in that probability is its own thing, because it takes it a step deeper: it isn't just a possibility.

    We Need to Define The Definition Of Possibility

    I think that it would be beneficial to really hone in on what it means to have "experienced something before". Where are we drawing the line? Is there a rational line to be drawn?

    Distinctive Knowledge is Assumed

    I think that your epistemology, at its core, rests on assumptions. Now, I don't mean this as a severe blow to your views: I agree with them. What I mean is that, as far as I understand, your epistemology really "kicks in" after the subject assumes that perception, thought, and emotion are valid sources of knowledge. If they agree with that assumption, then your epistemology works. However, since we are philosophizing, I think we really need to hone in on these fundamental principles a little deeper. I think so far your epistemology essentially states:

    "We think, therefore we think"
    "We perceive, therefore we perceive"
    "We feel, therefore we feel"

    Just some food for thought! I know these are probably loaded, completely separate, propositions of mine. So feel free to guide the conversation as you wish.

    I look forward to hearing from you,
    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,
    I apologize for such a belated response: I've been quite swarmed recently.

    Almost every single belief of induction is not contradicted in the abstract. Meaning at best we describe all inductions besides irrational induction.

    I think that the first sentence here describes something like survivorship bias: it isn't that almost every single induction has potential, it is that all beliefs of induction that hold any substance at all have potential; therefore, all the ones that have survived long enough for both of us to hear of them tend to have potential. Most people naturally revoke their own inductions that have no potential without ever verbalizing them, because it is the first consideration in the process of contemplation. What I am trying to say is that I wouldn't post an inductive belief on here if I were well aware that it had no potential. So I agree, but it doesn't imply what I think you are trying to imply: it doesn't mean that potentiality isn't a worthy, or relevant, consideration just because most inductions lacking it never make it out of our heads to other people. I agree with you that potentiality doesn't get the subject to a completely working, solid claim of knowledge.

    With respect to the second sentence, I holistically agree! The point I am trying to make is that "irrational induction" is not just what is contradicted by direct experience but, rather, also what is contradicted in the abstract. I think it may be the appropriate time to elaborate on what I mean by abstraction. A contemplation resides in the abstract, in a pure sense, if it isn't pertaining to particular experiences but, rather, is utilizing a combination of those experiences and/or a generic form of those experiences. For example, the consideration that 1 "thing" + 1 "thing" is 2 "things" is purely abstract because it doesn't pertain to particular experiences. Although we could dive deeply into what abstraction really is, I am going to intentionally keep it this vague so you can navigate the discussion where you would like. In light of this, the example of fitting malleable (as you rightly mentioned) candy bars into dimensions that cannot accommodate them is irrational not because it lacks possibility (I am not negating it based off of a direct experience), but because it lacks any potential. This is an irrational induction. What is also an irrational induction, but not based off of abstraction, would be if I were to hold the belief that some particular apple is poisonous, despite having experienced a person eating that exact apple and showing zero signs of poisoning. In this case, no abstraction is needed: the particular experience is enough to warrant it an irrational induction. That is essentially what I was trying to convey.

    Rationally, something that is not contradicted in the mind may have no bearing as to whether it is contradicted when applied to reality.

    I agree. I am not attempting to claim that something that has potential necessarily is possible (which is what I think you are getting at here). I am attempting to claim that something that has potential is more cogent than something that lacks potential.

    Perhaps "potentiality" could be used to describe the drive that pushes humanity forward to extend outside of its comfort zone of distinctive knowledge, and make the push for applicable knowledge. The drive to act on beliefs in reality.

    No, I don't think it is the drive; it is what most subjects do inherently (and what everyone does who has legitimately subscribed to the game of rationality, I would argue). It is an important aspect of what constitutes an irrational induction. Without it, I think your epistemology is constrained to the apple example I gave previously: what is irrational is what is impossible. I am saying: what is irrational is what is impossible and has no potential.

    But what I think you want, some way to measure the potential accuracy of beliefs, is something that cannot be given.

    To a certain degree, I agree with you. Potentiality in itself does not warrant a belief accurate, but the lack of potentiality warrants it necessarily inaccurate. In order for me to properly assess potentiality, I think that we ought to define the definition of possibility (define what it means to experience something before), because this greatly determines what is considered abstract. So, how are you defining what "you've experienced at least once before"?

    There is no way to measure whether one plausibility is more likely than another in reality; one can only measure whether one plausibility is more rational than another, by examining the chain of reason it's built on.

    I think you are wrong, but actually right. I think we can most definitely compare plausibilities in terms of induction hierarchies within it--not in terms of probabilistic quantitative likelihoods. But before I can get into that, I need to do some defining. First, I need to define the relations within the induction hierarchies, so here's how I will be defining it (all of which are open to redefining if you would like):

    The Induction = The induction being proposed.
    The grounding inductions = The inductions that The Induction is contingent on, which ground it to the subject (derive back to the subject).
    The induction hierarchy = The Induction considered with respect to its grounding inductions, which can be considered a holistic analysis of The Induction.
    Components = The distinct claims within The Induction (more on this later).
    Characteristic = An attribute or descriptor within a component (more on this later).

    To summarize what is defined above, The Induction is simply the actual induction that the subject is making, whereas the grounding inductions are, as we previously discussed, what the subject will consider in a holistic analysis of The Induction. The induction hierarchy should be pretty self-explanatory, as it is that holistic analysis of The Induction. The components are where it gets interesting. The components are what distinctively make up The Induction, and the characteristics are, quite frankly (not to use the word to explain itself, but I am definitely about to do that), the characteristics of the components.

    Let's go through some examples real quick. So, for all intents and purposes right now, let's consider the induction hierarchy as a horizontal holistic analysis, like this:

    possibility -> possibility -> plausibility

    The two possibilities would be the grounding inductions, the plausibility The Induction, and the whole thing is the induction hierarchy. The components of The Induction are going to be formatted like this (I just made it up, no real rhyme or reason):

    possibility -> possibility -> plausibility: (component1, component2, ...)

    I am merely separating the components from The Induction with a colon and enclosing them in parentheses. Also, I will put distinctive knowledge, although it isn't an induction, in the chain (for consistency) like this:

    [distinctive knowledge1, distinctive knowledge2, ...] - possibility -> possibility -> plausibility: (...)

    Note that I am not claiming that distinctive knowledge is a part of the induction hierarchy, just that it is grounds for it (one way or another). Also, the characteristics are within the components, so I won't use any special characters for those; instead, I am going to bold them.

    Now that that is out of the way, let's dive in! Let's use our favorite example: unicorns (: . Let's say I claim this:

    1. There are horses (distinctive knowledge)
    2. There are horns (distinctive knowledge)
    3. It is possible for animals to evolve into having horns (evolution) (possibility)
    4. It is plausible that a horse with a horn could exist (plausibility)

    Now, we can map this into our induction hierarchy like this:

    [horses, horns] - evolution -> unicorn

    But we can go deeper than this with components:

    [horses, horns] - evolution -> unicorn: (horned horse)

    The components are the specific distinctive claims within The Induction itself. In this case, I limited the claim to a horned horse: that is the sole component of my induction and the characteristic is horned. To really illuminate this, let's take a similar claim:

    1. There are horses (distinctive knowledge)
    2. There are horns (distinctive knowledge)
    3. It is possible for animals to evolve into having horns (evolution) (possibility)
    4. It is plausible that a horse with a horn and the ability to turn invisible could exist (plausibility)

    I can map this one like so:

    [horses, horns] - evolution -> unicorn: (horned horse, invisibility capabilities)

    Now there are two components to my inductive claim. I think that this is incredibly useful for comparing two plausibilities. At first, I thought I could utilize the sheer quantity to determine the cogencies with respect to one another. I was wrong; it gets trickier than that, because the components themselves are also subject to an induction hierarchy within themselves. I can claim that it is possible for an animal to evolve into having a horn, but I cannot claim that an animal has evolved into being invisible (assuming we aren't talking about camo but actual invisibility), so the components themselves are not necessarily as cogent as each other. Therefore, I must take this into consideration.

    [horses, horns] - evolution -> unicorn: (horned {possible characteristic} horse)
    [horses, horns] - evolution -> unicorn: (horned {ditto} horse, invisibility {plausible characteristic} capabilities)

    Therefore, #1 is more cogent than #2, not due to the sheer quantity of components, but due to the quantity in relation to an induction hierarchy within the component itself. In other words, a plausibility that has one component based off of a possible characteristic is more cogent (which doesn't mean it is cogent) than one that has a component based off of a plausible characteristic.

    For example:

    [horse, horns] - evolution -> unicorn: (horned horse, has scaly skin)
    [horse, horns] - evolution -> unicorn: (horned horse, invisibility capabilities)

    The first is more cogent than the second because we can be more detailed with the components like this:

    [horse, horns] - evolution -> unicorn: (horned {possible characteristic} horse, has scaly skin {possible characteristic})
    [horse, horns] - evolution -> unicorn: (horned {possible char} horse, invisibility capabilities {plausible char})

    Therefore, #1 is more cogent than #2 when analyzed from the perspective of quantities (which are equal in this case) and in relation to the type of induction each characteristic is. So:

    [horse, horns] - evolution -> unicorn: (horned {possible characteristic} horse, has scaly skin {possible characteristic})
    [horse, horns] - evolution -> unicorn: (invisibility capabilities {plausible char})

    Even though #2 has only one component, that component is a plausible characteristic while #1 has two possible characteristics--therefore, #1 is more cogent because a possibility is more cogent than a plausibility. However, if it were the case that plausibility #1 had 3 plausible characteristics while #2 had 2 plausible characteristics, then #2 would be more cogent. I am simply applying the same induction hierarchy rules a step deeper to analyze plausibilities. When I state that a component contains a "possible characteristic", note that I am not trying to claim that that characteristic is possible with respect to the subject it is describing; I am merely distinguishing characteristics that have been experienced before from ones that haven't been (some are just figments of our imagination, quite frankly). However, it isn't just about the relation to an induction hierarchy within the component itself: it is also about the quantity, but the quantity is always second (subordinate) to the consideration of the relation. For example:

    [horse, horns] - evolution -> unicorn: (horned {possible characteristic} horse, has scaly skin {possible characteristic})
    [horse, horns] - evolution -> unicorn: (horned {possible characteristic} horse)

    First we consider the relation in terms of the characteristics within the components, as it takes precedence over quantity, and we find that both claims utilize possible characteristics. Now, since they are equal in relation, we must consider the quantity: #1 has two components while #2 has one. We must keep in mind at all times that these are components of a plausibility and, therefore, a plausibility with more components of the same induction type is less cogent than one that has fewer of that type. This is because the more components I add, the more speculation I am introducing and, most importantly, in this case, I am adding more of the same type of speculation. Therefore, #2 is more cogent than #1.
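    The comparison rules above can be sketched as a small comparator. This is my own hedged formalization of Bob's procedure (the names `cogency_key` and `more_cogent` are mine, not from the thread): the induction type of the characteristics takes precedence, and, among claims whose characteristics are of the same type, fewer components means less speculation and therefore more cogency.

    ```python
    # Lower rank = more cogent induction type for a characteristic.
    RANK = {"possible": 0, "plausible": 1}

    def cogency_key(components):
        """components: list of characteristic types, e.g. ["possible", "possible"].
        Sort key: weakest characteristic type first (relation), then component
        count (quantity). A smaller key means a more cogent plausibility."""
        worst = max(RANK[c] for c in components)
        return (worst, len(components))

    def more_cogent(a, b):
        """Return "a", "b", or "tie" for two plausibilities' component lists."""
        ka, kb = cogency_key(a), cogency_key(b)
        if ka < kb:
            return "a"
        if kb < ka:
            return "b"
        return "tie"

    # (horned horse, scaly skin) vs (invisibility): two possible
    # characteristics beat one plausible characteristic.
    print(more_cogent(["possible", "possible"], ["plausible"]))   # a
    # Same characteristic types: the claim with fewer components wins.
    print(more_cogent(["possible", "possible"], ["possible"]))    # b
    ```

    Tuple comparison does the work here: relation is the first element of the key, so quantity is only ever consulted as a tiebreaker, exactly as the "quantity is always subordinate to relation" rule requires.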

    I hope that serves as a basic exposition of what I mean by "comparing plausibilities".

    This is because the nature of induction makes evaluation of its likelihood impossible by definition

    We aren't really using likelihoods to compare plausibilities and, if we are, then it is a qualitative likelihood of sorts. I am going to stop here as this is getting quite long (:

    I look forward to hearing from you,
    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,

    I think at this point to construct potentiality as a viable term it will need to

    a. Have a clear definition of what it is to be applicably known.
    b. It must have an example of being applicably known.
    c. Serve a purpose that another applicably known term cannot.

    I appreciate that you put your concerns (with respect to potentiality) in such a concise manner, as it really helps me, on the flip side, hone in on what I am trying to say. I've never been the best at explanations. So thank you! I will attempt to address this in my post hereafter.

    I think, upon further reflection, we are both conflating potentiality and possibility to a certain extent in the process of trying to dissect the colloquial use of "possibility". Potentiality is "what is not contradicted in the abstract", whereas possibility is "what has been experienced before". When you define possibility in that manner, I think you are implicitly defining it as "I've experienced X before, because I've experienced X IFF X==X". Therefore, assuming we don't get too nitpicky with a stricter comparison X===X, possibility is like "I've experienced an Orange before, because I've experienced an Orange IFF 'Orange'=='Orange'". Therefore, when you say:

    So, if you have all of those answers, then you can state, since it is possible to line up a candy bar in X manner, then it is possible that a candy bar will be able to be lined up if X manner is repeated. Because there is no claim that the candy bar should not be able to stand if X manner is repeated, it stands to reason that if we could duplicate X manner many times, say 3,000, the candy bars would stand aligned. But, if we've never aligned a candy bar one time, we don't applicably know if it's possible

    You are stating it is possible to line up X manner repeated because "You've experienced 'X manner repeated' before, because you've experienced 'X manner' IFF 'X manner repeated' == 'X manner'". But that IFF does not hold, just as 'X + 1' != 'X'. Even if you had experienced lining up 2,999 of those particular candy bars in question, and you knew all the other things you mentioned were possible (such as that aligning candy bars is possible, horizontally lined up, etc), you would not be able to claim, according to your definition, that it is possible to line up 3,000. What is missing here, and what I think you are also trying to maintain, is potentiality: the abstract consideration. What you claimed is correct, but it is because you abstractly determined, via mathematical operations of repetition, that there is the potential for lining up 3,000 candy bars. Likewise, when you define impossibility in this manner:

    Applicable impossibility, is found when new applicable knowledge contradicts our previous possibilities.

    What you are stating is the converse of possibility, something like "I've experienced X contradict previous experience Y, IFF X disallows Y". This would directly entail that you have to directly experience the converse, such that "I've experienced X before, which is contradicted by this experience Y, therefore X is impossible". Notice this also disallows abstract consideration. It is:

    "I've experienced a cup holding water, therefore it is possible for a cup to hold water"
    "I'm now experiencing cups not being able to hold water, therefore it is impossible for them to hold water"
    "The most recent experience out of the two takes precedence"

    But then I think you introduce potentiality here into impossibility:

    Likewise, without ever experiencing it, I can hold that it is irrational to believe that one can fit 7,000 2 in long candy bars, side by side long ways, within 1,000 feet (because, abstractly, 1,000 feet can only potentially hold 6,000 2 inch candy bars side by side).

    There is an asymmetry between possibility and impossibility in your usage of the terms: the former allows no abstract consideration while the latter does (aka, the latter allows potentiality as a consideration whereas the former does not). What I understand you to hold here is that you can hold that it is impossible to fit 7,000 2 in long candy bars, side by side long ways, within 1,000 feet because you have abstractly considered its lack of potential. You have not determined this based off of "I've experienced the converse of X, which contradicts Y"; therefore, you haven't determined it an "impossibility" or "possibility", as both are contingent on experiences. No, you did not utilize anything except the abstract induction of mathematical operations to warrant it impossible (I'd say you actually warranted it, more specifically, as lacking potential). Admittedly, I have also been conflating potentiality and possibility in our discussion because they are hard to separate. But they are two distinct things. Yes, I am still utilizing experience to do math in the first place, but I am not experiencing the direct converse for something to be considered lacking potential. And, according to your terms, I am also not stating that "I've experienced X before, which contradicts Y". I am stating "I've experienced X before, and the extrapolation of X contradicts Y in the abstract". For example, consider the following:

    I claim something is either (1) green, (2) not green, or (3) other option

    This does, eventually, boil down to the law of noncontradiction, but, in the immediate, it is the law of excluded middle. What I am trying to explicate is that the rejection of #3 as a "possibility" isn't experientially based--as in, I am not negating the usage of #3 in terms of "I've experienced X, which contradicts Y". I am considering this purely in the abstract and rightfully concluding it cannot have any potential to occur. The reason this feels like a sticky mess to me, and maybe for you too, is that this is traditionally how "possibility" was also used: it had multiple underlying meanings.

    So let's go back to this:

    I think at this point to construct potentiality as a viable term it will need to

    a. Have a clear definition of what it is to be applicably known.
    b. It must have an example of being applicably known.
    c. Serve a purpose that another applicably known term cannot.

    A is:

    "what is not contradicted in the abstract"

    I don't think abstraction has to be directly applicably known (as if I had to go test, every time, the usage of mathematical operations past what has been previously experienced), but I think B is:

    Abstraction is the distinctive knowledge, which is applicably known to a certain degree (i.e. I applicably know that my perceptions pertain to impenetrability and cohesion, etc), that is inductively utilized to determine potentiality.

    C is:

    The defining of "possibility" as "I've experienced X before, because I've experienced X IFF X==X" removes the capability for the subject to make any abstract determinations, therefore potentiality is a meaningful distinction not implemented already in possibility (and likewise for impossibility).

    I think that this is a good start to spark further conversation, so I think we can revisit some of the other things you demonstrated in your post after we find some common ground on the aforementioned.

    I look forward to hearing from you,
    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,

    Absolutely fantastic deep dive here Bob. I've wanted for so long to discuss how the knowledge theory applies to math, and it's been a joy to do so. I also really want to credit your desire for "potentiality" to fit in the theory. It's not that I don't think it can, I just think it needs to be more carefully defined, and serve a purpose that cannot be gleaned with the terms we already have in the theory. Thank you again for the points, you are a keen philosopher!

    Thank you Philosophim! You are a marvelous philosopher yourself! I am also thoroughly enjoying our conversation. I agree in that our dispute is really pertaining, at a fundamental level, to two concepts: potentiality and math.

    I have been thinking about this for some time. I like the word "potential". I think it's a great word. The problem is, it comes from a time prior to having an assessment of inductions. Much of what you are describing as potential is a level of cogency that occurs in both probability and possibility. The word potential in this context is like the word "big". It's a nice general word, but isn't very specific, and is used primarily as something relative within a context.

    I agree, I definitely need to define it more descriptively. However, with that being said, at a deeper level, the term possibility is also like the word "big": it is contingent on a subjective threshold just like potentiality. Although I like your definition of it (what has been experienced once before), that very definition is also utterly ambiguous (on a deeper look). Just as I can subjectively create a threshold for when something is "big", which you could disagree with (cross-referencing your own threshold), I also subjectively create a threshold for what constitutes "experiencing it before". Furthermore, I also subjectively create a threshold for what constitutes having the potential to occur. I think we can definitely get further into the weeds about "possibility" and "potentiality", but all I am trying to point out here is that their underlying structure is no different.

    Logically, I can only say inductions are more cogent, or rational than another.

    I agree, I think potentiality is an aspect of rationality. If it has no potential, just like if it isn't possible, then it is irrational. Potentiality isn't separate from rationality (it is a part of rational thinking).

    I have absolutely no basis to measure the potential of an induction's capability of accurately assessing reality

    The basis is whether you think it aligns accurately with your knowledge. For example, although this may be a controversial example as we haven't hashed out math yet, I can hold that, even though I haven't experienced it, lining up (side by side) 2 in long candy bars for 3,000 feet has the potential to occur because it aligns with my knowledge (i.e. I applicably know that there is 3,000 feet available to lay things on and I applicably know there are 2 in long candy bars); however, most importantly, according to your terminology, this is not possible since I haven't experienced it before. Likewise, without ever experiencing it, I can hold that it is irrational to believe that one can fit 7,000 2 in long candy bars, side by side long ways, within 1,000 feet (because, abstractly, 1,000 feet can only potentially hold 6,000 2 inch candy bars side by side). Yes, there is a level of error (mainly human error) that needs to be accounted for and, thus, it is merely an ideal. But, nevertheless, I can utilize this assessment of potentiality to determine which is more cogent and which not to pursue (although both are not possibilities--as of yet--I should not sit there and try to fit 7,000 2 in long candy bars--side by side--within 1,000 ft since I already know it has no potential). Notice though, and this is my main critique, that the use of solely possibility (in your terms) within your epistemology strips the subject of the capability of making such a distinction (both are simply not possible, without further elaboration).
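    The abstract arithmetic behind that potentiality judgment can be checked in a few lines. This is my own sketch (the helper names `max_bars` and `has_potential` are mine, for illustration): 1,000 feet is 12,000 inches, so at most 6,000 2-inch bars can fit side by side long ways, which is why the 7,000-bar claim lacks potential before anyone has ever laid a bar down.

    ```python
    INCHES_PER_FOOT = 12
    BAR_LENGTH_IN = 2

    def max_bars(feet: int) -> int:
        """Upper bound on 2-inch bars laid end to end within `feet` feet."""
        return (feet * INCHES_PER_FOOT) // BAR_LENGTH_IN

    def has_potential(claimed_bars: int, feet: int) -> bool:
        """A claim has potential only if it doesn't exceed the abstract bound."""
        return claimed_bars <= max_bars(feet)

    print(max_bars(1000))             # 6000
    print(has_potential(7000, 1000))  # False: an irrational induction
    print(has_potential(18000, 3000)) # True: 3,000 ft can hold up to 18,000 bars
    ```

    The point of the sketch is that neither result required experiencing the converse; both fall straight out of extrapolating known measurements in the abstract.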

    Much of what you are describing as potential, are a level of cogency that occurs in both probability, and possibility

    I am failing to understand how this is the case. Potentiality is the component of the colloquial use of "possibility" that got removed, implicitly, when your epistemology refurbished the term. Therefore, it does not pertain, within your terms, to possibility directly at all. Yes, it is true that potentiality branches out to plausibility, possibility, and probability. But that is because it is a requisite (if it has no potential, it necessarily gets redirected to the irrational pile of claims). Something can't be plausible if it can be proven to have no potential (and it doesn't necessarily have to be "I've experienced the exact, contradictory event to this claim, therefore it is an irrational induction": I don't have to experience failing to fit 1,000,000 5 ft bricks into a 10 x 10 x 10 room to know that it is an irrational inductive belief). Moreover, something can't be probable (with an actual denominator) if it doesn't have potential. And, finally, it can't be possible (you couldn't have experienced it before) if it has no potential (and if you did experience it, legitimately, then it has potential). I think the main issue you may be having is that your new definition of possibility implicitly stripped this meaningful distinction out of "possibility" in favor of a new, less ambiguous term. However, now we must determine, assuming you agree with me, how to implement this distinction back into the epistemology. Otherwise, the subject is incredibly limited in what they can meaningfully induce about the world.

    but I cannot use it as anything more than that before it turns into an amorphous general word that people use to describe what they are feeling at the time.

    I agree: people could use it with no real substance. However, this is also true for possibility. I could make subjective thresholds for what constitutes "experiencing something before" that renders possibilities utterly meaningless. I think "rationality" isn't merely determining something a possibility, plausibility, or any other term: it is also about how one reasoned their way into the thresholds for those terms. I can dereference any term into a meaningless distinction, but how can we keep it meaningful for all subjects when it isn't a rigid distinction? I think we just have to agree, as two subjects conversing, on the underlying reasoning behind our subjective thresholds: that is rationality (what we both constitute as valid reasoning).

    Now a word which could describe a state of probability or possibility, becomes an emotional driving force for why we seek to do anything.

    I see where you are coming from and you are totally correct: people can de-value anything. However, I don't see how it is actually a probability or possibility: only that the distinction between what is irrational and rational (rational being probability, possibility, and plausibility) necessarily involves potentiality (to one degree or another). All three terms within rational beliefs (not considering which is more rational than another, which could technically make a rational belief actually irrational if one determines another rational belief to be a better choice) inherit from this concept of potentiality: it is a requisite.

    I could hold an irrational belief, and say its because its potentially true.

    If we are defining an irrational belief as what has no potential to be true, then this statement is an irrational belief, within our subjective determination of what the term "irrational belief" should imply, because it directly contradicts the definition.

    Potential in this case more describes, "I believe something, because I believe something (It has potential)."

    Ah, I see. This is what I was referring to a while ago (in our posts): people tend to make an illegitimate jump where they claim that "since it has potential, it is possible, therefore I believe it". This is not necessarily true though. Honestly, your definition of possibility as "experiencing it once before" is brilliant for this very reason: something can have potential yet never have been experienced, and therefore it isn't possible (yet). Consequently, merely claiming "it has potential, therefore I believe it is true" is irrational because, rationally speaking, something can't be constituted as "true" if it isn't first possible. Potentiality doesn't pertain to the "truth" of the matter, just a requisite determining what one should rationally not pursue. It is a deeper level of analysis, so to speak, that can meaningfully allow subjects to reject other peoples' claims just like what you are describing.

    Without concrete measurement, it can be used to state that any belief in reality could be true.

    Not everything could be true. Firstly, not everything is possible (because we either (1) haven't experienced it or (2) have experienced contradictory events to the claim). Secondly, not everything has potential (because we may have acquired enough knowledge to constitute it as not having the capability to occur). Admittedly, potentiality and possibility are incredibly similar, and that's why, traditionally, they are but one term. However, potentiality is a broader claim, less bold and assertive, than possibility (if we define the latter as having experienced it before). Now, within this new terminology, we can boldly and assertively claim something is possible (assuming we agree on the subjective thresholds in place) because we have experienced it before. In regards to potentiality, we aren't boldly claiming that it can occur, just that there is potential for it to occur. This is more meaningful in terms of negation than positive claims: we can meaningfully claim that something is irrational if it has no potential (assuming the subjective thresholds are agreed upon, like everything else). It isn't as meaningful in terms of two things that both have potential, and that's where the other terms come into play, but they only come into play once it is accepted that the belief has potential (that's why it is a requisite).

    I think I'm going to stick with evaluating inductions in terms of rationality, instead of potentiality.

    That is absolutely fine! My intention is not to pressure you into reforming it, but I do think this is a false dichotomy: it assumes potentiality is a separate option from rationality. Potentiality, and its consideration, is engulfed within rational thinking, and the negation thereof is why it becomes irrational. We can't claim that something with no potential is irrational unless we are also claiming that, if it does have potential, it is rational to continue the analysis.

    So earlier, I was trying to explain that math was the logical conclusions of being able to discretely experience. I remember when I learned about mathematical inductions, I thought to myself, "That's not really an induction." The conclusion necessarily follows from the premises of a mathematical induction. I checked on this to be sure.

    "Although its name may suggest otherwise, mathematical induction should not be confused with inductive reasoning as used in philosophy (see Problem of induction). The mathematical method examines infinitely many cases to prove a general statement, but does so by a finite chain of deductive reasoning involving the variable n, which can take infinitely many values."
    https://en.wikipedia.org/wiki/Mathematical_induction
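    For reference, the deductive schema the quote is describing can be written out explicitly; nothing in it is probabilistic:

```latex
\[
\bigl( P(1) \;\wedge\; \forall n \,\bigl( P(n) \rightarrow P(n+1) \bigr) \bigr)
\;\Longrightarrow\; \forall n \, P(n)
\]
```

    Given the base case and the inductive step, the conclusion follows for every n by a finite chain of deductions.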

    This is true, but that is with respect to the mathematical operations, not the numbers themselves. I can say it is possible to perform addition because I have experienced it before; I cannot say that it is possible to add 3 trillion + 3 trillion because I haven't experienced doing that before with those particular numbers: I am inducing that it still holds based off of the possibility of the operation of addition. But, yes, you are correct in the sense that philosophical induction is not occurring with respect to the operations themselves; I would say it is occurring at the level of the numbers.
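    As a side note on the 3 trillion example: the operation can be mechanically carried out on numbers neither of us has "experienced". A sketch (Python's integers are arbitrary-precision, so the size poses no problem):

```python
# Addition applied to numbers far outside everyday experience.
a = 3_000_000_000_000   # 3 trillion
b = 3_000_000_000_000
print(a + b)            # 6000000000000 (6 trillion)
```

    Whether performing it this way counts as "experiencing" those numbers is, of course, exactly the philosophical question at issue.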

    N + 1 = F(N) is a logical process, or rule that we've created. Adding one more identity to any number of identities, can result in a new identity that describes the total number of identities. It is not a statement of any specific identity, only the abstract concept of identities within our discrete experience. Because this is the logic of a being that can discretely experience, it is something we can discretely experience.

    We could also state N+1= N depending on context. For example, I could say N = one field of grass. Actual numbers are the blades of grass. Therefore no matter how many blades of grass I add into one field of grass, it will still be a field of grass. I know this isn't real math, but I wanted to show that we can create concepts that can be internally consistent within a context. That is distinctive knowledge. "Math" is a methodology of symbols and consistent logic that have been developed over thousands of years, and works in extremely broad contexts.

    I agree, but this doesn't mean it holds for all numbers. We induce that it does, but it isn't necessarily the case. We assume that the limit of 1/N as N approaches infinity equals 0, but we don't know if it is really even possible to actually approach the limit infinitely to achieve 0. Likewise, we know that if there are N distinct things, then N + 1 will hold, but we don't know if N distinct things are actually possible (that is the induction aspect, which I think you agree with me on, although I could be wrong).
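    For what it's worth, the standard definition of a limit avoids any completed infinite process; the claim is a statement about every finite tolerance:

```latex
\[
\lim_{n \to \infty} \frac{1}{n} = 0
\quad \text{means} \quad
\forall \varepsilon > 0 \;\; \exists N \;\; \forall n > N : \;
\left| \frac{1}{n} - 0 \right| < \varepsilon .
\]
```

    So the limit statement itself never requires "actually approaching infinitely"; each instance of it is finitely checkable.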

    I don't believe you did in this case. If you recall, thoughts come after the realization we discretely experience. The term "thought" is a label of a type of discrete experience. I believe I defined it in the general sense of what you could discretely experience even when your senses were shut off. And yes, you distinctively know what you think. If I think that a pink elephant would be cool, I distinctively know this. If I find a pink elephant in reality, this may, or may not be applicably known. Now that you understand the theory in full, the idea of thoughts could be re-examined for greater clarity, definition, and context. I only used it in the most generic sense to get an understanding of the theory as a whole.

    Yes, I may need a bit more clarification on this to properly assess what is going on. Your example of the pink elephant is implying to me something different than what I was trying to address. I was asking about the fundamental belief that you think, and not a particular piece of knowledge derived from that thought (in terms of a pink elephant). I feel like, so far, you are essentially just stating that you think, therefore you think. I'm trying to assess deeper than that in terms of your epistemology with respect to this concept, but I will refrain, as I have a feeling I am simply not understanding you correctly.

    I think again this is still the chain of rationality. A probability based upon a plausibility, is less cogent than a probability based on a possibility.

    Yes, but your essays made it sound like probability is its own separate thing and then you can mix them within chains of inductions. On the contrary, I think that "probability" itself is actually, at a more fundamental level, contingent on possibility and plausibility for it to occur in the first place.

    You distinctively know that if you travel 30 miles per hour to get to a destination 60 miles away, in 2 hours you will arrive there.

    Agreed, but, depending on if I've experienced it before, it may be an induction based off of possibility or plausibility.
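    The deduced part of that claim is just the arithmetic:

```latex
\[
t = \frac{d}{v} = \frac{60 \ \text{miles}}{30 \ \text{miles/hour}} = 2 \ \text{hours}
\]
```

    The induction only enters in believing the actual trip will conform to the calculation.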

    A probability is not a deduction, but an induction based upon the limitations of the deductions we have. Probability notes there are aspects of the situation that we lack knowledge over.

    Whether it is a deduction or an induction, probabilities are derived from two separate claims that are not equally as cogent as one another. A calculation based off of a possibility is more cogent than one based off of a plausibility. Yes, this is still using the induction hierarchy, but notice it is within probabilities, which means probability itself is contingent on possibility and plausibility, while the latter two are not contingent in any way on probability.

    I look forward to hearing from you,
    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,

    Further, potentiality is not something the hierarchy can objectively measure. Let say that in a deck of 52 cards, you can choose either a face card, or a number card will be drawn next. You have three guesses. Saying number cards is more rational going by the odds. But the next three cards drawn are face cards. The deck was already shuffled prior to your guess. The reality was the face cards were always going to be drawn next, there was actually zero potential that any number cards were going to be pulled in the next three draws. What you made was the most rational decision even though it had zero potential of actually happening.

    Although I understand what you are saying, and I agree with you in a sense, potentiality is not based off of hindsight but, rather, the exact same principle as everything else: what you applicably know at the time. Prior to drawing three face cards, if you applicably know that there is at least one number card in the 52 (or that you have good reason to believe that there is one regardless of whether you directly experienced one), then there is a potential that you could draw it. Regardless of whether it is the most rational position, it is nevertheless a rational position. However, if you applicably know that there are no number cards in the 52 (or you have good reason to doubt it), then it has no potential and, therefore, it is irrational.
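    To make the odds in the card example concrete, here is a sketch, assuming a standard deck in which aces count as number cards (the example doesn't specify), giving 40 number cards and 12 face cards:

```python
from fractions import Fraction

# Standard 52-card deck; assumption for illustration: aces count as
# number cards (40 number cards, 12 face cards).
number_cards = 40
face_cards = 12
total = number_cards + face_cards          # 52

p_number = Fraction(number_cards, total)   # chance a single draw is a number card
p_face = Fraction(face_cards, total)

print(p_number)   # 10/13
print(p_face)     # 3/13
```

    Guessing "number card" is the rational choice by these odds, which is exactly the point: rationality is judged against what was applicably known at the time, not against the already-shuffled outcome.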

    Only this time, I didn't put any number cards in the deck, and didn't tell you. You believe I made an honest deck of cards, when I did not. You had no reason to believe I would be dishonest in this instance, and decided to be efficient, and assume the possibility I was honest. With this induction, I rationally again choose number cards. Again however, the potential for number cards to be drawn was zero.

    Again, I understand what you are saying and I agree. However, within the context (in the heat of the moment) the numbers do have the potential to be in the deck if you have assessed that your knowledge deems it so. In hindsight, which refurbishes the context (or perhaps creates a new one, depending on how one looks at it), you can now claim that there was no potentiality. But with respect to whether it had potentiality prior to this new knowledge that they lied, it is more rational to conclude that it had potentiality. I would argue, furthermore, that this assessment is actually necessary for one to even pick numbers in the first place (in terms of your example): if they don't think there is any potential for there to be a number, then they wouldn't pick numbers (and if they did, it would be irrational). Although sometimes potentiality and "possibility" (in your terms) coincide, it isn't necessarily the case that something only has potential if you have "experienced it once before".

    An induction cannot predict potentiality, because an induction is a guess about reality.

    It is a part of the guess. First I make an educated guess that there is potential for water to exist on another planet somewhere, then I guess how likely that is and, thereafter, whether it really constitutes knowledge or not (with consideration to my discrete and applicable knowledge). Potentiality is the first (or at least one of the first) considerations when attempting to determine knowledge. If the subject determines there is no potential, then they constitute any further extrapolations as irrational and thereby disband from them.

    Some guesses can be more rational than another, but what is rational within our context, may have zero potential of actually being

    It isn't about what can potentially occur in light of new evidence afterwards, it is about what can potentially occur in light of the current evidence. It is perfectly fine if we find out later that what we thought had no potential actually does have some, or vice-versa. This is how it is for all contexts and even the induction hierarchies. Potentiality is a guide to what one should pursue (as one of the first considerations), and I would argue we all implicitly partake in it: that's why, if you can convince someone that they hold a contradiction, they will feel obligated to refurbish their beliefs (most of the time). It is the fact that they know they are holding an irrational belief, due to the potentiality being nonexistent, that motivates their will to change. This would be, colloquially speaking, "possibility". I agree that this may just be a semantic difference, but I think defining possibility as "what one has experienced once before" eliminates the other meaningful aspect of the term (potentiality).

    It is less uncertainty, but has no guarantee

    Nothing is guaranteed. It could very well be that in five years we will look back, in hindsight, and "know" our understanding of induction hierarchies was utterly wrong (with consideration to new evidence). This doesn't mean that we can't use the induction hierarchies now, does it? I don't think so. So it is with potentiality. In my head, this would be like claiming that we can't utilize "possibilities" because, in the future, it may be the case that we find out it never actually was possible.

    For the purposes of trying to provide a clear and rational hierarchy, I'm just not sure whether potentiality is something that would assist, or cloud the intention and use of the tool.

    Personally I think it is necessary, but of course do what you deem best!

    Math is the logic of discrete experience.

    I agree for the most part: math deductions are the logic of discrete experience and we inductively apply that in the abstract. But I think the problem remains: where do mathematical inductions fit into the hierarchy?

    This is a known function. This is an observation of our own discrete experience

    It is an observation of our own discrete experience (when it is a deduction), but that doesn't exempt it from the hierarchy (when it is an induction). 1 + 2 = 3 is an observation of our own discrete experience, whereas X + Y = Z (where all of them are numbers never discretely experienced before) is based off of our own discrete experience (it's an induction, as you are probably well aware). When I state that 1 + 2 = 3, I know that these numbers are possible, whereas I don't know that is the case in terms of X + Y = Z for all numbers. Furthermore, there are actually cases where I know that they aren't possible, in the case of imaginary numbers (i), such as 1 + √-25 = 1 + 5i. We also apply math to actual infinities that may not actually exist (such as infinity and negative infinity, and even PI and E, which are irrational numbers). When we take the limit approaching infinity, are you claiming that that is an observation of our own discrete experience (or a distant extrapolation)? Therefore, the function F(N) is not an observation of our own discrete experience (that would be a deduction) but, rather, an induced function meant to predict based off of our deducible knowledge (it is literally an induction put into a predictive model). This directly implies that N in F(N) could either (1) be a possible number, (2) an applicable plausible number (with regards to your terms: has potential and can be applied but isn't proven to be possible), (3) an inapplicable plausible number (has potential and hasn't been proven to be possible but cannot be applied), or (4) an irrationally induced number (has no potential and isn't possible). I think you are right in the sense that, in the abstract, X + Y = X + Y will always hold, but saying it will always hold is an induction (it is just so ingrained, as you stated, into our discrete experience itself that we hold it dear; in my terms it is one of the most immediate things, closest to our existence).
    Most importantly, none of this exempts it from the hierarchy of inductions and, therefore, I would like to know where you would classify it?

    When I discretely experience something that I label as "thoughts" in my head, I distinctively know I have them.

    My intention is not to try and put words in your mouth, but I think you are, if you think this, obliged to admit that you and thought are distinct. I don't think you can hold the position that we discretely experience them without acknowledging this, but correct me if I am wrong. If you do think they are separate, then I agree, as I think that your assessment is quite accurate: we do apply our belief that we have thoughts to reality, because the process of thinking is a part of experience (reality). It is just the most immediate form of knowledge you have (I would say): rudimentary reason.

    Distinctive knowledge occurs, because the existence of having thoughts is not contradicted. The existence of discretely experiencing cannot be contradicted. Therefore it is knowledge.

    I agree!

    We cannot meaningfully understand what plausible probability is, without first distinctively and applicably knowing what plausibility, and probability are first.

    If I follow this logic, I still end up with a problem: without first distinctively and applicably knowing what mathematical induction is, I cannot meaningfully understand what a probability is. Therefore, why isn't mathematical induction a category on the induction hierarchy? Why only probabilities?

    Furthermore, I apologize as my term "plausible probability" is confusing: I am not referring to a chain plausibility -> probability. What I was really referring to was something we've previously discussed a bit: there are different cogencies within probabilities since they are subject, internally and inherently, to the other three categories (irrational, possibility, and plausibility). Same goes for math in general. Two separate probabilities, with the same chances, could be unequal in terms of sureness (and cogency I would say). You could have a 33% chance in scenario 1 and 2, but 1 is more sure of a claim than 2. This would occur if scenario 1 is X/Y where X and Y are possible numbers and scenario 2 is X/Y where X and Y are plausible numbers (meaning they have the potential to exist, but aren't possible because you haven't experienced them before). My main point was that there is a hierarchy within probabilities (honestly all math) as well.

    Moreover, another issue I was trying to convey is why does probability have its own category, but not mathematical inductions? I think what your "probability" term really describes, in terms of its underlying meaning, is mathematical inductions. If I induce something based off of F(N), this is no different than inducing something off of 1/N chances, except that, I would say, anything induced from the former is more cogent. This is because if I base a belief on there being a 90% chance, that will always be less certain (because it is a chance) than anything based off of F(N) (directly that is). For example, if I induce that I should go 30 miles per hour in my car to get to my destination, which is 60 miles away, in 2 hours, that is calculated with numbers that are a possibility or plausibility (the mathematical operations are possible, but not necessarily the use of those operations on those particular numbers in practicality). But this is more cogent than an induction that I should bet on picking a number card out of a deck (no matter how high the chances of picking it) because the former is a more concrete calculation to base things off of (it isn't "chances", in the sense that that term is used for probability). Don't get me wrong, the initial calculation, because it is also math, of probability is just as cogent as any other mathematical operation (it's just division, essentially), but anything induced from that cannot be more cogent than something directly induced from a more concrete mathematical equation such as 60 miles / 30 miles per hour = 2 hours. Notice that these are both inductions but one doesn't really exist in the induction hierarchies (mathematical inductions) while the other is the most cogent induction (probability). Why?

    1. Its plausible the dark side of the moon is on average hotter than the light side of the moon, therefore it is probable any point on the dark side of the moon will be hotter than any point on the light side of the moon.
    2. Its possible the side of the moon facing away from Earth is on average colder than the light side of the moon, therefore it is probable any point on the dark side of the moon will be colder than any point on the light side of the moon.
    3. The dark side of the moon has been measured on average to be cooler than the light side of the moon at this moment, therefore it is probable any point on the dark side of the moon will be colder than any point on the light side of the moon.

    This may just be me being nitpicky, but none of those were probable (they are not quantitative likelihoods, they are qualitative likelihoods). If you disagree, then I would ask what the denominator is here. But my main point is that there is a 4th option you left out: if I can create a mathematical equation that predicts the heat of a surface based off of its exposure to light, then it would be more cogent than a probability (it is a mathematical induction based on a more concrete function than probability) and yet mathematical inductions aren't a category.

    Furthermore, #2 isn't possible unless you've experienced the side of the moon facing away from the earth being colder than when you experienced it on the light side. This is when we have to consider what we mean by "what we have experienced before". This is more of potentiality than possibility (in your terms). I think that your use of "possible" is more in a colloquial sense in #2.

    As you can see, intuitively, and rationally, it would seem the close the base of the chain is to applicable knowledge, the more cogent the induction.

    I agree!

    Look forward to hearing from you,
    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,

    An inapplicable plausibility is different enough from a plausibility to warrant a separate identity in the hierarchy.

    It is completely up to you, but I think that inapplicable plausibilities should be a plausibility; It is just that, in order to avoid contradictions, "plausibility" shouldn't be defined as what can be applicably known, just what one believes is "true" (or something like that). What can be applicably known would, therefore, be a subcategory of plausibility, namely "applicable plausibilities", and what cannot be applicably known would be another subcategory, namely an "inapplicable plausibility". On a separate note, the potentiality of a belief would be what differentiates irrational inductions from all other forms (as in it is irrational if it has no potential). And it is not necessarily always the case that a belief that cannot be applied to reality has no potential to occur and, thusly, there is a meaningful distinction between irrational inductions and inapplicable plausibilities (as in the latter is guaranteed to have potential, but cannot be applied). I just think that a contradiction arises if you define "plausibility" as always applicable (can be applied). You could, on the flip side, decide that the belief in what cannot be applied to reality is irrational and, consequently, that would make it an irrational induction.

    This is correct. An irrational induction is a belief that something exists, despite applicable knowledge showing it does not exist.

    Fair enough.

    What does indirect application to reality mean? I only see that as an inductive belief about reality. This isn't an applicable knowledge claim, so there is no application to reality. If there are no sentient beings, then there is no possibility of application knowledge.

    What I meant by "indirect" and "direct" seems to be, in hindsight, simply an inductive belief about reality (you are right). But my point I was trying to convey is that we produce meaningful probabilistic models based off of the idea that something is in multiple states at once, which doesn't really abide by the law of noncontradiction in a traditional sense at least.

    Superpositioning, to my understanding, is essentially probability. There are X number of possible states, but we won't know what state it will be until we measure it. The measurement affects the position itself, which is why measuring one way prevents us from measuring the other way. You won't applicably know the state until you apply that measurement, so the belief in any particular outcome prior to the measurement would be an induction.

    I agree. I was merely conveying that, to build off of what you said here, we don't assume the law of noncontradiction in terms of some quantum "properties" (so to speak), but the contrary. For example, a 6-sided die is considered to have 6 states. Even when the subject isn't observing the die, they will assume the law of noncontradiction: it is in one of the 6 states. Whereas, on the contrary, electrons can have two spin states: up or down. However, unlike the previous 6-sided die example, the subject, if they are quantum inclined (:, will assume the electron is equally likely in both states (thus, not assuming the law of noncontradiction in the same sense as before).
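    A rough sketch of that contrast, using the standard textbook numbers (a fair die assigns each face 1/6 as a measure of our ignorance of its one actual state; an equal superposition assigns each spin state an amplitude of 1/sqrt(2), hence probability 1/2 by the Born rule):

```python
import math

# Classical die: assumed to be in exactly one of six states;
# the 1/6 per face only reflects our ignorance of which one.
die_probs = [1 / 6] * 6

# Qubit in equal superposition: amplitude 1/sqrt(2) per spin state;
# Born rule gives probability = |amplitude|^2 for each outcome.
amplitude = 1 / math.sqrt(2)
p_up = amplitude ** 2
p_down = amplitude ** 2

print(sum(die_probs))   # approximately 1.0 (floating point)
print(p_up, p_down)     # approximately 0.5 each
```

    The probabilities look alike on paper; the difference being pointed at is in what the system is taken to be before measurement.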

    Great! We might be nearing a limitation for where I've thought on this.

    I think that, to supplement what you stated, possibility really isn't defined as clear as it should be. Instead of what "has been experienced before", it should be what "is similar enough to what one has experienced before". This is what I mean by rigidity (although I understand you agree with me on it being elastic): "possibility", as defined as what has been experienced before, implies (to me) that you have to experience it once before in a literal, rigid, sense. On a deeper level, I think it implies that experiences tend to be more like universals and less like particulars. For me, defining it in the previously mentioned refurbished way implies that subjective threshold.

    Further, I think that the terminology is still potentially somewhat problematic. Firstly, your essays claim that probabilities are the most cogent, yet they actively depend on possibilities. There is no validity in probabilities, or honestly math in its entirety, if we weren't extrapolating it from possibilities (numbers in actuality, in reality). To say that the probability of 1/52 is more cogent than a possibility seems wrong to me, as I am extrapolating that from the possibility of there being 52 cards. Maybe it is just a difference between cogency and sureness, but I am more sure that 52 cards are possible than any probability I can induce therefrom. Secondly, it seems a bit wrong to me to grant probabilities their own category when there can be plausible probability claims and possible probability claims. For example, it becomes even more clear (to me) that I am more sure of the possibility of 52 cards when I consider it against specifically 1/N probability where N is a quantity that I haven't experienced in actuality (in reality). 1/N would be a probability that is really just a "plausible probability", contrary to a "possible probability" which would be a quantity, such as 52, that I can claim is possible. One is, to me, clearly a stronger claim than the other. Furthermore, probabilities are really just a specific flavor of mathematical inductions, which it seems odd that they have their own category yet mathematical inductions aren't even a term. For example, if I have a function F(N) = N + 1, this is a mathematical induction but not a probability. So, is it a plausibility? Is it a possibility? Depends on whether N is something experienced before or not (or how loosely we are defining similar enough). Probabilities are considered the most cogent, but is 1/N probability, where N is an unexperienced number in actuality, really more cogent than F(N), where N is a number experienced in actuality? I think not. 
On the flip side, is F(N), where N is an unexperienced number in actuality, more cogent than a 1/N probability where N is a number experienced in actuality? I think not. What if F(N) and 1/N involve a number, N, that has not been experienced in actuality before? Are they equally as cogent? F(N) would be a plausibility, and I would say a probability too, but probability would be considered more cogent simply because it has its own term (at least that is how I am understanding it). In reality, I think mathematical inductions (which include probability) are subject to the same, more fundamental, categories: possibility, plausibility (and its subtypes), and irrational induction. Also, F(N) and 1/N, where N is unexperienced and so large it can't be applied to reality, would both be inapplicable plausibilities. Therefore, I think we are obligated to hold the position that an inapplicable plausibility mathematical induction (such as F(N) where N is inapplicable) is less cogent than an applicable plausibility (such as: I can apply the existence of this keyboard without contradiction to reality) because, fundamentally, mathematics abides by the same rules. However, an applicable plausibility mathematical induction (such as F(N) where N is applicable) would be more cogent than probably every other non-mathematical plausibility I can come up with, because the immediateness of numbers, and their repetition, surpasses pretty much all others.

    Thirdly, it also depends on how you define "apply to reality" whether that holds true. Consider the belief that you have thoughts: is your confirmation of that ever applied to "reality"? It seemed as though, to me, that your essays were implying sensations outside of the body, strictly, which would exclude thoughts. However, the claim that you even have thoughts is a belief and, therefore, must be subject to the same review process. It seems as though your thoughts are the initial beliefs being applied to "reality", which seems to separate the two concepts; Do you applicably know that you think? I don't apply my thoughts to reality in the sense that I would about whether a ball will roll down a hill: my thoughts validate my thoughts. If my thoughts validate my thoughts, then we may have an example of one of the most knowable beliefs for the subject that is technically inapplicable. However, if we define the thinking process as an experience, then we can say it is possible because we have experienced it before. However, most importantly, that directly, and necessarily, implies that you are not thought but, rather, you experience thought. On the contrary, if you don't experience thought, and subsequently the separation between the two isn't established, then you cannot claim that your own thoughts are possible, since you are incapable of experiencing them.

    The question to you is, is it useful for you? Is it logically consistent? Can it solve problems that other theories of knowledge cannot? And is it contradicted by reality, or is it internally consistent?

    I think that it is an absolutely brilliant assessment! Well done! However, I think, although we have similar views, that there's still a bit to hash out.

    I look forward to hearing from you,
    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,

    Then what they are describing is an inapplicable plausibility. It is when you believe that something exists, but have constructed it in such a way that it cannot be applicably tested. I can see though that my language is not clear, so I understand where you're coming from. Applicable knowledge is when you apply a belief to reality and it is not contradicted. All inductions are a belief in something that exists in reality. The type of induction is measured by its ability to be applied or applicably known.

    I agree with you here, but my point was that it is an inapplicable plausibility (which means we are on the same page now, I think). A couple of posts back, you were defining "plausibility" as "the belief that distinctive knowledge that has never been applicably known, can be applicably known", and I am saying that that is an "applicable plausibility", not "plausibility" in general. I am now a bit confused, because your response to that was "In both cases, the person believes that the plausibility can be applicably known", which is why I stated that people can have plausibilities that they don't think can be applicably known. You are now saying, as far as I understand it, that if they think it can't be applicably known, then it is an "inapplicable plausibility" (I agree with that, but notice that it doesn't align with your previous definition of "plausibility", which was defined as "can be applicably known"--unless you think that "inapplicable plausibilities" are not a subcategory of "plausibility", I don't see how this isn't a contradiction).

    Upon further reflection, I think that if we define every "plausibility" that has no potential as an "irrational induction" (and, consequently, all plausibilities have potential), rather than as an "applicable/inapplicable plausibility", then I have no objections here. So, using my brick example, the claim that one can fit 2,000 5 ft bricks in a 1,000 cubic ft room is an "irrational induction" (not a "plausibility") because 1000 / 5 = 200, which necessarily eliminates any potentiality. However, I still think that there are meaningful hierarchies between claims (between plausibilities, for example) that relate to sureness, apart from cogency, and that these can serve as evaluative tools.

    Even though you did not actively think about hierarchical induction, you practiced it implicitly.

    Fair enough. But I would say that the fundamental comparison with respect to the law of non-contradiction is a valid comparison across all hierarchical chains.

    No one has ever applicably known a situation in which the something was both itself, and its negation.

    This is true, but also notice that no one has ever applicably known a situation in which, in the absence of direct observation, something necessarily was not both itself and its own negation.

    If you define something as one way, then define it as its negation, you have created a situation that can never be applied to reality.

    Let's say we have these two claims:
    1. Absent of direct observation, things abide by the law of noncontradiction.
    2. Absent of direct observation, things do not abide by the law of noncontradiction.

    Firstly, I could apply both of these indirectly to reality without any contradiction because, using the law of noncontradiction, I can create situations where the law of noncontradiction doesn't necessarily have to hold (mainly absent of sentient beings). Don't get me wrong, I agree with you in the sense that both are inapplicable plausibilities, but that is with respect to direct application. I may decide, upon assessing the state of a currently unobserved thing, that the outcome should be calculated as if its states are superpositioned (this is how a lot of the quantum realm is generally understood). This can be indirectly applied to reality without any contradiction. Or, on the contrary, I could decide the outcome should be calculated as if the states are either/or (this is generally how Newtonian physics is understood).

    If we cannot observe it, we cannot apply this to reality.

    We cannot directly apply it to reality, but we can produce meaningful calculations based off of superposed states which necessarily imply A being, only in the theoretical, in two contradictory states. Even if we could not produce meaningful calculations, it is equally as much of an "inapplicable plausibility" as claiming it does abide by the law of noncontradiction.

    Again, you are doing the practice of hierarchical induction here, whether you are aware of it or not. I don't think it's a consideration prior, but a consideration of it.

    I think that if these underlying principles, which engulf other contexts, are a consideration of it, contrary to prior to it, then you are agreeing with me that hierarchical chains, to some degree or another, can be compared. I was merely trying to distinguish between the underlying, engulfing, principles and the point at which the induction chains can no longer be fairly compared.

    It is more cogent to believe in the first plausibility than the second. We can do a little math to prove it.

    I agree with you here, but now we are getting into another fundamental problem (I would say) with your terminology: if a "possibility" is what one has experienced once before, then virtually nothing is a possibility. And, to be clearer, I think it is a much bigger disparity than your epistemology tries to imply. It all depends, though, on what one defines as "experiencing before". I don't think we experience the exact same thing very often (potentially at all) and, consequently, when one states they have "experienced that before", what they really mean is they've "experienced something similar enough to their current experience for them to subjectively constitute it as a match". For example, under your terms, the claim that "that car, which I haven't experienced run, will run, when started, because I've experienced my car run before" is not a "possibility" but, rather, a "plausibility". I think that we would both agree that that is the case. However, this directly implies that I also can't claim that "this apple is edible, although I haven't taken a bite yet, because I've experienced eating an apple before" is a "possibility": not even in the case that I have good reason to claim that this apple resembles another apple I've eaten before. Similarly, I also can't claim that it is "possible" that my car will start because I've experienced it start before because, directly analogous to the apple example, my car is not the exact same, within the exact same context, as when I experienced it start (at least, the odds are that it most definitely is not).

In more philosophical terms, the problem is that almost all experiences are of particulars, not universals. A previous experience of thing A cannot be constituted as a previous experience of B, ever, because A is a separate particular from B and, therefore, "possibilities" would be constrained to only that experience after it occurs and never before its occurrence. However, I would say that a previous experience of thing A can be constituted as a previous experience of B if it qualifies, potentially with reference to objective evidence but necessarily contingent on subjective determination, as similar enough, within the context, to a previous context. Notice how "possibility" is no longer a zero-sum game, a binary question, and, subsequently, becomes, by necessity, a matter of passing a subjectively determined threshold (which could be, in turn, based off of objective claims): this is what I would call the spectrum of sureness. It isn't a question of whether something (1) has been experienced before or (2) hasn't but, rather, a question of how sure you are of the similarity between what you just experienced and a past experience: does it constitute as similar enough? I think there is a rigidity within your epistemology that mine lacks, as I see it more as an elastic continuum of sureness. I don't know if that makes any sense or not.

    Correct, depending on the context. You do not know if people have internal monologues in their head like yourself.

    Not depending on the context, but every context that contains such a claim. Asking someone if they have internal monologue, no matter how you end up achieving it, doesn't prove it is "possible" in the sense that you "have experienced it at least once before". "Hard consciousness", as you put it, is exactly what I am trying to convey here in conjunction with your "possibility" term: by definition, I can never claim it is "possible" for someone else to have internal monologue. Even if you knew that the person could not physically lie about it, you would never be able to claim it is "possible" because you have never experienced it yourself (even if you have experienced internal monologue, you haven't experienced it particularly within them).

    We can determine a bat can think, but we can never have the experience of thinking like a bat.

    We cannot, under your terms, claim that a "bat can think", only that it is a plausibility. Even if we scanned their brains and it turned out that the necessary faculty for thought, similar to ours, exists, we would never be able to label it as a "possibility" because we have not experienced a bat thinking. This is, to a certain degree, what I was trying to convey previously: how incredibly narrow and limited "possibility" would be. It would essentially only pertain to universals (that which has an objective, or absolute--depending on how you define it--basis) or subjective universals (that which is true for all experience for a particular subject). An example of this would be numbers in terms of quantity: one abstract "thing" is the exact same experience as one abstract other "thing" because quantity is derived from the same subjective universal called differentiation. Differentiation, at its most fundamental level, is the same for all particulars for that subject--as they wouldn't even be particulars, but rather a particular, if this wasn't the case.

    I look forward to hearing from you,
    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,
    I am glad that you are feeling well!

    In both cases, the person believes that the plausibility can be applicably known.

    I don't think this is necessarily true. It depends on what you mean by "applicably known": lots of people believe in things that they claim cannot be "applicably known". For example, there are ample numbers of people who believe in an omnipotent, omniscient, etc. (I call it the "omni" for short) God and actively claim that these traits they believe in are necessarily outside of the scope of what we can "applicably" know. Another, non-religious, example is a priori knowledge: most people who claim there is a priori knowledge also actively accept that you necessarily cannot applicably (directly) know the components of it. In its most generic form, they would claim that there is something that is required for experience to happen in the first place, for differentiation to occur, but you definitely will never be able to directly "applicably" know it. I guess you could say that they are indirectly "applying" it to reality without contradiction, which I would be fine with.

    For example, I hold the law of non-contradiction as true. From this I believe it is plausible that the moon is made out of green cheese. Separately from this, I believe it is plausible that the sun is really run by a giant lightbulb at its core. The basis of the law of non-contradiction between them has no bearing on the evaluation of comparing the plausibilities.

    I think that, because the law of noncontradiction is one of the (if not the) most fundamental axioms there is, it is easy to consider it irrelevant to the comparison of two different plausibilities; however, I nevertheless think that it plays a huge, more fundamental, factor in the consideration of them. For example, if my knowledge of physics (or any other relevant subject matter) makes it "impossible" (aka has no potential to occur) for green cheese to be able to make up a moon, then, before I have even started thinking about hierarchical inductions, I have exhausted the idea to its full capacity (which, in this case, isn't much). Furthermore, if I have knowledge that both examples (the giant lightbulb and the green cheese moon) are "impossible" (have no potential to occur), then they are equally as useless as each other, but, more importantly, notice that I still compared them to a certain degree. Now, I could hold that the law of noncontradiction isn't as black and white as I presume we both think: maybe I have a warped understanding of superpositioning, for example. Maybe I believe, prior to even having the ideas of the green cheese moon or lightbulb sun, that A can be and not be at the same time as long as there is no observant entity forcing an outcome. Now, this is not at all how superpositioning works (I would say), but someone could, nevertheless, hold this position. Moreover, with the stipulation that there are no observers, even if I have solid evidence that green cheese can't make up a moon, the moon could be made of green cheese and green cheese could be unable to "possibly" make up a moon at the same time. This refurbished understanding of the law of noncontradiction poses whole new problems, but notice that it wouldn't be objectively wrong: only wrong in the sense that we don't share the same fundamental context (i.e. the same understanding of the law of noncontradiction).

    That being said, you can compare the belief in the law of non-contradiction versus the belief in its denial. If you hold the law of non-contradiction as applied knowledge, or as an induction that you believe in, you can evaluate an induction's chain, and reject any inductions that rely on the law of non-contradiction being false within its chain.

    This is, essentially, what I am trying to convey. That would be a consideration prior to hierarchical inductions and would provide an underlying basis to compare two different plausibilities. I think we do this with a lot more than just the law of noncontradiction.

    I "think" this is what you are going for. If so, yes, you can determine which inductions are more cogent by looking in its links, and rejecting links that you do not know, or believe in. But this is much clearer if you are trying to decide whether the moon is plausibly made out of green cheese, or something else, than trying to compare the moon and the sun. Does that make sense?

    Correct me if I am wrong, but I think that you are trying to convey that, once all the underlying beliefs are evaluated and coincide with the given belief in question, you can't compare two different contexts' hierarchical induction chains. I don't think this is necessarily the case either, but I want to focus on the more fundamental disputes first before segueing into that.

    My main point is that potentiality is completely removed when "possibility" is refurbished in your epistemology. The problem is that there are no distinctions between applicable plausibilities. For example, imagine I have 2,000 5 ft bricks. Now, imagine two claims: (1) "you can fit 200 of these bricks in a 10 x 10 x 10 room" and (2) "you can fit 2,000 of these bricks into a 10 x 10 x 10 room". Let's say that I've never experienced filling a room with 5 ft bricks. I think, according to your definitions, both claims would not be possibilities but, rather, applicable plausibilities, because I haven't ever experienced either (and a "possibility" is something that has been experienced before). However, I don't need to attempt to apply both directly to reality to figure out which one has the potential to occur. Even though they are plausibilities, #1 has the potential to occur (meaning that, although I could be wrong since it is an induction, all my knowledge aligns with this having the potential to occur) while #2 does not (because, assuming my math is sound, 1,000 cubic ft / 5 ft only allows for 200 5 ft bricks). So, even if I haven't directly attempted to fill a room with 200 nor 2,000 5 ft bricks, I can soundly believe that one claim is more cogent than the other, because one aligns with my current knowledge while the other does not. If we were to put them both as plausibilities, then I would say one is "highly plausible" while the other is "highly implausible" to make a meaningful distinction between the two.
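To make the brick comparison concrete, here is a minimal sketch of the potentiality check being described (the function name `has_potential` is my own invention, and the division reuses the post's rough room-size / brick-size arithmetic rather than a real 3D packing calculation):

```python
def has_potential(room_size_ft: float, brick_size_ft: float, claimed_bricks: int) -> bool:
    """A claim 'has potential' only if it does not contradict what
    current knowledge (here, simple arithmetic) already allows."""
    max_bricks = room_size_ft / brick_size_ft  # the post's 1000 / 5 = 200
    return claimed_bricks <= max_bricks

# Claim 1: fitting 200 bricks aligns with current knowledge, so it has potential.
print(has_potential(1000, 5, 200))   # True
# Claim 2: fitting 2,000 bricks contradicts the arithmetic, so it has none.
print(has_potential(1000, 5, 2000))  # False
```

Both claims remain inductions, but the check separates the "highly plausible" one from the "highly implausible" one before either is ever applied directly to reality.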

    Another fundamental problem is what constitutes experiencing something before? How exact of a match does it have to be? If it is an exact match, then we hold very few possibilities and a vast majority of our knowledge is ambiguously labeled as "plausibilities". For example, I have internal monologue. I think that it is "possible" (in accordance with my use of the terms) that other people have internal monologues too; however, I have never experienced someone else having an internal monologue, therefore it isn't a "possibility" in accordance with your terms. I think the obvious counter argument would be that I have experienced my own internal monologue, therefore it is "possible". But my experience of my own internal monologue is not an experience that is an exact match to the claim in question ("other people have internal monologue"). Someone could walk up to me and rightly claim that my own experience of my own internal monologue is in no way associated with the experience of someone else having an internal monologue, therefore I don't know if it is possible (according to your terms): and they would be correct. However, I would still hold that other people have the potential to have internal monologue because they have the necessary faculty (very similar to mine) for it to occur. As of now, I am not saying that they definitely have internal monologue, or that they can, just that they have the potential to. To take this a step further, the belief that other people can have internal monologue is an "inapplicable plausibility" (I can't demonstrate any other than my own). However, although I can't claim that another person can have internal monologue, I would not tell someone else who claims to have internal monologue that that is "impossible" (according to your terms), even though I haven't experienced someone else having internal monologue, because I have a more fundamental parent context that I abide by: empathy.
If I were in their shoes, and I actually did have internal monologue (regardless of whether, in my current state, I can actually claim it is "possible"), I would want the other person to give me the benefit of the doubt out of respect. So my empathetic parental context would overrule, so to speak, my factual consideration and, therefrom, I would walk around claiming that it is "possible" for someone else to have internal monologue although technically I can't claim that within your definitions. So basically: I can claim that they have the potential to have internal monologue and, although I can't claim they can have internal monologue, I will claim they can regardless.

    This brings up a more fundamental issue (I think): the colloquial term "possibility" is utterly ambiguous. When someone says "it is possible", they may be claiming that "it can occur" or that "it can potentially occur", which aren't necessarily synonymous. To say something "can occur", as you rightly point out, is only truly known if the individual has experienced it before; however, to say something "can potentially occur" simply points out that the claim doesn't violate any underlying principles and beliefs. I think this is a meaningful distinction. If I claim that it is "possible" (in my terms) for a rock to fall if someone drops it from a mountain top, it depends on whether I have directly experienced it or not whether I am implicitly claiming that it "can occur" (because I've experienced it) or that it "can potentially occur" (because, even though I haven't experienced it before, my experiences, which are not direct nor exact matches of the given claim, align with the idea that it could occur). I think this can get a bit confusing, as "can" and "can potentially" could mean the same thing definition-wise, but I can't think of a better term yet: it's the underlying meaningful distinction here that I want to retain.

    Also, as a side note, I like your response to the object rolling off hills example, however this is getting entirely too long, so I will refrain from elaborating further.

    Look forward to hearing from you,
    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,
    I caught "The Covid," and have been fairly sick. Fortunately I'm vaccinated, so recovery is going steady so far.

    Oh no! I am glad that you are recovering and I hope you have a speedy recovery!

    An applicable plausibility is something which can be applied to reality if we so choose. For example, "If I go outside within five minutes, it will rain on me as soon as I step outside of the door." I do not know if it is raining, nor can I figure it out from within the house. There is nothing preventing me from going outside within the next five minutes. It's an applicable plausibility that I will be rained on, because I can test it.

    An inapplicable plausibility is a plausibility that either cannot be tested, or is designed not to be able to be tested. If for example I state, "There is a unicorn that exists that cannot be sensed by any means," this is inapplicable. There is nothing to apply to reality with this idea, as it is undetectable within reality. Perhaps there is a unicorn that exists that cannot be sensed in reality. But we will never be able to apply it, therefore it is something that cannot be applicably known.

    This is all well and good, but I think you defined "plausibility" (in your previous post) as exactly what you just defined as an "applicable plausibility"--and that was all I was trying to point out. You defined "plausibility" as "the belief that distinctive knowledge that has never been applicably known, can be applicably known". A "plausibility", under your terms (I would say), is not restricted to what "can be applicably known" (that is a subcategory called "applicable plausibilities"); rather, "plausibility" is a much more generic term than that (as far as I understand your terms).

    Just because two built contexts are dissimilar, it doesn't mean they cannot have commonalities. But commonalities do not mean they can necessarily be evaluated against the different inductions within their independent contexts.

    I agree in that two contexts can be dissimilar and still have commonalities, but those commonalities are more fundamental aspects to those contexts and, therefore, although they are dissimilar, they are not separate. Even the most distinct contexts share some sort of dependency (or dependencies). An induction (within a context) that contradicts a parent context is less cogent than an induction (within a different context) that doesn't.

    The human eye and iron floating on water with butter are just too disparate to compare.

    You can compare them relative to their shared dependencies (such as the law of noncontradiction). You could say that, since iron floating on water (even if you haven't experienced it before) cannot occur based off of what you have learned (experienced) about densities in chemistry and your acceptance of the law of noncontradiction, this is not as cogent as the eye example, since the former violates those dependencies. This is a comparison of potentiality, where both are compared to an accepted principle that engulfs both of them (which is part of the context, but is shared). From what I understand from your hierarchical inductions, the idea that (1) A can be A and not A and that (2) A will have the same identity as another A are both not possible unless we experience them, with no distinction between the two (preliminarily). However, I am saying that #1 has no potential to exist while #2 does, because I have accepted the law of noncontradiction and law of identity as underlying principles, which rules out #1 and allows for #2. However, if I did not accept the law of noncontradiction and I did not accept the law of identity, then #1 has the potential to be "true" while #2 does not. It is also relative to the parent contexts: the shared dependencies (more fundamental concepts that the given contexts at hand depend on).

    The law of non-contradiction simply means you have an irrational inductive belief, which is completely divorced from rationality.

    I would say that it means that the subject has accepted the axiom as "true" and, therefore, it will be a dependency for many future ideas (or beliefs) they will have (as they will build off of it). It isn't necessarily "true" in all contexts, we just share that more fundamental principle.

    To add, the comparison is about finding the best induction to take within that context.

    I think that that is one goal, but comparing across contexts is also desired. All knowledge stems from the same tree; therefore one can trace any given context back to a common node. I am just saying that the idea that you strictly cannot compare contexts eliminates potentiality. When I say something can potentially exist, or happen, it means that it does not violate any of my parental contexts (any underlying principles that would be required for the concept to align with my knowledge as it is now). As it stands, your epistemology eliminates this altogether: you either have a possibility or a plausibility (probability encompasses the idea of a possibility) and you can't preliminarily determine whether one plausibility has the potential to occur or not.

    no comparing the probability of improving the eye, the options of plausibility vs irrationality with iron floating on water with butter

    You would be comparing it one step deeper than that: iron floating on water has no potential to occur whereas improving the eye does (I would call this "possibility").

    Can you clarify this? I interpreted this as follows.

    I applicably know A and B.
    I applicably know C, D, and E.
    I applicably know that the numbers two and three are not synonymous.
    Therefore A and B, and C,D, and E are synonymous.

    I don't believe that's what you're trying to state, but I could not see what you were intending.

    You are absolutely right: I was trying to keep it as fundamental as possible, but I see how that was confusing. I was merely pointing out, essentially, that the law of noncontradiction is an underlying principle (which is a part of the context) that can determine one context more cogent than another because they exist within a plane. I was just making up a contradictory example off the top of my head and I apologize--as it wasn't very good at all.

    I still wasn't quite sure what you meant by parent contexts in these examples. I think what you mean is the broader context of "things" versus "round objects". Please correct me here. For my part, it depends on how we cut hairs so to speak. If the first person does not applicably know that things can roll down a hill as well, then neither statement is more cogent than the other. If the first person knows that "things" can also roll down hill, then there's no cogent reason why they would conclude the "thing" would fly off the hill over roll down the hill.

    At its most fundamental level, I was trying to convey that the law of similarity could be another example of a parent context, where one may determine one of two completely even contexts (i.e. both being possibility -> plausibility chains) more cogent than the other based off of an underlying principle that governs both contexts. If I have witnessed a "thing" fly and roll off of a hill, but the "things" that I have seen fly look less similar to the "thing" on the hill now, and the "thing" looks more similar to the "things" that I have seen roll down a hill, then I might determine one context more cogent than the other based off of the fact that I accept the law of similarity as an underlying principle that engulfs both the contexts in question. It is a more fundamental examination than the hierarchical inductions.

    What might help is to first come up with a comparison of cogency for a person within a particular context first. Including two people complicates comparing inductions greatly, but generally follows the same rules as a person comparing several inductive options they are considering within their own context.

    I agree. I think that, within an individual context, the subject will compare their knowledge based off of a tree-like structure (or plane-like structure where principles engulf other principles) and decide their credence levels based off of that (which includes your hierarchical induction chains). I think that multiple subjects do essentially the same thing, but they will accept that their own experiences are more cogent than others' (because their own are more immediate to them as the subject) and, therefore, that is the most vital factor.

    Look forward to your response.
    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,
    The dots have finally clicked for me! I think I understand what you are stating now and, so, most of what I said has been negated (I apologize for the confusion). However, I do still have a couple quarrels, so I will elaborate on those in a concise manner (that way, if I am still not understanding it correctly, you can correct me without having to address too many objections).

    Possibility - the belief that because distinctive knowledge has been applicably known at least once, it can be known again.

    Plausibility- the belief that distinctive knowledge that has never been applicably known, can be applicably known.

    I have no problem with the underlying meaning of "possibility", however I think it still leaves out potentiality, but more on that later. With respect to "plausibility", I think you just defined, in accordance with your essays, an "applicable plausibility", contrary to an "inapplicable plausibility", which is not just a "plausibility". You defined it in the quote that it "can be applicably known", which is what I thought an "applicable plausibility" was. Maybe I am just misremembering.

    First, we cannot compare cogency between different branches of claims. This is because cogency takes context into account as well, and the difference between evaluating the human eye, and a floating iron block, are two fairly separate contexts.

    I think you are sort of right. I think that you are thinking of the hierarchical inductions within a particular context as a linear dependency (i.e. a possibility -> plausibility is more cogent than a possibility -> plausibility -> plausibility); however, I think it is more of a plane, contexts engulfing contexts, style of hierarchies: no context is strictly isolated from any other context, as they are all dependent on a more fundamental context which engulfs them together. Think of the evaluating-the-human-eye context and the floating-iron-block context as separate contexts, indeed, but residing within a shared context (or contexts), which is where they can be cross-examined from. A great example is the context in which the law of noncontradiction is a valid axiom: this contextual plane would engulf, because it is more fundamental, the two aforementioned contexts. Therefore, in the abstract, if contexts A and B reside within the law of noncontradiction context, and A does not abide by the law of noncontradiction while B does, then A is less cogent than B on a more fundamental contextual plane--regardless of the fact that their hierarchical inductions are considered separately. There are always parent contexts that engulf a given context unless you are contemplating the axioms from which all others are derived (then it gets tricky).

    Before I continue to your post, let me briefly try to explain the difference between "possibility" (in your terms) and potentiality. Let's use two examples:

    I applicably know what two "things" are.
    I applicably know what three "things" are.
    I applicably know that the underlying meaning of "two" and "three" are not synonymous.
    Therefore, "two" "things" and "three" "things" are synonymous.

    I applicably know the eye can see X colors.
    I applicably know we can improve the eye's ability to see with greater focus.
    Therefore I believe we can improve the eye to see greater than X colors.

    Although they are to be considered separate from one another, in the sense of the induction chains, because they are two totally different contexts, we can still compare them because they both reside within a parent (or more than one parent) context; the law of noncontradiction, assuming the subject holds that as a fundamental axiom, would be a great example of a parent context that engulfs these two examples and, therefore, the former example is less cogent than the latter, despite their clearly different contexts, due to the parent context's negation of the former example's potentiality. Normally this would be called "possibility", but since you use it differently I think we are safe using potentiality instead. But, most importantly, notice that these two examples are not mutually exclusive, in a holistic sense, as they stem from more fundamental parent contexts.

    I applicably know the eye can see X colors.
    I applicably know we can improve the eye's ability to see with greater focus.
    Therefore I believe we can improve the eye to see greater than X colors.

    I understand your hierarchical induction chains, and they are brilliant (and a great example)! However, consider this:

    1. I see a round object at the top of a hill.
    2. I have never experienced this round object before.
    3. I applicably know that it is windy out.
    4. I have experienced a round log fall down a hill during a windy day.
    5. I have never experienced a round log fly up off of a hill during a windy day.
    6. I have experienced "things" flying off of a hill.
    7. The round object is similar in size to the log, but isn't a log.

    Consider the following conclusions:
    1. The round object is going to fly off of the hill.
    2. The round object is going to roll down the hill.

    Now, I think that you are perfectly right in stating that the cogency of these two conclusions, since they are within the same context, can be evaluated based off of the induction chains. However, in this example, let's try it out:

    For conclusion 1:
    I applicably know that some "things" can fly off of hills.
    I applicably know that this round-object is a "thing".
    Therefore, the round-object will fly off the hill.
    I can apply this belief to reality to see if it holds.
    Therefore, I am holding an "applicable plausibility" based off of two possibilities.

    For conclusion 2:
    I applicably know that some round-like objects, such as a log, can roll down a hill.
    I applicably know that some round-like objects, such as a log, will roll down a hill in windy climates.
    Therefore, the round-like object will roll down the hill.
    I can apply this belief to reality to see if it holds.
    Therefore, I am holding an "applicable plausibility" based off of two possibilities.

    Notice that these are (1) both within the same context and (2) they both are at the same point in the induction chain. However, the latter is more cogent than the former because we have to evaluate the parent context(s) that they share: in this case, a good example is the law of similarity. If the subject holds that a generic "thing" is less cogent to base correlations off of than more particular groups of concepts, aka (I believe) the law of similarity, then they hold a more fundamental, parent context, that engulfs the two aforementioned conclusions and, thereby, one is more cogent than the other. Likewise, if the person held a parent context that directly contradicts the engulfed context, then that engulfed context would have to go (or the parent one would have to be refurbished to allow it to live on) and, more importantly, that context would be based off of the law of noncontradiction, which resides within another context: it is a hierarchy of planes that are engulfed by one another where the most fundamental is the most engulfing.
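    To make the "contexts engulfing contexts" idea concrete, here is a toy sketch in Python (purely my own illustration for this post--the class and function names are invented, not terms from either of our essays). It models contexts as nodes with a more fundamental parent, and finds the shared parent plane from which two claims in different contexts can be cross-examined:

```python
# Illustrative sketch only: every name here is my own invention for this post.

class Context:
    """A context with an optional, more fundamental parent that engulfs it."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent

    def ancestors(self):
        # Walk from this context up toward the most engulfing
        # (most fundamental) context.
        node, chain = self, []
        while node is not None:
            chain.append(node)
            node = node.parent
        return chain

def shared_parent(a, b):
    """Return the first context that engulfs both a and b: the plane
    from which two claims in different contexts can be cross-examined."""
    names_above_a = {c.name for c in a.ancestors()}
    for c in b.ancestors():
        if c.name in names_above_a:
            return c
    return None  # only if the two contexts share no axioms at all

# The law of noncontradiction as the engulfing plane for both examples:
noncontradiction = Context("law of noncontradiction")
eye_context = Context("evaluating the human eye", parent=noncontradiction)
iron_context = Context("floating iron block", parent=noncontradiction)

print(shared_parent(eye_context, iron_context).name)  # prints "law of noncontradiction"
```

    So even though the two induction chains are evaluated separately, they can still be compared on the plane returned by `shared_parent`, which is all I mean by a parent context engulfing its children.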

    Similarly, with respect to your example prior to eye surgery, that requires a more fundamental acceptance, a parent context, that the context of the situation nullifies the ability for me to say they were truly wrong or right, or that I am truly right compared to them (because it depends on the context). If I didn't hold that context, then I wouldn't agree with you in this sense and neither would be truly wrong: I would just be disagreeing with you at a more fundamental context that engulfs the other.

    I believe the above should cover what you meant by "qualitative likelihood".

    I am going to refrain from elaborating on "qualitative likelihoods" to restrict the amount of objections I give in this post (that way it is easier for you, hopefully). But we can most definitely talk about this after.

    According to this, there is no a priori.

    Originally I was going to object, but I think that a priori is perfectly compatible with your view (or at least how I understand it) and can elaborate on this further if you would like.

    With the chain of reasoning comparisons I noted above, we can definitely determine which is most cogent to pursue.

    Only within the particular context and not considering the parent contexts.

    In general, I like your epistemology! I think it is an empiricist leaning view that is more "sure" of the chick and less "sure" of the egg. I just think we can improve it (:

    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,
    I agree: I think that we are using terms drastically differently. Furthermore, I don't, as of now, agree with your use of the terminology, for multiple different reasons (which I will hereafter attempt to explain).

    Firstly, the use of "possibility" and "plausibility" in the sense that you have defined it seems, to me, to not account for certain meaningful distinctions. For example, let's consider two scenarios: person one claims that a new color could be sensed by humans if their eyes are augmented, while person two claims that iron can float on water if you rub butter all over the iron block. I would ask you, within your use of the terms, which is more cogent? Under your terms, I think that these would both (assuming they both haven't been applied to reality yet) be a "plausibility" and not a "possibility", and, more importantly, there is no hierarchy between the two: they only gain credibility if they aren't inapplicable implausibilities and, thereafter, are applied to reality without contradiction. This produces a problem, for me at least, in that I think one is more cogent than the other. Moreover, in my use of the terms, it would be because one is possible while the other can be proven to be impossible while they are both still "applicable plausibilities" (in accordance with your terms). However, I think that your terms do not account for this at all and, thereby, consider them equal. You see, "possibility", according to my terms, allows us to determine which beliefs we should pursue and which ones we should throw away before even attempting them (I think that your use of the terms doesn't allow this: we must apply it directly to reality and see if it fails, but what if it would require 3 years to properly set up? What if we are conducting an experiment that is clearly impossible, but yet considered an "applicable plausibility"? What term would you use for that if not "possibility"?). Moreover, there is knowledge that we have that we cannot physically directly experience, which I am sure you are acquainted with as a priori, that must precede the subject altogether. 
I haven't, and won't ever, experience directly the processes that allow me to experience in the first place, but I can hold it as not only a "possibility" (in my sense of the term) but also a "highly plausible" "truth" of my existence. Regardless of what we call it, the subject must have a preliminary consideration of what is worth pursuing and what isn't. As for the term "possibility", I think that you are more saying that we must apply it to reality without contradiction--which confuses me a bit because that is exactly what I am saying, but I would then ask you what you would call something that has the potential to occur in reality without contradiction? If you are thinking about the idea of iron floating on water, instead of saying "that is not possible", are you saying "I would not be able to apply that to reality without contradiction"? If so, then I think I am just using what I would deem a more concise word for the same thing: possibility. Furthermore, it is a preliminary judgement, not in terms of claiming that something can be applied to reality to see if it holds: I could apply the butter-rubbed-iron-on-water idea and the color one, but before that I could determine one to be an utter waste of time.

    Secondly, your use of the terms doesn't account for any sort of qualitative likelihoods: only quantitative likelihoods (aka, probability). You see, if I say that something isn't "possible" until I have experienced it at least once, then a fighter jet flying at the speed of sound is not possible, only plausible, for me because I haven't experienced it directly nor have I measured it with a second hand tool. However, I think that it is "plausible", in a qualitative likelihood sense, because I've heard from many people I trust that they can travel that fast (among other things that pass my threshold of what can be considered "plausible"). I can also preliminarily consider whether this concept would contradict any of my discrete or applicable knowledge and, given that it doesn't, I would be able to categorize this as completely distinct from a claim such as "iron can float on water". I would say that a jet traveling at the speed of sound is "possible", therefore I should pursue further contemplation, and then I consider it "highly plausible" because it meets my standard of what is "highly plausible" based off of qualitative analysis. In your terms, I would have two "plausibilities" that are not "possible" unless I experience it (this seems like empiricism the more I think about it--although I could be wrong) and there is no meaningful distinction between the two.

    Thirdly, I think that your use of the terms lacks a stronger, qualitative (rationalized), form of knowledge (i.e. what "plausibility" is for me). If a "plausibility" is weaker than a "possibility", and a "possibility" is merely that which one has experienced once, then we are left without any useful terms for when something has been witnessed once but isn't as qualitatively likely as another thing that has been witnessed multiple times. For example, the subject could have experienced a dog attack a human; therefore, it is "possible" and not "plausible" (according to your terms), but when a passerby asks them if their dog will attack them if they pet it, the subject now has to consider, not just that it is "possible" since they have witnessed it before, but the qualitative likelihood that their dog is aggressive enough to be a risk. They have to necessarily create a threshold, which is only useful in this context if the passerby agrees more or less with it, that must be assessed to determine if the dog will attack or not. They must both agree, implicitly, because if the subject's threshold is too drastically different from the passerby's, then the passerby's question will be answered in a way that won't portray anything meaningful. For example, if the subject thinks that its dog will be docile as long as the passerby doesn't pet its ears and decides to answer "no, it won't attack you", then that will not be very useful to the other subject, the passerby, unless they also implicitly understand that they shouldn't pet the ears. Most importantly, the subject is not making any quantitative analysis (as we have discussed earlier) but, rather, what I would call qualitative analysis that I would constitute in terms of "plausibility". However, if you have another term for this I would be open to considering it, as I think that your underlying meaning is generally correct.

    Fourthly, I think that your redefinitions would be incredibly hard to get the public to accept in any colloquial sense (or honestly any practical sense) because it 180s their perception of it all and, as I previously mentioned, doesn't provide enough semantical options for them to accurately portray meaning. I am not trying to pressure you into having to abide by common folk: I just think that, if the goal is to refurbish epistemology, then you will have to either (1) keep using the terms as they are now or (2) accompany their redefinitions with other terms that give people the ability to still accurately portray their opinions.

    We cannot state that it is possible that there are other shades of color that humanity could see if we improved the human eye, because no one has yet improved the human eye to see currently unseeable colors

    I would say that this reveals what I think lacks in your terminology: we can't determine what is more cogent to pursue. In my terminology, I would be able to pursue trying to augment the eye to see more shades of colors because it is "possible". I am not saying that I "know" that they exist, only that I "know" that they don't contradict any distinctive or applicable knowledge I have (what I would call immediate forms of knowledge: perception, thought, emotion, rudimentary reason, and memories). I'm not sure what term you would use here in the absence of "possibility", but I am curious as to know what!

    But what does "likely" mean in terms of the knowledge theory we have? It's not a probability, or a possibility, because the distinctive knowledge of "I think there are other colors the human eye could see if we could make it better." has never been applicably known.

    Again, I think this is another great example of the problem with your terms; if it isn't possible or probable then it is just a plausibility like all the other plausibilities. But I can consider the qualitative likelihood that it is true and whether it contradicts all my current knowledge, which will also determine whether I pursue it or not. I haven't seen a meteor, nor a meteor colliding into the moon, but I have assessed that it is (1) possible (in my use of the term) and (2) plausible (in my use of the term) because I have assessed whether it passes my threshold. For example, I would have to assess whether the people that taught me about meteors would trick me or not (and whether they are credible and have authority over the matter--both of which require subjective thresholds). Are they liars? Does what they are saying align with what I already know? Are they trying to convince me of iron floating on water? These are considerations that I think get lost in the infinite sea of "plausibilities" (in your terms). The only thing I can think of is that maybe you are defining what I would call "possible" as an "applicable plausibility" and that which is "impossible" as an "inapplicable plausibility". But then I would ask what determines what is "applicable"? Is it that I need to test it directly? Or is it the examination that it could potentially occur? I think that to say it "could potentially occur" doesn't mean that I "know" that it exists, just that, within my knowledge, it has the potential to. I think your terms remove potentiality altogether.

    I feel that "Plausibility" one of the greatest missing links in epistemology. Once I understood it, it explained many of the problems in philosophy, religion, and fallacious thinking in general. I understand your initial difficulty in separating plausibilities and possibilities. Plausibilities are compelling! They make sense in our own head. They are the things that propel us forward to think on new experiences in life. Because we have not had this distinction in language before, we have tied plausibilities and possibilities into the same word of "possibility" in the old context of language. That has created a massive headache in epistemology.

    I understand what you mean to a certain degree, but I think that it isn't fallacious to say that something could potentially occur: I think it becomes fallacious if the subject thereafter concludes that because it could occur it does occur. If I "know" something could occur, that doesn't mean that I "know" that it does occur and, moreover, I find this to be the root of what I think you are referring to in this quote.

    But when we separate the two, so many things make sense. If you start looking for it, you'll see many arguments of "possibility" in the old context of "knowledge", are actually talking about plausibilities. When you see that, the fault in the argument becomes obvious.

    I agree in the sense that one should recognize that just because something is "possible" (in my use of the term) that doesn't mean that it actually exists, it just means that it could occur (which can be a useful and meaningful distinction between things that cannot). I also understand that, within your use of the terms, that you are 100% correct here: but I think that the redefining of the terms leads to other problems (which I have and will continue to be addressing in this post).

    Plausibilities cannot have immediateness, because they are only the imaginations of what could be within our mind, and have not been applied to reality without contradiction yet.

    I think that they are applied to reality without contradiction in an indirect sense: it's not that they directly do not contradict reality, it's that they don't contradict any knowledge that I currently have (distinctive and applicable). This doesn't mean that it does happen, or is real, but, rather, that it can happen: it is just a meaningful distinction that I think your terms lack (or I am just not understanding it correctly). And, to clarify, I think that "applicable plausibilities" aren't semantically enough for "things that could occur" because then I am no longer distinguishing "applicable" and "inapplicable" plausibilities based off of whether I can apply it to reality or not. In my head, it is different to claim that I could apply something to reality to see if it works and to say that, even if I can't apply it, it has the potential to work. If I state that the teapot example is an "inapplicable plausibility", then I think that the butter on iron example, even if it took me three years to properly set up the experiment, would be an "applicable plausibility" along with the shades of colors example. But I think that there is a clear distinction between the shades of colors example and the butter on iron example--even if I can apply them, given enough time, to reality to see if they are true: I have enough applicable knowledge of the concepts that must be presupposed for iron to float on water and, therefore, the idea doesn't hold even if I can't physically go test it.

    As one last attempt to clarify, when you state it doesn't contradict any immediate forms of knowledge, do you mean distinctive knowledge, or applicable knowledge? I agree that it does not contradict our distinctive knowledge. I can imagine a horse flying in the air with a horn on its head. It has not been applied to reality however. If I believe it may exist somewhere in reality, reality has "contradicted" this distinctive knowledge, by the fact that it has not revealed it exists. If I believe something exists in reality, but I have not found it yet, my current application to reality shows it does not exist.

    I mean both (I believe): my experiences and memories, which are the sum of my existence. However, I am not saying that it exists, only that it could exist. This is a meaningful distinction from things that could not exist. I would agree with you in the sense that I don't think unicorns exist, but not because they can't exist, but because I don't have any applicable knowledge (I believe that is what you would call it) of it. So I would agree with you that I can't claim to "know" a unicorn exists just because it could exist: but I can claim that an idea of a unicorn that is "possible" is more cogent than one that is "impossible", regardless of whether I can directly test anything or not.

    But they are not confirmations of what is real, only the hopes and dreams of what we want to be real.

    I sort of agree. There is a distinction to be made between what is merely a hope or dream, and that which could actually happen. I may wish that a supernatural, magical unicorn exists, but that is distinctly different from the claim that a natural unicorn could exist. One is more cogent than the other, and, thereby, one is hierarchically higher than the other.

    I look forward to hearing your response,
    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,

    Absolutely splendid post! I thoroughly enjoyed reading it!

    Upon further reflection, I think that we are using the terms "possibility" and "plausibility" differently. I am understanding you to be defining a "possibility" as something that has been experienced at least once before and a "plausibility" as something that has not been applicably tested yet. However, I was thinking of "possibility" more in terms of its typical definition: "Capable of happening, existing, or being true without contradicting proven facts, laws, or circumstances". Furthermore, I was thinking of "plausibility" more in the sense that it is something that is not only possible, but has convincing evidence that it is the case (but hasn't been applicably tested yet). I think that you are implicitly redefining terms, and that is totally fine if that was the intention. However, I think that to say something is "possible" is to admit that it doesn't directly contradict reality in any way (i.e. our immediate forms of knowledge) and has nothing directly to do with whether I have ever experienced it before. For example, given our knowledge of colors and the human eye, I can state that it is possible that there are other shades of colors that we can't see (but with better eyes we could) without ever experiencing any new shades of colors. It is possible because it doesn't contradict reality, whereas iron floating on water isn't impossible because I haven't witnessed it but, rather, because my understanding of densities (which is derived from experiences, of course) disallows such a thing to occur. Moreover, to state that something is "plausible", in my mind, implies necessarily that it is also "possible"--for if it isn't possible then that would mean it contradicts reality and, therefore, it cannot have reasonable evidence for being "plausible". Now, don't get me wrong, I think that your responses were more than adequate to convey your point; I am merely portraying our differences in definitions (semantics). 
I think that your hierarchy, which determines things that are derived more closely from "possibilities" to be more cogent, is correct in the sense that you redefine a "possibility" as something experienced before (or, more accurately, applicably known). However, I think that you are really depicting that which is more immediate to be more cogent and not that which is possible (because I would define "possibility" differently than you). Likewise, when you define a "plausibility" to be completely separate from "possibility", I wouldn't do that, but I think that the underlying meaning you are conveying is correct.

    an old possibility is still more cogent than a newer plausibility.

    I would say: that which is derived from a more immediate source (closer to the processes of perception, thought, and emotion--aka experience) is more cogent than something that is derived from a less immediate source.

    Plausibility does not use immediateness for evaluation, because immediateness is based on the time from which the applicable knowledge was first gained.

    Although, with your definitions in mind, I would agree, I think that plausibility utilizes immediateness just as everything else: you cannot escape it--it is merely a matter of degree (closeness or remoteness).

    So taking your example of a person who has lived with different memories (A fantastic example) we can detail it to understand why immediateness is important. It is not that the memories are old. It is that that which was once possible, is now no longer possible when you apply your distinctive knowledge to your current situation.

    I agree! But because possibility is derived from whether it contradicts reality--not whether I have experienced it directly before. Although I may be misunderstanding you, if we define possibility as that which has been applicably known before, then, in this case, it is still possible although one cannot apply it without contradiction anymore (because one would have past experiences of it happening: thus it is possible). However, if we define possibility in the sense that something doesn't contradict reality, then it can be possible with respect to the memories (in that "reality") and not possible with respect to the current experiences (this "reality"), because we are simply, within the context, determining whether the belief directly contradicts what we applicably and distinctively know.

    We don't even have to imagine the fantastical to evaluate this. We can look at science. At one time, what was determined as physics is different than what scientists have discovered about physics today. We can look back into the past, and see that many experiments revealed what was possible, while many theories, or plausibilities were floating around intellectual circles, like string theory.

    Although I understand what you are saying and it makes sense within your definitions, I would claim that scientific theories are both possible and plausible. If it wasn't possible, then it isn't plausible, because it must first be possible to even be eligible to be considered plausible. However, I fully agree with you in the sense that we are constantly refining (or completely discarding) older theories for better ones: but this is because our immediate forms of knowledge now reveal to us that those theories contradict reality in some manner and, therefore, are no longer possible (and, thereby, no longer plausible either). Or, we negate the theory by claiming it no longer meets our predefined threshold for what is considered plausible, which in no way negates its possibility directly (although maybe indirectly).

    However, as plausibilities are applied to reality, the rejects are thrown away, and the accepted become possibilities. Sometimes these possibilities require us to work back up the chain of our previous possibilities, and evaluate them with our new context. Sometimes, this revokes what was previously possible, or it could be said forces us to switch context. That which was once known within a previous context of time and space, can no longer be known within this context.

    I think you are sort of alluding to what I was trying to depict here, but within the idea that an applied plausibility can morph into a possibility. However, I don't think that only things I have directly experienced are possible, or that what I haven't directly experienced is impossible, it is about how well it aligns with what I have directly experienced (immediate forms of knowledge). Now, I may be just conflating terms here, but I think that to state that something is plausible necessitates that it is possible.

    Is it possible that the tree is not there anymore, or is it plausible?

    Both. If I just walked by the tree 10 minutes ago, and I claim that it is highly plausible that it is still there, then I am thereby also admitting that it is possible that it is there. If it is not possible that it is there, then I would be claiming that the tree being there contradicts reality but yet somehow is still plausible. For example, if I claimed that it is plausible that the tree poofed into existence out of thin air right now (and I never saw it; I'm just hypothesizing it from my room, which has no access to the area of land it allegedly poofed onto), then you would be 100% correct in rejecting that claim because it is not possible, but it is not possible because it contradicts every aspect of the immediate knowledge I have. However, if I claimed that it is highly plausible that a seed, in the middle of spring, in an area constantly populated with birds and squirrels, has been planted (carried by an animal, not purposely planted by humans) in the ground and will someday sprout a little tree, I am claiming that it is possible that this can occur and, not only that, but it is highly "likely" (not in a probabilistic sense, but based off of immediate knowledge) that it will happen. I don't have to actually have previously experienced this process in its entirety: if I have the experiential knowledge that birds can carry seeds in their stomachs (which get pooped out, leaving the seed in fine condition) and that a seed dropped on soil, given certain conditions, can implant and sprout, then I can say it is possible without ever actually experiencing a bird poop a seed out onto a field and it, within the right conditions, sprout. A more radical example is the classic teapot floating around Jupiter (I can't quite remember which planet). 
If the teapot doesn't violate any of my immediate forms of knowledge, then it is possible; however, it may not be plausible as I haven't experienced anything like it and just because the laws allow it doesn't mean it is a reasonable (or plausible) occurrence to take place. Assuming the teapot doesn't directly contradict reality, then I wouldn't negate a belief in it based off of it not being possible but, rather, based on it not being plausible (and, more importantly, not relevant to the subject at all).

    The reality is, this is a plausibility based off of a possibility. Intuitively, this is more reasonable than a plausibility based off of a plausibility. For example, it's plausible that trees have gained immortality, therefore the tree is still there. This intuitively seems less cogent, and I believe the reason why is because of the chain of comparative logic that it's built off of.

    In this specific case, I would claim that trees being immortal is not plausible because it contradicts all my immediate knowledge pertaining to organisms: they necessarily have an expiration to their lives. However, let's say that an immortal tree didn't contradict reality; then I would still say it is implausible, albeit possible, because I don't have any experiences of it directly or indirectly in any meaningful sense. If the immortality of a tree could somehow be correlated to a meaningful, relevant occurrence that I have experienced (such as, even if I haven't seen a cell, my indirect contact with the consequences of the concept of "cells"), then I would hold that it is "true". If it passes the threshold of a certain pre-defined quantity of backing evidence, then I would claim it is, thereafter, considered "plausible".

    But the end claim, that one particular tree is standing, vs not still standing, is a plausibility.

    Upon further examination, I don't think this is always the case. It is true that it can be a plausibility, but it is, first and foremost, a possibility. Firstly, I must determine whether the tree being there or not is a contradiction to reality. If it is, then I don't even begin contemplating whether it is plausible or not. If it isn't, then I start reasoning whether I would consider it plausible. If I deem it not plausible, after contemplation, it is still necessarily possible, just not plausible.

    You can rationally hold that it is plausible that it is still standing, but how do we determine if one plausibility is more rational than another?

    By agreeing upon a bar of evidence and rationale it must pass to be considered such. For example, take a 100-yard dash. We can only characterize a contestant's run time as "really fast", "fast", "slow", or "really slow" if we agree upon thresholds, and I would say the same is true regarding plausibility and implausibility. I might take the fact that I saw the tree there five minutes ago as grounds for saying it is "highly plausible" that it is there, whereas you may require further evidence to state the same.

    I believe it is by looking at the logic chain that the plausibility is linked from.

    Although I have portrayed some differences, thus far, between our concepts of possibility and plausibility, I would agree with you here. However, I think the cogency is derived from the proximity of the concept to our immediate forms of knowledge, whereas yours, in this particular case, is based on whether it is closer to a possibility or not (thereby necessarily, I would say, making something that is possible and something that is plausible mutually exclusive).

    I think the comparative chains of logic describe how (1) it aligns with our immediate knowledge and inductive hierarchies. I believe (2) relevancy to the subject can be seen as making our distinctive knowledge more accurate.

    Again, I agree, but I would say that the "chains of logic" here are fundamentally about proximity to the immediate forms of knowledge (or immediateness, as I generally put it) and not necessarily (although I still think it is a solid idea) about comparing mutually exclusive types, so to speak, such as possibility and plausibility, as I think you are arguing.

    Going to your unicorn example, you may say it's possible for an animal to have a horn, possible for an animal to have wings, therefore it is plausible that a unicorn exists. But someone might come along with a little more detail and state, while it's possible for animals to have horns on their heads, so far, no one has discovered that it's possible for a horse to. Therefore, it's only plausible that a horse would have wings or a horn, therefore it is only plausible that a unicorn exists.

    I would say that someone doesn't have to witness a horned, winged horse to know that it is possible, because it doesn't contradict any immediate forms of knowledge (reality): it abides by the laws of physics (as far as I know). This doesn't mean that it is plausible just because it is possible: I would say it isn't plausible because it doesn't meet my predefined standards for what I can constitute as plausible. However, for someone else, that may be enough to claim it is "plausible", but I would disagree and, more importantly, we would then have to discuss our thresholds before continuing the conversation in any productive manner pertaining to unicorns. Again, maybe I am just conflating the terms, but this is how I currently understand them.

    Logically, what is plausible is not yet possible.

    I don't agree with this, but I am open to hearing why you think this is the case. I consider a possibility to be, generally speaking, "Capable of happening, existing, or being true without contradicting proven facts, laws, or circumstances" and a plausibility, generally speaking, to be "Seemingly or apparently valid, likely, or acceptable; credible". I could potentially see that maybe you are saying that what is "seemingly...valid, likely, or acceptable" is implying it hasn't been applicably known yet, but this doesn't mean that it isn't possible (unless we specifically define possibility in that way, which I will simply disagree with). I would say that it is "seemingly...valid, likely, or acceptable" because it is possible (fundamentally) and because it passes a certain predefined threshold (that other subjects can certainly reject).

    I think this fits with your intuition then. What is plausible is something that has no applicable knowledge. It is more rational to believe something which has had applicable knowledge, the possible, over what has not, the plausible.

    Again, the underlying meaning here I have no problem with: I would just say it is about the proximity and not whether it is possible or not (although it must first be possible, I would say, for something to be plausible--for if I can prove that something contradicts reality, then it surely can't be plausible). I think that we are just using terms differently.

    So then, there is one last thing to cover: morality. You hit the nail on the head. We need reasons why choosing to harm other people for self gain is wrong. I wrote a paper on morality long ago, and got the basic premises down. The problem was, I was getting burned out of philosophy. I couldn't get people to discuss my knowledge theory with me, and I felt like I needed that to be established first. How can we know what morality is if we cannot know knowledge?

    Finally, it honestly scared me. I felt that if someone could take the fundamental tenets of morality I had made, they could twist it into a half truth to manipulate people. If you're interested in hearing my take on morality, I can write it up again. Perhaps my years of experience since then will make me see it differently. Of course, let's finish here first.

    I would love to hear your thoughts on morality and ethics! However, I think we need to resolve the aforementioned disagreements first before we can explore such concepts (and I totally agree that epistemology precedes morality).

    Applicable knowledge cannot claim it is true. Applicable knowledge can only claim that it is reasonable.

    I absolutely love this! However, I would say that it is "true" for the subject within that context (relative truth), but with respect to absolute truths I think you hit the nail on the head!

    I look forward to your response,
    Bob
  • If there is no free will, does it make sense to hold people accountable for their actions?
    Hello @T Clark,

    Although I may have just missed it, I think that most of this discussion has been centered around free will vs determinism and the implications of the world being deterministic with respect to accountability. Hopefully, however, I can provide a different perspective. Although I understand that you are not interested, within the scope of this discussion, in whether the world actually is determined or not, I would like to state that my position is that the world is determined, but we still have free will.

    The way I see it, determinism does not, in itself, necessitate that all forms of free will are incompatible with it; it's thereafter that another discussion can arise between what are typically called hard determinists (also called incompatibilists) and soft determinists (also called compatibilists), where the former argue for the eradication of all forms of free will and the latter argue against some forms of free will while still holding at least one form to be true. I am a compatibilist. I think that, although everything is determined, there are meaningful distinctions that hard determinists cannot account for. The most notable of these is where in the causal chain the individual's action was derived: the distinction between an "inner will" heavily coerced by external factors (basically anything not directly of the "inner will") and an "inner will" acting of its own accord (which in no way depends on its being a libertarian choice, where one could have decided to do otherwise). For example, the scenario in which I "decide" to eat vanilla ice cream and the scenario in which I eat vanilla ice cream because someone is holding a gun to my head are both determined but, nevertheless, are meaningfully distinguishable: in the latter the external factors are heavily coercing the inner will, whereas in the former it is acting of its own accord. Now, we can get into what "inner will" and "external factors" really mean, and the fact that no one ever acts in a way that is not at least somewhat influenced by external factors, but I will save that for a later post if you are interested: for now, I am simply giving you the gist. Personally, this is where my frustration with hard determinism (and, subsequently, the eradication of all forms of free will) came from: it necessarily leaves obviously meaningful distinctions unaccounted for.
On the contrary, more libertarian-minded individuals (not in terms of politics, but in terms of libertarian free will) will incessantly advocate that we can do otherwise, which is, in my opinion, thoroughly refuted. This, in my opinion, is where a vast majority of the "free will vs determinism" arguments and dilemmas lie: in the false assumption that you have to pick a side; it's an either/or fallacy. Libertarian free will can be successfully negated by determinism, and determinism can still, at the same time, allow for other definitions, or forms, of free will, such as one based on the relation of external vs internal wills (or factors) instead of choices (in a "you could have done otherwise" sense of the term), which can, consequently, be utterly determined.

    With that in mind, when you ask if we should hold each other accountable, I would ask for further clarification on what you mean. You see, I wouldn't say, technically speaking, that "punishment" is the right term but, rather, "prevention". However, I would also say that "justice", or "retaliation", is perfectly justifiable under a deterministic worldview. First of all, I think that "prevention" instead of "punishment" rightly shifts the individual's thinking away from the incessant presumption that the individual could have done otherwise, which, in my opinion, is nonsense and, more importantly, produces an unnecessary level of apathy. With that being said, I can most definitely still blame someone for their actions; but, in doing so, I am not implicitly declaring that they should have done otherwise: I'm declaring that I do not approve of what they did (albeit my disapproval is just as much determined as their action), and I am rightly tracing the source of the action to them because they did it primarily in alliance with their "inner will". On the contrary, if I know that the person was, in one way or another, not in their normal state of mind when they committed the act (their normal state of mind being one I have no quarrel with, in the sense that they don't normally do things I would characterize as "bad"), then I would only blame them in the very strict sense that the action can be traced back to their body, but with the meaningful distinction that they do not normally act in this way (albeit both are necessarily determined)(and it is much more complicated than this, as their tendency to be under the influence is also a factor here). Furthermore, if I know that the person did something "bad" but it was heavily coerced by an external factor (or factors), then I would not necessarily blame them at all.
Most importantly, even with the fact that everything is determined, all of the aforementioned is still distinguishable in a meaningful sense that cannot be stripped down to merely "we can't judge anyone because it's all a causal chain".

    To keep this brief, I will conclude with one final note: I find retaliation and justice perfectly justifiable within the idea of determinism. If I witness a lion gruesomely devour my mother, I am surely going to be angry at that lion; however, my anger won't be directed at it the way it would be at, say, a human, if I consider humans to have the ability to do otherwise. This is what I mean by stating that libertarian free will can cause unnecessary amounts of apathy (or decreased amounts of empathy, however you want to think about it): the individual, in my opinion, is holding a completely unjustified belief that elevates humans to a much higher status than all other animals (and objects in general). Don't get me wrong, the spectrum of consciousness definitely gives humans higher capacities which, thereby, can rightly lead humans to be held to a higher standard than other animals. But, most importantly, this higher standard, I would say, only holds insofar as we make a meaningful distinction between the capacities of organisms in our world. This is why my lion analogy isn't completely analogous after all: the lion doesn't have the same capacity, neural and thereby "causal potential" (so to speak), as a human; therefore, we can't, in a complete sense, reduce a human to the level of a lion, because humans have a higher capacity; but that capacity is still determined!

    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,

    First and foremost, Merry Christmas! I hope you have (had) a wonderful holiday! I apologize as I haven't had the time to answer you swiftly: I wanted to make sure I responded in a substantive manner.

    There is a lot of what you said that I wholeheartedly agree with, so I will merely comment on some things to spark further conversation.

    I believe immediateness is a property of "possibility". Another is "repetition". A possibility that has been repeated many times, as well as its immediateness in memory, would intuitively seem more cogent than something that has occurred less often and farther in the past. Can we make that intuitiveness reasonable?

    I think you are right in saying immediateness is a property of possibility and, therefrom, so is repetition. Moreover, I would say that immediateness, in a general sense, is "reasonableness". What we use to reason, at a fundamental level, is our experiences and memories, and we weigh them. I think of it, in part, like Hume's problem of induction: we have an ingrained habit of weighing our current experiences and our memories to determine what we hold as "reality", just like we have an ingrained sense of the future resembling the past. I don't think we can escape it at a fundamental level. For example, imagine all of your memories are of a life that you currently find yourself not in: your job, your family, your intimate lover, your hobbies, etc. within your memories directly contradict your current experiences of life (for instance, all of the pictures you are currently looking at explicitly depict a family that contradicts the family you remember: they don't look similar at all, they have different names, there isn't even the same quantity of loved ones you remember). In the heat of the moment, the more persistently your experiences continue to contradict your memories, the more likely you are to assert the former over the latter. But, on the contrary, if you only experience this alternate life for, let's say, 3 minutes and are then "sucked back into" the other one, which aligns with your memories, then you are very likely to assert that your memories were true and you must have been hallucinating. However, 3 years of experiencing that which contradicts your memories will most certainly revoke any notion that your original memories are useful, lest you live in perpetual insanity. That would be my main point: it is not really about what is "true", but what is "useful" (or relevant). Even if your original memories, in this case, are "true", they definitely aren't relevant within your current context.
This is what I mean by "weighing them", and we don't just innately weigh one over the other; rather, we also compare memories to other memories. Although I am not entirely sure, I think that we innately compare memories to other memories in terms of two things: (1) the quantity of conflicting or aligning memories and (2) current experience. However, upon further reflection, it actually seems that we are merely comparing via #1, as #2 is actually just #1: our "current" experience is in the past. By the time the subject has determined that they have had an experience (i.e. they have reasoned their way into a conclusory thought, amongst a string of preceding thoughts, that they are thereby convinced of), they are contemplating something that is in the past (no matter how short a duration of time since it occurred). Another way of putting it: once I've realized that the color of these characters, which I am currently typing, is black, it is in the past. By the time I can answer the question "Is the experience I am having in the present?", I am contemplating a very near memory. My "current" mode of existence is simply the most recent of past experiences: interpretations are not live in a literal sense, but only in a contextual sense (if "present" experience is going to mean anything, it is going to relate to the most recent past experience, number 1 in the queue). The reason I bring this up is that when we compare our "current" experience to past experiences, we are necessarily comparing a past experience to a past experience, but, most notably, one is more immediate than the other: I surely can say that what is most recent in the queue of past experiences, which necessarily encompasses "life" in general (knowledge: discrete experiences, applicable knowledge, and distinctive knowledge), is more "sure" than any of the past experiences that reside before it in the queue of memories.
However, just because I am more "sure" of it doesn't make it more trustworthy in an objective sense: it becomes more trustworthy the more it aligns with the ever-prepending additions of new experiences. For example, if I have, hypothetically speaking, 200 memories and the oldest 199 contradict the newest 1, then the determining factor is necessarily, as an ingrained function of humanity, how consistently each of those two contradicting subcategories compares to the continual prepending of new experiences (assuming 1 is the "current" experience, 2 is the most recent experience after 1, and so forth). Initially, since the quantity of past experiences is overwhelmingly aligned and only contradicted by the most recent one, I would assert the position that my past 199 experiences are much more "true" (cogent). However, as I continue to experience, if, at a total of 500 experiences, the 300 most recent experiences align with the one experience that contradicted the 199, then the tide has probably turned. If, on the other hand, immediately after that one contradicting experience I start experiencing many things that align with those 199, then I would presume that that one was wrong (furthermore, when I previously stated I would initially claim the 199 to be more "true" than the 1, this doesn't actually happen until I experience something else, where that one contradictory experience is no longer the most recent, and the now-newest experience is what I innately compare to the 1 past contradictory one and the other 199). Now, you can probably see that this is completely contextual as well: there are a lot of factors that go into determining which is more cogent. However, the "current" experience is always a more "sure" fact and, therefore, the more recent, the more "sure".
For example, if I have 200 past experiences and the very next 10 are hallucinations (thereby causing a dilemma between two "realities", so to speak), I will only be able to say the original, older 200 past experiences were the valid ones if I resume experiencing in a way that aligns with those experiences and contradicts those 10 hallucinated experiences. If I started hallucinating right now (and, although we know it in hindsight, let's say I don't), then I will never be able to realize these are really false representations until I start having "normal" experiences again. Even if I have memories of "normal experiences" that contradict my "current" hallucinated ones, I won't truly deem it so (solidify my certainty that it really was hallucinated) until the hallucinated chain of experiences is broken by ones that align with my past "normal experiences". Now, I may have my doubts, especially if I have a ton of vivid memories of "normal experiences" while I am still hallucinating, but the longer it goes on, the more it seems as though my "normal experiences" were the hallucinated ones while the hallucinated ones are the "normal experiences". I'm not saying that, in this situation, I would necessarily be correct in inverting the terms; but it seems as though only what is relevant to the subject is meaningful: even if my memories of "normal experiences" are in actuality normal experiences, if I never experience such "normal experiences" again, then, within context, I would be obliged to refurbish my diction to something more relevant to my newly acquired hallucinated situation. Just food for thought.
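Purely as my own toy illustration (not anything from your essays), the 199-vs-1 bookkeeping above can be sketched as a bare tally, where each remembered experience carries a hypothetical label ("A" or "B") for the "reality" it aligns with, and new experiences prepend to the queue:

```python
from collections import deque

def weigh(memories):
    """Return whichever reality-label the most remembered experiences align with.

    `memories` is a queue of labels, newest first. This deliberately flattens
    away vividness, recency weighting, and context: it is only the raw count.
    """
    tally = {}
    for label in memories:
        tally[label] = tally.get(label, 0) + 1
    return max(tally, key=tally.get)

# 199 older experiences align with reality "A"; the single newest aligns with "B".
memories = deque(["B"] + ["A"] * 199)
print(weigh(memories))  # the lone contradictory experience loses: "A"

# 300 further experiences then align with "B": the tide turns.
for _ in range(300):
    memories.appendleft("B")
print(weigh(memories))  # now "B" (301 vs 199)
```

This is of course a caricature of the point being made, since it treats every memory as equally "sure"; a closer sketch would weight entries by their position in the queue.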

    I'll clarify plausibility. A plausibility has no consideration of likelihood, or probability. Plausibility is simply distinctive knowledge that has not been applicably tested yet. We can create plausibilities that can be applicably tested, and plausibilities that are currently impossible to applicably test. For example, I can state, "I think it's plausible that a magical horse with a horn on its head exists somewhere in the world." I can then explore the world, and discover that no, magical horses with horns on their heads do not exist.

    I could add things like, "Maybe we can't find them because they use their magic to become completely undetectable." Now this has become an inapplicable plausibility. We cannot apply it to reality, because we have set it up to be so. Fortunately, a person can ignore such plausibilities as cogent by saying, "Since we cannot applicably know such a creature, I believe it is not possible that they exist." That person has a higher tier of induction, and the plausibility can be dismissed as being less cogent.

    Although I was incorrect in saying plausibility is likelihood, I still have a bit of a quarrel with this part: I don't think that all inapplicable plausibilities are as invalid as you say. Take that tree example from a couple of posts ago: we may never be able to applicably test whether the tree is there, but I can rationally hold that it is highly plausible that it is. The validity of a plausibility claim is not about whether it is directly applicable to reality or not; it is about (1) how well it aligns with our immediate knowledge (our discrete experiences, memories, distinctive knowledge, and applicable knowledge) and (2) its relevancy to the subject. For this reason, I don't think the claim that unicorns exist can be effectively negated by claiming that it is not possible that they exist. A winged, horse-like creature with a horn in the middle of its skull is possible, in that it doesn't defy any underlying physics or fundamental principles, and, therefore, it is entirely possible that there is a unicorn out there somewhere in the universe (unless, as in your second example, it has a magical power that causes it to be undetectable; however, the person could claim that it is natural cloaking instead of supernatural, in a magical sense: just as we can't see the tiny bacteria in the air, maybe the unicorn is super small). For me, in the case of a unicorn, it isn't non-possibility that makes me not believe they exist; it is (1) its utter irrelevancy to the subject and (2) the complete lack of positive evidence for it. I am a firm believer in defaulting to not believing something until it is proven to be true, and so, naturally, I don't believe unicorns exist until we have evidence for them (I don't think possibility is strong evidence for virtually anything: it is more of just a starting point).
This goes hand-in-hand with my point pertaining to plausibility: the lack of positive evidence for a unicorn's existence ties directly to our immediate forms of knowledge. If nobody has any immediate forms of knowledge pertaining to unicorns (discrete experiences, applicable knowledge, and distinctive knowledge), then, for me, it doesn't exist: not because it actually doesn't exist (as would be the case if it were not a possibility), but because it has no relevancy to me or anyone else (anything we could base off of unicorns would be completely abstract: a combination of experiences and absences/negations of what has been experienced, in ways that produce something the subject hasn't actually ever experienced before). Now, I think this gets a bit tricky, because someone could claim that their belief in a unicorn existing makes them happier and, thereby, it is relevant to them. I think this becomes a contextual difference because, although I would tell them "you do you", I would most certainly claim that they don't "know" unicorns exist (and, in this case, they may agree with me on that). You see, this gets at what it means to be able to "applicably know" something: everything a subject utilizes is applicable in one way or another. If the person tells me "I don't know if unicorns exist, but I believe that they do because it makes me happy", they are applying their belief to the world without contradiction with respect to their happiness: who am I to tell them to stop being happy? However, I would say that they don't "know" it (and they agreed with me on that in this case), so applying a belief to reality is not necessarily a form of knowledge (to me, at least). But in a weird way, it actually is, because it depends on what they are claiming to know.
In my previous example, they aren't claiming to "know" unicorns exist, but they are implicitly claiming to "know" that believing in them makes them happier, and I think that is a perfectly valid application of belief that doesn't contradict reality (it just doesn't pertain to whether the unicorn actually exists or not). Now, if I were to notice some toxic habits brewing from their belief in unicorns, then I could say that they are holding a contradiction, because the whole point was to be happier and toxic habits don't make you happier (so basically I would have to prove that they are not able to apply their "belief in unicorns = happier" without contradiction). Just food for thought (:

    In the case that someone pulled an ace every time someone shuffled the cards, there is the implicit addition of these limits. For example, "The person shuffling doesn't know the order of the cards." "The person shuffling doesn't try to rig the cards a particular way." "There is no applicable knowledge that would imply an ace would be more likely to be picked than any other card."

    In the instance in which we have a situation where probability has these underlying reasons, but extremely unlikely occurrences happen, like an ace being drawn every time someone picks from a shuffled deck, we have applicable knowledge challenging our probable induction. Applicable knowledge always trumps inductions, so at that point we need to re-examine the underlying reasons for our probability, and determine whether they still hold.

    I completely agree with you here: excellent assessment! My main point was just that it is based off of one’s experiences and memories: that is it. If we radically change the perspective on an idea that we hold as malleable (such as an “opinion”), such that it is as concrete as ever in our experiences and memories, then we are completely justified in equating it with what we currently deem to be concrete (such as gravity).

    I believe I've mentioned that we cannot force a person to use a different context. Essentially contexts are used for what we want out of our reality. Of course, this can apply to inductions as well. Despite a person's choice, it does not negate that certain inductions are more rational. I would argue the same applies to contexts.

    I would, personally, rephrase “Despite a person’s choice, it does not negate that certain inductions are more rational” to “Despite a person’s choice, it does not negate that certain inductions are more rational within a fundamentally shared subjective experience”. I would be hesitant to state that one induction is actually absolutely better than another due to the fact that they only seem that way because we share enough common ground with respect to our most fundamental subjective experiences. One day, there could be a being that experiences with no commonalities with me or you, a being that navigates in a whole different relativity (different scopes/contexts) than us and I wouldn’t have the authority to say they were wrong—only that they are wrong within my context as their context (given it shares nothing with me) is irrelevant to my subjective experience.

    This would be difficult to measure, but I believe one can determine if a context is "better" than another based on an evaluation of a few factors.

    I love your three evaluative principles for determining which context is "better"! However, with that being said, I think that your determinations are relative to subjects that share fundamental contexts. For example, your #3 (degree of harm) principle doesn't really address two ideas: (1) the subject may not share your belief that one ought to strive to minimize the degree of harm, and (2) the subject may not care about the degree of harm their actions cause other subjects (i.e. psychopaths). To put it bluntly, I think that humans become cognitively convinced of something (via rudimentary reason) and it gets implemented if enough people (with the power in society, or the ability to seize enough power) are also convinced of it (and I am using "power" in an incredibly generic sense, with a Foucault kind of complexity, not just brute force or guns or something). That's why society is a wave, historically speaking, and 100 generations later we condemn our predecessors for not being like us (i.e. for the horrific things they did): but why would they be like us? We do not share as much in common contextually with them as we do with the vast majority of people living in the present with us (or people who lived closer to our generation). I think that a lot of the things they did 200 years ago (or really pick any time frame) were horrendous: but were they objectively wrong? I think Nietzsche put it best: "there are no moral phenomena, just moral interpretations of phenomena". I am cognitively convinced that slavery is abhorrent; does that make it objectively wrong? The moral wrongness being derived from cognition (and not any objective attribute of the universe) doesn't make slavery any more "right", does it? I think not.
The reason we don't have slavery anymore (or at least at such a large scale as in previous times) is that enough people who held sufficient power (or could seize it) were also convinced that it is abhorrent and used that power to make a change (albeit ever so slowly). My point is that, even though I agree with you on your three points, you won't necessarily be able to convince a true psychopath to care about his/her neighbors, and their actions are only "wrong" relative to the subject uttering it. We have enough people who want to prevent psychopaths from doing "horrible" things (a vast majority of people can feel empathy, which is a vital factor) and, therefore, psychopaths get locked up. I am just trying to convey that everything is within a context (and I think you agree with me on this, but we haven't gone this deep yet, so I am curious as to what you think). It is kind of like the blue glasses thought experiment: if we were all born with blue glasses ingrained into our eyeballs, then the "color spectrum" would consist of different shades of blue and that would be "right" relative to our subjective experience. However, if, one day, someone was born with the eyes we currently have, absent the blue glasses, then their color spectrum would be "right" for them, while our blue-shaded color spectrum would be "right" for us.
Sadly, this is where “survival of the fittest” (sort of) comes into play: if there is a conflict where one subjective experience of the color spectrum needs to be deemed the “right” one, then the one held by the majority that holds the “power” will ultimately have its conclusion determined to be the “truth”: that is why we call people who see green and red flip-flopped “color blind”, when, in reality, we have no clue who is actually absolutely “right” (and I would say we can’t know, and that is why each is technically “blind” with respect to the other; we only strictly call one group “color blind” because the vast majority ended up determining their “truth” to be the truth). When we say “you can’t see red and green correctly”, this is really just subjectively contingent on how we see color.

    I think that my main point here is that absolutely determining which context is better is just as fallacious, in my mind, as telling ourselves that we must determine whether our “hand” exists as we perceive it or whether it is just mere atoms (or protons, or quarks), and that we must choose one: they don’t contradict each other, nor do fundamental contexts. Yes, we could try to rationalize who has a better context (and I think your three points on this are splendid!), but that also requires some common ground that must be agreed upon, and that means that, in some unfortunate cases, it really becomes a question of “does this affect me enough that I need to take action to prevent their context?” (and “do I have enough power to do anything about it?” or “can I assemble enough power, with the help of other subjects who agree with me, to object to this particular person’s context?”).

    I look forward to your response!
    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,
    I apologize, as things have been a bit busy for me, but, nevertheless, here's my response!

    My apologies if this is a little terse for me tonight. I will have more time later to dive into these if we need more detail; I just wanted to give you an answer without any more delay.

    Absolutely no problem! I really appreciated your responses, so take your time! I think your most recent response has clarified quite a bit for me!

    I understand exactly what you are saying in this paragraph. I've deductively concluded that these inductions exist. Just as it is deductively concluded that there are 4 jacks in 52 playing cards.

    There are likely degrees of probability we could break down. Intuitively, pulling a jack out of a deck of cards prescribes very real limits. However, if I note, "Jack has left their house for the last four days at 9am. I predict that today, Friday, they will probably do the same," I think there's an intuition that it's less probable, and more just possible.

    Perhaps the key is the fact that we don't know what the denominator limit really is. The chance of a jack would be 4/52, while the chance of Jack leaving his house at 9 am is 4 out of...5? Does that even work? I have avoided these probabilities until now, as they are definitely murky for me.

    Although I am glad that we agree about the deduction of probability inductions, I think that we are using "probability" in two different ways and, therefore, I think it is best if I define it, along with "plausibility", for clarification purposes (and you can determine if you agree with me or not). "Plausibility" is a spectrum of likelihoods, in a generic sense, where something is "plausible" if it meets certain criteria (which do not need to be derived solely from mathematics) and is "implausible" if it meets certain other criteria. In other words, something is "plausible" if it has enough evidence to be considered such, and something is "implausible" if it has enough evidence to be considered such. Now, since "plausibility" exists within a spectrum, it is up to the subject (and other subjects: societal contexts) to agree upon where to draw the line (the exact point past which anything is considered "plausible", or below which anything is considered "implausible"). Most importantly, I would like to emphasize that "plausibility", although one of its forms of evidence can be mathematics, does not only encompass math. On the contrary, "probability" is a mathematically concrete likelihood: existing completely separate from any sort of spectrum. The only thing that subjects need to agree upon, in terms of "probability", is mathematics, whereas "plausibility" requires a much more generic subscription to a set (or range) of qualifying criteria on which the spectrum is built. For example, when I say "X is plausible", this only makes sense within context, where I must define (1) the set (range) of valid forms of evidence and (2) what quantity of them is required for X to be considered qualified under the term "plausible". However, if I say "X is probable", then I must (1) determine the denominator, (2) determine the numerator (possibilities), and (3) finally calculate the concrete likelihood.
When something is "plausible", it simply met the criteria the subject pre-defined, whereas saying there is a 1% chance of picking a particular card out of 100 cards is a concrete likelihood (I am not pre-defining any of it). Likewise, if I say that X is "plausible" because it is "probable", then I am stating that (1) mathematically concrete likelihoods are a valid form of evidence for "plausibilities" and (2) the mathematical concrete likelihood of X is enough for me to consider it "plausible" (that the "probability" was enough to shift the proposition of X past my pre-defined line of where things become "plausible").
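To make the distinction concrete, here is a minimal sketch of my own (the function names, evidence forms, and weights are all invented for illustration, not anything from the essays): "probability" needs only a numerator and a denominator, while "plausibility" needs the subject's pre-defined criteria up front.

```python
def probability(favorable: int, total: int) -> float:
    """A concrete mathematical likelihood: numerator over denominator."""
    return favorable / total

def plausible(evidence: dict[str, float], valid_forms: set[str],
              threshold: float) -> bool:
    """True if the evidence of pre-agreed valid forms clears the
    subject's pre-defined line on the spectrum."""
    weight = sum(v for form, v in evidence.items() if form in valid_forms)
    return weight >= threshold

# Probability needs only the math:
print(probability(4, 52))   # ~0.0769

# Plausibility needs the subject's criteria: which forms of evidence
# count, and where the line is drawn (hypothetical weights):
evidence = {"testimony": 0.4, "memory": 0.3, "math": 0.2}
print(plausible(evidence, valid_forms={"testimony", "memory"}, threshold=0.5))
```

Note that changing `valid_forms` or `threshold` changes the verdict without any new evidence, which is exactly the subject-relative character of plausibility as defined above; `probability` has no such knobs.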

    You see, when you say "while the chance of Jack leaving his house at 9 am is 4 out of...5?", I think you are conflating "probability" with "plausibility"--unless you can somehow mathematically determine the concrete likelihood of Jack leaving (I don't think you can, or at least not in a vast majority of cases). I think that we colloquially use "probable" and "plausible" interchangeably, but I would say they are different concepts in a formative sense. Now it is entirely possible, hypothetically speaking, that two subjects could determine that the only valid form of evidence is mathematically concrete likelihoods (or mathematically derived truths in a generic sense) and that, thereby, this is the only criterion by which something becomes worthy of the term "plausible" (and, thereby, anything not derived from math is "implausible"), but I would say that those two people won't "know" much about anything in their lives.

    Ah, I'm certain I cut this out of part four to whittle it down. A hierarchy of inductions only works when applying a particular set of distinctive knowledge to an applicable outcome. We compare the hierarchy within the deck of cards. We know the probability of pulling a jack, we know it's possible we could pull a jack, but the probability that we won't pull a jack is more cogent.

    The intactness of the tree would be evaluated separately, as the cards have nothing to do with the tree's outcome. So for example, if the tree was of a healthy age, and in a place unlikely to be harmed or cut down, it is cogent to say that it will probably be there the next day. Is it plausible that someone chopped it down last night for a bet or because they hated it? Sure. But I don't know if that's actually possible, so I would be more cogent in predicting the tree will still be there tomorrow with the applicable knowledge that I have.

    Ah, I see! That makes a lot more sense! I would agree in a sense, but also not in a sense: this seems to imply that we can't compare two separate claims of "knowledge" and determine which is more "sure"; however, I think that we definitely can in terms of immediateness. I think that you are right in that a "probability" claim, like all other mathematical inductions, is more cogent than simply stating "it is possible", but why is this? I think it is due to the unwavering inflexibility of numbers. All my life, from my immediate forms of knowledge (my discrete experiences and memories), I have never come in contact with such a thing as a "flaky number", because plurality is ingrained fundamentally into the processes that make up my immediate forms of knowledge (i.e. my discrete experiences have an ingrained sense of plurality and, thereby, I do too). Therefore, any induction I make pertaining to math, since it is closer to my immediate forms of knowledge (in the sense that it is literally ingrained into them), assuming it is mathematically sound, is going to trump something less close to my immediate forms of knowledge, such as the possibility of something: "possibility" is just a way of saying "I have discretely experienced it before without strong correlation, therefore it could happen again", whereas a mathematical induction such as "multiplication will always work for two numbers regardless of their size" is really just a way of saying "I have discretely experienced this with strong correlation (so strong, in fact, that I haven't witnessed any contradicting evidence), therefore it will always happen again". When I say "immediateness", I am not merely talking about physical locations but, rather, about what is more forthright in your experiences: the experience itself is the thing by which we derive all other things and, naturally, that which corresponds to it will be maintained over things that do not.

    For example, the reason, I think, that human opinions are wavering is that I have experiences of peoples' opinions changing, whereas if I had always experienced (and everyone else had always experienced) peoples' opinions as unchanging (like gravity or 2+2 = 4), then I would logically categorize them alongside gravity as concrete, strongly correlated experiences held very close to me. Another example is germ theory: we say we "know" germs make us sick, and that is fine, but it is the correlation between the theory and our immediate forms of knowledge (discrete experiences and memories) that makes us "know" germ theory to be true. We could be completely wrong about germs, but it is undeniable that something makes us sick and that everything adds up so far to it being germs (it is strongly correlated) (why? because that is a part of our immediate knowledge--discrete experiences and memories).

    With that in mind, let's take another look at the tree and cards example: which is more "sure"? I think that your epistemology is claiming that they must be evaluated separately, within their own contexts, to determine the most cogent form of induction to use within that particular context (separately), but, upon determining which is more cogent within that context, we cannot go any farther. On the contrary, I think that I am more "sure" of the deduction of the 2/3 probability because it is tied to my immediate forms of knowledge (discrete experiences and memories). But I am more "sure" of the tree still being there (within that context) than that I am going to actually draw the ace, because I have more immediate knowledge (I saw it 2 hours ago, etc.) of the tree that adds up to it still being there than of me actually getting an ace. Another way to think about it is: if, my entire life (and everyone else testified to it in their lives as well), when presented with three cards (two of which are aces), I always randomly drew an ace--as in every time with no exceptions--then I would say that the "sureness" reverses and my math must have been wrong somehow (maybe probability doesn't work after all? (: ). This is directly due to the fact that it would be no different than my immediate knowledge of gravity or mathematical truths (as in 2+2 = 4, or the extrapolation of such). Now, when I use this example, I am laying out a very radical example, and I understand that me picking an ace 10 times in a row does not constitute probability being broken; however, if everyone attested to always experiencing themselves picking an ace every time, and that is how I grew up, then I see no difference between this and the reality of gravity.
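For what it's worth, the arithmetic behind this radical scenario can be sketched (my own hypothetical illustration; `streak_probability` is an invented name): under the 2/3 model, an unbroken streak of aces never becomes impossible, only vanishingly improbable, which is why a lifetime of exceptionless streaks would pressure the model itself rather than any single draw.

```python
def streak_probability(p: float, n: int) -> float:
    """Chance of n consecutive successes when each independent trial
    succeeds with probability p (e.g. drawing an ace from 2 aces + 1 other)."""
    return p ** n

# Ten aces in a row is rare but unremarkable:
print(streak_probability(2 / 3, 10))    # about 0.017, i.e. roughly 1-in-58

# A lifetime of exceptionless draws is another matter entirely:
print(streak_probability(2 / 3, 1000))  # astronomically small, yet never zero
```

The design point mirrors the paragraph above: no finite streak forces the conclusion "probability is broken", but the probability the model assigns to a universal, exceptionless streak shrinks toward the kind of impossibility we associate with gravity failing.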

    I was going to say much more, and elaborate much more, but this is becoming entirely too long. So I will leave you with my conclusion: the cogency (or "sureness", as I put it) of knowledge is not, at its most fundamental level, about which kind of induction the given claim stems from, but more about the degree of immediateness to the "I". — Bob Ross


    With the clarification I've made, do you think this still holds?

    Sort of. I think that, although I would still hold the claim that it is based off of immediateness, I do see your point in terms of cogency within a particular scenario, evaluated separately from the others, and I think, in that sense, you are correct. However, I don't think we should have to limit our examinations to their specific contexts: I think it is a hierarchy of hierarchies. You are right about the first hierarchy: you can determine the cogency based off of possibility vs probability vs plausibility vs irrationality. However, we don't need to stop there: we can, thereafter, create a hierarchy of which contextual claims we are more "sure" of and which ones we are less "sure" of (it is like a hierarchy within a spectrum).

    In these cases, we don't have the denominator like in the "draw a jack" example. In fact, we just might not have enough applicable knowledge to make a decision based on probability. The more detailed our applicable knowledge in the situation, the more likely we are to craft a probability that seems more cogent. If we don't know the destructive level of the storm, perhaps we can't really make a reasonable induction. Knowing that we can't make a very good induction is also valuable at times too.

    I think that in most cases we cannot create an actual probability of the situation: I think most cases of what people constitute as "knowledge" are plausibilities. On another note, I completely agree with you that there is a point at which we should suspend judgment: but what is that point? That is yet to be decided! I think we can probably cover that next if you'd like.

    I look forward to your response,
    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,

    I have never been able to discuss this aspect with someone seriously before, as no one has gotten to the point of mostly understanding the first three parts.

    I am glad that we are able to agree and discuss further as well! Honestly, as our discussion has progressed, I have realized more and more that we hold incredibly similar epistemologies: I just didn't initially understand your epistemology correctly.

    Applicably knowing something depends on our context, and while context can also be chosen, the choice of context is limited by our distinctive knowledge. If, for example, I did not have the distinctive knowledge that my friend could lie to me, then I would know the cat was in the room. But, if I had the distinctive knowledge that my friend could lie to me, I could make an induction that it is possible that my friend could be lying to me. Because that is an option I have not tested in application and, due to my circumstance, cannot test even if I wanted to, I must make an induction.

    I think that this is fair enough: we are essentially deriving the most sure (or reasonable) knowledge we can about the situation and, in a sense, it is like a spectrum of knowledge instead of concrete, absolute knowledge. However, with that being said, I think that a relative, spectrum-like epistemology (and I think both our epistemologies could be characterized as such) does not account for when we should simply suspend judgement. You see, if we are always simply determining which is more cogent, then we are never determining whether the most cogent form is really cogent enough, within the context, to be worth holding as knowledge in the first place.

    Arguably, I think we applicably know few things. The greater your distinctive knowledge and more specific the context, the more difficult it becomes to applicably know something.

    I completely agree. Furthermore, I also understand your reference to 1984 and how vocabulary greatly affects what one can or cannot know with their context because, I would say, their vocabulary greatly determines their context in the first place.

    I think before I get into your response to the cat example I need to elaborate on a couple of things first. Although I think that your hierarchical inductions are a good start, upon further reflection, I don't think they are quite adequate. Let me try to explain what I mean.

    Firstly, let's take probabilistic inductions. Probability is not, in itself, necessarily an induction. It is just like all of mathematics: math is either deduced or, thereupon, induced. For example, in terms of math, imagine I am in an empty room where I only have the ability to count on my fingers and, let's say, I haven't experienced anything, in terms of numbers, that exceeded 10. Now, therefrom, I could count up to 10 on my fingers (I understand I am overly simplifying this, but bear with me). I could then believe that there is such a thing as "10 things" or "10 numbers" and apply that to reality without contradiction: this is a deduction. Thereafter, I could induce that I could, theoretically, add 10 fingers' worth of "things" 10 times and, therefore, have 100 things. Now, so far in this example, I have no discrete experiences of 100 things but determined that I know 100 "fingers" could exist. So logically, as of now, 100 only exists in terms of a mathematical induction, whereas 10 exists in terms of a deduction. I would say the same thing is true for probability. Imagine I am in a room that is completely empty apart from me and a deck of 52 cards. Firstly, I can deductively know that there are 52 "things". Secondly, I could deductively know an idea of "randomness" and apply that without contradiction as well. Thirdly, I could deductively know that, at "random", me choosing a king out of the deck is a chance of 4/52 and apply that without contradiction (I could, thereafter, play out this scenario ad infinitum, where I pick a card out of a "randomly" shuffled deck, and my results would slowly even out to 4/52). All of this, thus far, is deductive: created out of the application of beliefs towards reality in the same way as your sheep example. Now, where induction, I would say, actually comes into play, in terms of probability, is in an extrapolation of that probabilistic application.
For example, let's take that 52-card scenario I outlined above: I could then, without ever discretely experiencing 100 cards, induce that the probability of picking 1 specific card out of 100 is 1/100. Moreover, I could extrapolate that knowledge, which was deduced, of 4/52 and utilize it to show whether something is "highly probable" or "highly improbable" or something in between: this would also be an induction. For example, if I have 3 cards, two of which are aces and one of which is a king, I could extrapolate that it is "highly probable" that I will randomly pick an ace out of the three, because of my deduced knowledge that the probability of picking an ace is 2/3 in this case. My point is that I view your "probabilistic inductions" as really pointing towards "mathematical inductions", which do not entirely engross probability. Your 52-card deck example in the essays is actually a deduction and not an induction.
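The "play this scenario out ad infinitum" remark can be simulated directly (a sketch of my own, assuming a standard 52-card deck; `king_frequency` is an invented name): the empirical frequency of a king on top of a freshly shuffled deck drifts toward the deduced 4/52.

```python
import random

RANKS = 13                     # 13 ranks across 4 suits
DECK = list(range(RANKS)) * 4  # 52 cards; rank 12 stands in for "king"
KING = 12

def king_frequency(trials: int, seed: int = 0) -> float:
    """Empirical frequency of drawing a king from the top of a
    freshly shuffled deck, over many independent trials."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        deck = DECK[:]
        rng.shuffle(deck)
        if deck[0] == KING:
            hits += 1
    return hits / trials

print(f"deduced:   {4 / 52:.4f}")                  # 0.0769
print(f"simulated: {king_frequency(50_000):.4f}")  # settles near 0.0769
```

Each finite run only approximates 4/52; it is the extrapolation that the frequency would "even out" in the limit that goes beyond what any finite application can deduce.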

    Secondly, I think that probabilistic inductions and plausible inductions are not always directly comparable. To be more specific, a probabilistic "fact" (whether deduced or induced) is comparable to plausible inductions and, in that sense, I think you are right to place the former above the latter; however, I do not think that "extended" probabilistic claims are always comparable to plausible inductions. For example, let's say that there is a tree near my house, which I can't see from where I am writing this, that I walk past quite frequently. Let's also say that I have three cards in front of me, two of which are aces. Now, I would say that the "fact" that the probability of me randomly picking an ace is 2/3 is "surer" (a more cogent form of knowledge) than any claim I could make in terms of an inapplicable plausibility induction towards the tree still being where it was the last time I walked past it (let's assume I can't quickly go look right now). However, if I were to ask myself "are you surer that you will pick an ace out of these three cards or that the tree is still intact", or, as I think you would put it, "is it more cogent to claim I will pick an ace out of these three cards or that the tree is still intact", I am now extending my previous "fact" (2/3) into an "extended", contextual claim that weighs the "highly probableness" of picking an ace out of three cards against the plausibility of the tree still being intact. These are, as stated in the previous sentence, two completely incompatible types of claims and, therefore, one must be "converted" into the other for comparison. To claim something is probable is purely a mathematical game, whereas "plausible" directly entails means of evidence other than math (I may have walked by the tree yesterday, there may have been no storms the previous night, and I may have other reasons to believe it "highly implausible" that someone planned a heist to remove the tree).
In this example, although I may colloquially ask myself "what are the odds that someone moved the tree", I can't actually convert the intactness of the tree into pure probability: it is plausibility. I think this shows that probability and plausibility, in terms of "extended" knowledge claims stemming from probability, are not completely hierarchical.

    Thirdly, building off of the previous paragraph, even though they are not necessarily comparable in the sense of probability, they can be compared in terms of immediateness (or discrete experiential knowledge--applicable and distinctive knowledge): "probabilistic deductions" are "surer" (or more cogent) than "plausible inductions", but "probabilistic inductions" are not necessarily "surer" than "plausible inductions" (they are only necessarily surer if we are talking about the "fact" and not an "extension"). Let's take my previous example of the tree and the 3 cards, but let's say it's 1000 cards and one of them is an ace: I think I am "surer" that the tree is still there, although that is an argument made solely from an inapplicable plausible induction (as I haven't actually calculated the probability, nor do I have, in this scenario, the ability to go discretely experience the tree), than that I will get an ace from those 1000 cards. However, I am always "surer" that the probability of getting an ace out of 1000 cards is 1/1000 (given there's only one) than of the intactness of the tree (again, assuming I can't discretely experience it as of now). Now I may have just misunderstood your fourth essay a bit, but I think that your essay directly implies that the hierarchy, based off of proximity toward deductions, is always in that order of cogency. However, I think that sometimes the "extension" of probability is actually less cogent than a plausible induction.

    I was going to say much more, and elaborate much more, but this is becoming entirely too long. So I will leave you with my conclusion: the cogency (or "sureness", as I put it) of knowledge is not, at its most fundamental level, about which kind of induction the given claim stems from, but more about the degree of immediateness to the "I". You see, the probabilistic "fact" of picking an ace out of three cards (two of which are aces: 2/3) is "surer", or more cogent, because it is very immediate to the "I" (it is a deduction directly applied to discrete experiences). The probabilistic "extension" claim, built off of a mathematical deduction in this case (though it could be an induction if we greatly increased the numbers), that I am "surer" of getting an ace out of three cards (2/3), is actually less cogent (or less "sure") a claim than "the tree is intact" because, in this example, the tree's intactness is more immediate than the picked card turning out to be an ace. Sure, I know it is a 2/3 probability, but I could get the one card that isn't an ace, whereas, given that I walked past the tree yesterday (coupled with, let's say, a relatively strong case for the tree being there--like there wasn't a tornado that rolled through the day before), the "sureness" is much greater; I have a lot of discrete experiences that lead me to conclude that it is "highly plausible" (note that "highly probable" would require a conversion I don't think possible in this case) that the tree is still there. Everything, the way I see it, is based off of immediateness, but it gets complicated really fast. Imagine that I didn't have an incredibly strong case for the tree still being there (say I walked past it three weeks ago and there was a strong storm two weeks ago); then it is entirely possible, given an incredible amount of analysis, that the "sureness" would reverse.
As you have elegantly pointed out in your epistemology, this is expected, as it is all within context (and context, I would argue, is incredibly complicated and enormous).

    I will leave it at that for now, as this is getting much longer than I expected, so, I apologize, I will address your response to the cat example once we hash this out first (as I think it is important).

    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,

    I think that the first issue I am pondering is the fact that neither of our epistemologies, as discussed hitherto, really clarifies when a person "knows", "believes", or "thinks" something is true. Typically, as you are probably well aware, knowledge is considered with more intensity (thereby requiring a burden of proof), whereas belief and simply "thinking" something is true, although they can have evidence, do not require any burden of proof at all. Your epistemology, as I understand it, considers a person to "know" something if they can apply it to "reality" without contradiction (i.e. applicable knowledge)--which I think doesn't entirely work. For example, I could claim that I "know" that my cat is in the kitchen with no further evidence than simply stating that the claim doesn't contradict my "reality" or that of anyone else in the room. Hypothetically, let's say I (and all the other people) am far away from the kitchen and so we cannot definitively verify the claim: do I "know" that my cat is in the kitchen? If so, it seems to be an incredibly "weak" type of knowledge, to the point that it may be better considered a "belief" or maybe merely a theory (in the colloquial sense of the term, not the scientific one) (i.e. I "think").

    Likewise, we could take this a step further: let's say that I, and everyone else in the room, get on a phone call with someone who is allegedly in that very kitchen we don't have access to (in which I am claiming the cat to reside), and that person states (through the phone call) that the cat is not in the kitchen. Do I now "know" that the cat is not in the kitchen? Do I "know" that that person actually checked the kitchen and didn't just make it up? Do I "know" that that even was a person I was talking to? I heard a voice, which I assigned to a familiar old friend of mine whom I trust, but I am extrapolating (inducing) that it really was that person and, thereafter, further inducing off of that induction that that person actually checked and, moreover, that they checked in an honest manner: it seems as though there is a hierarchy even within claims of knowledge themselves. I think that your hierarchy of inductions is a step in the right direction, but what is a justified claim of knowledge? I don't think it would be an irrational induction to induce that the person calling me is (1) the old, trustworthy friend I am assigning the voice to and (2) someone who genuinely didn't discretely experience the cat being in the kitchen, but am I really justified? If so, is this form of "knowledge" just as "strong", so to speak, as claiming to "know" that the cat isn't in the room I am in right now? Is it just as "strong" as claiming to "know" that the cat isn't in the room I was previously in, where I have good reason to believe the cat couldn't have traveled that far and snuck its way in? And do I really "know" the cat didn't find its way into the kitchen (which is quite a distance away from me, let's say in a different country or something)?

    Another great example I have been pondering is this: do I "know" that a whale is the largest mammal on earth? I certainly haven't discretely experienced one, and I most certainly haven't measured all the animals, let alone any one of them, on this earth. So, how am I justified in claiming to "know" it? Sure, applying my belief that a whale is the largest mammal doesn't contradict my "reality", but does that really constitute "knowledge"? In reality, I simply searched it and trusted the search results. This seems, at best, to be a much "weaker" form of knowledge (of some sort, I am not entirely sure).

    I think after defining the personal context and even the general societal context of claims, as you did in your essays, and even after discussing hierarchical inductions, I am still left with quite a complexity of problems to resolve in terms of what is knowledge.

    Bob
  • A Methodology of Knowledge
    @Philosophim,

    I see now! I now understand your epistemology to be the application of deductions, or of inductions that vary by degree of cogency, within a context (scope), which I completely agree with. This kind of epistemology, as I understand it, heavily revolves around the subject (but not in terms of simply what one can conceive, of course) and not whatever "objective reality", or the things-in-themselves, may be: I agree with this assessment. For the most part, in light of this, I think that your brief responses were more than adequate to negate most of my points. So I will generally respond (and comment) on some parts I think worth mentioning and, after that, I will build upon my newly acquired understanding of your view (although I bet I do not, without a doubt, completely understand it).

    Instead of the word "error" I would like to use "difference/limitations". But you are right about perfectly inaccurate eyes being as blind as eyes which are able to see into the quantum realm, if they are trying to observe within the context of normal healthy eyes. Another contextual viewpoint is "zoom". Zoom out and you can see the cup. Zoom in on one specific portion and you no longer see the cup, but the elements that portion of the cup is made from.

    I agree: it is only "error" if we deem it to be "wrong" but, within context, it is "right".

    Contradictions of applicable knowledge can never be cogent within a particular context.

    In light of context, I agree: I was attempting to demonstrate contradictions within all contexts, which we both understand and accept as perfectly fine. On a side note, I also agree with your assessment of Theseus' ship.

    Recall that the separation of "this" and "that" is not an induction in itself, just a discrete experience. It is only an induction when it makes claims about reality. I can imagine a magical unicorn in my head. That is not an induction. If I believe a magical unicorn exists in reality, that is a belief, and now an induction.

    Upon further reflection, I think that I was wrong in stating that differentiation is an "ingrained induction"; I think the only example of "ingrained inductions" is, at its most fundamental level, Hume's problem of induction. That is what I was really meaning by my gravity example, although I was wrongly stating it as induction itself, that I induce that an object will fall the next time I drop it. This is a pure induction and, I would argue, is ingrained in us (and I think you would agree with me on that). After thinking some more, I have come to the conclusion that I am really not considering differentiation an "ingrained induction" but, rather, an assumption (an axiom to be more specific). I am accepting, and I would argue we all are accepting, the principle of noncontradiction as a metalogical principle, a logical axiom, upon which we determine something to either be or not be. However, as you are probably aware, we cannot "escape", so to speak, the principle of noncontradiction to prove or disprove the principle of noncontradiction, just like how we are in no position to prove or disprove the principle of sufficient reason or the principle of the excluded middle. You see, fundamentally, I think that your epistemology stems from "meaningfulness" with respect to the subject (and, thereafter, multiple subjects) and, therefrom, you utilize the most fundamental axiom of them all: the principle of noncontradiction as a means towards "meaningfulness". It isn't that we are right in applying things within context of a particular, it is that it is "meaningful" for the subject, and potentially subjects, to do so and, therefore, it is "right". This is why I don't think you are, in its most fundamental sense, proving any kind of epistemology grounded on absolute grounds but, rather, you are determining it off of "meaningfulness" on metalogical principles (or logical axioms). 
You see, this is why I think a justified, true belief (and subsequently classical epistemology) has been so incomplete for such a long time: it attempts to reach an absolute form of epistemology, wherein the subject can finally claim their definitive use of the term "know", whereas I think that to be fundamentally misguided: everything is in terms of relevancy to the subject and, therefore, I find that relevancy ties directly to relative scope (or context, as you put it) (meaningfulness).

    I also apply this (and I think you are too) to memories: I don't think that we "know" any given memory to truly be a stored experience but, rather, I think that all that matters is its relevance to the subject. So if that memory, regardless of whether it got injected into their brain 2 seconds ago or is just a complete figment of the imagination, is relevant (meaningful) to the subject as of "now", then it is "correct" enough to be considered a "memory" for me! If, on the contrary, it contradicts the subject as of "now", then it should be discarded, because the memory is not as immediate as experience itself. I now see that we agree much more than I originally thought!

    I would also apply this in the same manner to the hallucinated "real" world and the real world example I originally invoked (way back when (: ). For me, since it is relative to context, if the context is completely limited to the hallucinated "real" world, then, for me, that is the real world. Consequently, what I can or cannot know, in that example, would be directly tied to what, in hindsight, we know to be factually false; however, the knowledge, assuming it abides by the most fundamental logical axiom (principle of noncontradiction), is "right" within my context. Just like the "cup" and "table" example, we only have a contradiction within multiple "contexts", which I am perfectly fine with. With that being said, I do wonder if it is possible to resolve the axiomatic nature of the principle of noncontradiction, because I don't like assuming things.

    Furthermore, in light of our epistemologies aligning much better than I originally thought, I think that your papers only thoroughly address the immediate forms of knowledge (i.e. your depiction of discrete experiences, memories, and personal context is very substantive), but do not fully address what comes thereafter. They get into what I would call mediate forms of knowledge (i.e. group contexts and the induction hierarchies) in a general sense, branching out a bit past the immediate forms, but I think that there's much more to discuss (I also think that there's a fundamental question of when, even in a personal context, hierarchical inductions stretch too far to have any relevancy). This is also exactly what I have been pondering in terms of my epistemology as well, so, if you would like, we could explore that.

    I look forward to hearing from you,
    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,

    I agree: I think that we both understand each other's definition of "I" and that I have not adequately shown the relevance of my use of "I". Furthermore, I greatly appreciate your well thought-out replies, as they have helped me understand your papers better! In light of this, I think we should progress our conversation and, in the event that it does become pertinent, I will not hesitate to demonstrate the significance (and, who knows, maybe, as the discussion progresses, those differences dissolve themselves). Until then, I think that our mere recognition of each other's differences of terminology (and the underlying meanings) will suffice. To progress our conversation, I have re-read your writings a couple of times over (which does not in any way reflect any kind of mastery of the text) and I have attempted to assess them better. Moreover, I would like to briefly cover some main points and, thereafter, allow you to decide what you would like to discuss next. Again, these points are going to be incredibly brief, only serving as an introduction, so as to allow you to determine, given your mastery (naturally) of your own writings, what we ought to discuss next. Without further ado, here they are:

    Point 1: Differentiation is a product of error.

    When I see a cup, it is the error of my perception. If I could see more accurately, I would see atoms, or protons/neutrons/electrons or what have you, and, thereby, the distinction of the cup from the air surrounding it would become less and less clear. Perfectly accurate eyes are just as blind as perfectly inaccurate eyes: differentiation only occurs somewhere in between those two possibilities. Therefore, a lot of beliefs are both applicable knowledge and not applicable knowledge: it is relative to the scope. For example, the "cup" is a meaningful distinction, but it is contradicted by reality: the more accurately we see, or sense in general, the more the concept of a "cup" contradicts it. Therefore, since it technically contradicts reality, it is not applicable knowledge. However, within the relative scope of, let's say, a cup on a table, it is meaningful to distinguish the two even though, in "reality", they are really only distinguishable within the context of an erroneous eyeball.

    Point 2: Contradictions can be cogent.

    Building off of point 1, here's an example of a reasonable contradiction:
    1. There are two objects, a cup and a table, which are completely distinct with respect to every property that is initially discretely experienced
    2. Person A claims the cup and table to be separate concepts (defining one as a 'cup' and the other as a 'table')
    3. Person B claims that the cup and the table are the same thing.
    4. Person A claims that Person B has a belief that contradicts reality and demonstrates it by pointing out the glaring distinctions between a cup and a table (and, thereby, the contradictions of them being the same thing).
    5. Person B argues that the cup and table are atoms, or electrons/protons/neutrons or what have you, and, therefore, the distinction between the cup and the table is derived from Person A's error of perception.
    6. In light of this, and even in acknowledgement of it, Person A still claims there is a 'cup' and a 'table'.
    7. Person A now holds two contradictory ideas (the "cup" and "table" are different, yet fundamentally they are not different in that manner at all): the lines between a 'cup' and a 'table' arise out of the falseness of Person A's discrete experiences.
    8. Person B claims that Person A, in light of #7, holds a belief that is contradicted by reality and that Person A holds two contradictory ideas.

    Despite Person A's belief contradicting reality, it is still cogent because, within the relative scope of their perceptions, there is a meaningful distinction between a 'cup' and a 'table', albeit only relative to reality within that scope. Also, Person A can reasonably hold both positions, even though they negate one another, because the erroneous nature of their existence produces meaningful distinctions that directly contradict reality. In this instance, there is no problem with a person holding (1) a belief that contradicts reality and (2) two contradictory, competing views of reality.

    Point 3: Accidental and essential properties are one and the same

    Building off of points 1 and 2, the distinction between an accidental and an essential property seems only to be a difference of scope. I think this is the right time to invoke the Ship of Theseus (which you briefly mention in the original post in this forum). When does a sheep stop being a sheep? Or a female stop being a female? Or an orange stop being an orange?

    Point 4: The unmentioned 5th type of induction

    There is another type of induction: "ingrained induction". You have a great example of this that you briefly discuss in the fourth essay: Hume's problem of induction. Another example is that the subject has to induce that "this" is separate from "that", but it is an ingrained, fundamental induction. The properties and characteristics that are a part of discrete experience do not in themselves prove in any way that they are truly differentiating factors: the table and the chair could, in reality, be two representations of the same thing, analogous to two very different looking representations of the same table produced directly by different angles of perspective. We have to induce that the use of these properties and characteristics (such as light, depth, size, quantity, shape, color, texture, etc.) are reasonable enough differentiating factors to determine "this" as separate from "that". For example, we could induce that, given the meaningfulness of making such distinctions, we are valid enough in assuming they are, indeed, differentiating factors. Or we could shift the focus and claim that we don't really care whether, objectively speaking, they are valid differentiating factors but, rather, that the meaningfulness is enough.

    Point 5: Deductions are induced

    Building off of point 4, "ingrained induction" is utilized to gather any imaginable kind of deductive principle: without it, you can't have deductions. This directly implies that it is not completely the case that deductions are what one should try to anchor inductions to (in terms of your hierarchical structure). For example, the fact of gravity (not considering the theory or law), which is an induction anchored solely to the "ingrained induction", is a far "surer" belief, so to speak, than the deductive principle of what defines a mammal. If I had to bet on one, I would bet on the continuation of gravity and not the continuation of the term "mammal" as it has been defined: there are always incredible gray areas when it comes to deductive principles and, on some occasions, they can become so ambiguous that they require refurbishment.

    Point 6: Induction of possibility is not always cogent

    You argue in the fourth essay that possibility inductions are cogent: this is not always the case. For example:

    A possibility is cogent because it relies on previous applicable knowledge. It is not inventing a belief about reality which has never been applicably known.

    1. You poofed into existence 2 seconds ago
    2. You have extremely vivid memories (great in number) of discretely experiencing iron floating on water
    3. From #2, you have previous applicable knowledge of iron floating on water
    4. Since you have previous applicable knowledge of iron floating on water, then iron floating on water is possible.
    5. We know iron floating on water is not possible
    6. Not all inductive possibilities are cogent

    Yes, you could test to see if iron can float, but, unfortunately, just because one remembers something occurring doesn't mean it is possible at all: your term "applicable knowledge" does not take this into account; only the subsequent plausibility inductions make this sub-distinction.

    Point 7: the "I" and the other "I"s are not used equivocally

    Here's where the ternary distinction comes into play: you cannot prove other "I"s to be a discrete experiencer in a holistic sense, synonymous with the subject as a discrete experiencer, but only a particular subrange of it. You can't prove someone else to be "primitively aware", and consequently "experience", but only that they have the necessary processes that differentiate. In other words, you can prove that they differentiate, not that they are primitively aware of the separation of "this" from "that".

    Hopefully those points are a good starting point. I think I hit on a lot of different topics, so I will let you decide what to do from here. We can go point-by-point, all points at once, or none of the points if you have something you would like to discuss first.

    I look forward to hearing from you,
    Bob
  • A Methodology of Knowledge
    Hello @Philosophim,

    Thank you for the clarification! I see now that we have pinpointed our disagreement, and I will attempt to describe it as accurately as I can. Your definition of "experience" is something that I disagree with on many different accounts; hopefully I can explain adequately hereafter.

    Firstly, I apologize: I should have defined the term "awareness" much earlier than this, but your last post seems to be implying something entirely different than what I was meaning to say by "awareness". I am talking about an "awareness" completely separate from the idea of whether I am aware of (recognize) my own awareness (sorry for the word salad here). For example, when you say:

    It is irrelevant if a being that discretely experience realizes they are doing this, or not. They will do so regardless of what anyone says or believes.

    You are 100% correct. I do not need to recognize that I am differentiating the letters on my keyboard from the keyboard itself: the mere differentiation is what counts. But this, I would argue, is a recognition of your "awareness" (aka awareness of one's awareness), not awareness itself. So instead, I would say that I don't need to be aware (or recognize) that I am aware of the differentiation of the letters on my keyboard from the keyboard itself: all that must occur is the fundamental recognition (awareness) that there even is differentiation in the first place. To elaborate further, when you say:

    A discrete experiencer has the ability to create some type of identity, to formulate a notion that "this" is separate from "that" over there within this undefined flood.

    I think you are wrong: "I" am not differentiating (separating "this" from "that"); something is differentiating the undefined flood, and "I" recognize the already differentiated objects (this is "awareness" as I mean it). To make it less confusing, I will distinguish awareness of awareness (i.e. defining terms or generally realizing that I am aware) from mere awareness (the fundamental aspect of existence) by defining the former as "sophisticated awareness" and the latter as "primitive awareness". In light of this, I think that when you say:

    If I discretely experience that I feel pain, I feel pain. Its undeniable by anything in existence, because it is existence itself...Again, a discrete experiencer does not have to realize that their act of discretely experiencing, is discrete experiencing. Discrete experience is not really a belief, or really knowledge in the classical sense.

    I think you are arguing that one doesn't have to be aware (recognize) that they are aware of the products of these processes ("sophisticated awareness") to be able to discretely experience, which would be directly synonymous with what I think you would claim to be our realization of our discrete experiences. This is 100% true, but this doesn't mean that we don't, first and foremost, require awareness ("primitive awareness") of those processes. The problem is that it is too complicated to come up with a great example of this, for if I asked you to imagine that "you" didn't see the keys on your keyboard as separate from one another, then you would say that, in the absence of that differentiation, "your" "discrete experiences" would lack that specific separation. But I am trying to go a step deeper than that: the differentiation (whether the key is separated from the keyboard or it is one unified blob) is not "you", because that process of differentiating is just as foreign, at least initially, as the inner workings of your hands. "You" initially have nothing but this differentiation (in terms of perception) to "play with", so to speak, as it is a completely foreign process to "you". What isn't a foreign process to "you" is the "thing" that is "primitively aware" of the distinction of "this" from "that" (aka "you") but, more importantly, "you" didn't differentiate "this" from "that": it is just there.

    To make it clearer, when you describe experience in this manner:

    Experience is your sum total of existence.
    At a primitive level it is pain or pleasure. The beating of something in your neck. Hunger, satiation. It is not contradicted by existence, because it is the existence of the being itself.

    I think you are wrong in a sense. Your second quote here, in my opinion, refers directly to the products of the processes, which cannot be "experienced" if one is not "primitively aware" of them. I'm fine with saying that "experience" initially precedes definition (or potentially that it even always precedes definition), but I think the fundamental aspect of existence is "primitive awareness". If the beating of something in your neck, which is initially just as foreign to you as your internal organs, weren't something that you were "primitively aware" of, then it would slip your grasp (metaphorically speaking). With respect to the first quote here, I don't think that my "primitive awareness", although it is the fundamental aspect of existence, is the sum total of all existence: the representations (the products of the processes), the processes themselves, and the "primitive awareness" of them are naturally tri-dependent. However, and this is why I think "primitive awareness" is the fundamental aspect, it is not an equal tri-dependency: for if the "primitive awareness" is removed, then the processes live on, but "I" am thereby removed. Furthermore, the processes themselves are never initially known at all (which I agree with you on), but only their products, and, naturally I would say, they are, in turn, useless if "I" am not "primitively aware" of them. So it is like a ternary distinction, but not an equal ternary distinction in terms of immediateness (or precedence) to the "I". For example, if something (the processes) weren't differentiating the keys on my keyboard, then I would not, within my most fundamental existence, "experience" the keys on a keyboard. On the contrary, I see no reason to believe that, in the event that I was no longer "primitively aware" of the differentiation between the keys and the keyboard (which I did not partake in and which is, initially, just as foreign to me as the feeling of pain), the processes wouldn't persist. What I am saying is that "experience" is "primitive awareness", and it depends on the products of the processes to "experience" anything (and, upon further reflection and subsequently not initially, the processes themselves).

    Now, I totally understand that the subject, initially speaking, is not (and will not be) aware of my terminology, but just as they don't have to be aware of your term "discrete experience" to discretely experience, so too, I would argue, they don't have to be aware of my term "primitive awareness" to "experience".

    In other words, when you say:
    Experience is your sum total of existence. At first, this is undefined. It precedes definition.

    I agree that discrete experiences, in terms of the products of the processes, are initially undefined, but the "primitive awareness" is not. You don't have to know what a 'K' means on your keyboard to know that you are aware that there is something in a 'K' shape being differentiated from another thing (which we would later call a keyboard). This is why the "primitive awareness" is more fundamental than the products of the processes: you don't have to make any sense of the perceptions themselves to be immediately "primitively aware" of those perceptions.

    And, lastly, I would like to point something out that doesn't pertain to the root of our discussion:
    In questioning the idea of being able to discretely experience I wondered, are the discrete experiences we make "correct"? And by "correct" it seems, "Is an ability to discretely experience contradicted by reality?" No, because the discrete experience, is the close examination of "experience"

    I would not count this as a real proof: that discretely experiencing doesn't contradict reality and, therefore, is "correct". I understand what you are saying, but fundamentally you are comparing the thing to itself. You are really asking: "Is the ability to discretely experience contradicted by discretely experiencing?" You are asking, in an effort to derive an orange, "does an orange contradict an orange?". You set criteria (that it can't contradict reality) and then define it in a way where it is reality, so basically you are asking "does reality contradict reality?". Don't get me wrong, I would agree that we should define "correct" as what aligns with experience, but that is an axiom, not a proof. It is entirely possible that experience is completely wrong due to the representations being completely wrong and, more importantly, you can't prove the most fundamental by comparing it to itself. I think my critique is what you are sort of trying to get at, but I don't see the use in asking the question when it is circular: it is taken up, at this point, as an axiom, but your line of reasoning here leads me to believe that you may be implying that you proved it to be the case.

    In light of all I have said thus far, do you disagree with my assessment of "experience"?

    I look forward to hearing back from you,
    Bob
  • A Methodology of Knowledge
    @Philosophim

    I am glad that we have reached an agreement! I completely understand that views change over time and that we don't always refurbish our writings to reflect that. Now I think we can move on. Although this may not seem like much progression, in light of our definition issues, I would like you to define "experience" for me, as this definition greatly determines what your argument is making. For example, if the processes that feed the "I" and the "I" itself are considered integrated and, therefore, synonymous, then I think your "discrete experiencer" argument directly implies that one can have knowledge without being aware of it (and that the processes and the "I" that witnesses those processes are the same thing). On the contrary, if the processes that feed the "I" and the "I" itself are, to any degree, no matter how minute, distinguished, then I think you are acknowledging that, to whatever degree, awareness is an aspect of knowledge. If neither of those two best describes your definition of "experience", then I fear that you may be using the term in an ambiguous way that integrates the processes (i.e. perception, thought, etc.) with the "I" without necessarily claiming them to be synonymous (which would require further clarification, as I don't think it makes sense without such). If nothing I have explained here applies to your definition of "experience", then that is exactly why I would like you to define it in your own words.
    Bob