• Philosophim
    Thank you for the wait Bob. I wanted to make sure I answered you fully and fairly.

    you stated (in, I believe, your first essay): “In recognizing a self, I am able to create two “experiences”. That is the self-recognized thinker, and everything else.” I think that this is the intuitive thing to do, but it is only an incredibly general description and, therefore, doesn’t go deep enough for me. – Bob Ross

    It is fine if you believe this is too basic, but that is because I must start basic to build fundamentals. At this point in the argument, I am a person who knows of no other yet.

    There are three distinctions to be made, not simply “I” and “everything else”: the interpreter, the interpretations (representations), and self-consciousness. – Bob Ross

    I have no objection to discussing these subdivisions of the "I" later. At the beginning though, it is important to examine this from the perspective of a person who is coming into knowledge of themselves for the first time. A "rough draft" if you will. Can we say this person has knowledge in accordance with the definitions and the logic shown here, not the definitions another human being could create? I needed to show you the discrete knowledge of what an "I" was, which is essentially a discrete experiencer. Then I needed to show you how I could applicably know what an "I" was, which I was able to do.

    To this end, we can say that a discrete experiencer does not need the addition of other definitions like consciousness for the theory to prove itself through its own proposals.

    it would translate (I think) into a discrete experiencer (self-consciousness and the interpreter joined into one concept) – Bob Ross

    I think this is a fine assessment. We can make whatever definitions and concepts we want. That is our own personal knowledge. I am looking at a blade of grass, while you are creating two other identities within the blade of grass. There is nothing wrong with either of us creating these identities. The question is, can we apply them to reality without contradiction? What can be discretely known is not up for debate. What can be applicably known is.

    That is why I define an “experience” as a witnessing of immediate knowledge (the process of thinking, perception, and emotion) by means of rudimentary reason, and a “remembrance” (or memory as you put it in subsequent essays) as seemingly stored experiences. – Bob Ross

    This is a great example of when two people with different contexts share their discrete knowledge. I go over that in part 3 if you want a quick review. We have several options. We could accept, amend, reinterpret, or reject each other's definitions. I point this out for the purposes of understanding the theory, because I will be using the theory, to prove the theory.

    What I will propose at this point is that your definitional additions are a fine discussion to have after the theory is understood. The question is, within the definitions I have laid out, can I show that I can discretely know? Can I show that I can applicably know this? You want to discuss the concept of the square root of four, while I want to first focus on the number 2. You are correct in wanting to discuss the square root of four, but we really can't fully understand that until we understand the number 2.

    Back to context. If you reject, amend, or disagree with my definitions, we cannot come to an agreement of definitive knowledge within our contexts. This would not deny the theory, but lend credence to at least its proposals about distinctive knowledge conflicts within context. But, because I know you're a great philosopher, for now, please accept the definitions I'm using, and the way I apply it. Please feel free to point out contradictions in my discrete knowledge, or misapplications of it. I promise this is not some lame attempt to avoid the discussion or your points. This is to make sure we are at the core of the theory.

    To recap: An "I" is defined as a discrete experiencer. That is it. You can add more, that's fine. But the definition I'm using, the distinctive knowledge I'm using, is merely that. Can I apply that discrete knowledge to reality without contradiction? Yes. In fact, it would be a contradiction for me to say I am not a discrete experiencer. As such I applicably know "I" am a discrete experiencer. Feel free to try to take the setup above, and using the definitions provided, point out where I am wrong. And at risk of over repeating myself, the forbiddance of introducing new discrete knowledge at this point is not meant to avoid conversation, it is meant to discover fundamentals.

    It’s kind of like how some animals can’t even recognize themselves in a mirror: I would argue that they do not have any knowledge if (and it’s a big if in this case) they are not self-conscious. Yes, they have knowledge in the sense that their body will react to external stimuli, but that isn’t really knowledge (in my opinion), as removing self-consciousness directly removes “me” (or “I”) from the equation, and that is all that is relevant to "me". – Bob Ross

    Would an animal be an "I" under the primitive fundamental I've proposed and applicably know? If an "I" is a discrete experiencer, then I have to show an animal is a discrete experiencer without contradiction in reality. If an animal can discern between two separate things, then it is an "I" as well. Now I understand that doesn't match your definition for your "I". Which is fine. We could add in the definition of "consciousness" as a later debate. The point is, I've created a definition, and I've applied it to reality to applicably know it.

    Thus as a fundamental, this stands within my personal context. I note in part 3 how limitations on discrete knowledge can result in broad applications for certain contexts that ignore detail in other contexts. It is not the application of this distinctive context to reality that is wrong, it is a debate as to whether the discrete knowledge is detailed enough, or defined the way we wish. But for the single person without context, if they have defined "I" in this way, this is the only thing they could deduce in their application of that definition to reality.

    As an example, let's take your sheep example: what if that entire concept that you derived a deductive principle from (namely the tenets that constitute a sheep) were all a part of a hallucination? – Bob Ross

    I'll repost a section in part 2 where I cover this:

    "What if the 'shep' is a perfectly convincing hologram? My distinction of a sheep up to this point has been purely visual. The only thing which would separate a perfectly convincing hologram from a physical sheep would be other sensory interactions. If I have no distinctive knowledge of alternative sensory attributes of a sheep, such as touch, I cannot use those in my application. As my distinctive knowledge is purely visual, I would still applicably know the “hologram” as a sheep. There is no other deductive belief I could make."

    So for your example, the first thing we must establish is, "How does the hallucinator distinctly know a sheep?" The second is, "Can they apply that distinctive knowledge to reality without contradiction?" When you say a deductive principle, we must be careful. I can form deductions about distinctive knowledge. That does not mean those deductions will be applicably known once applied to reality. Distinctive knowledge can use deduction to predict what can be applicably known. Applicable knowledge itself, cannot be predicted. We can deduce that our distinctive knowledge can be applied to reality without contradiction, when we apply it to reality. But those deductions are based on the distinctive knowledge we personally have, and the deductions we conclude when we apply them to reality.

    To simplify once again, distinctive knowledge consists of the conceptualizations we make without applying them to reality. This involves predictions about reality and imagination. Applicable knowledge is when we attempt to use our conceptualizations in reality without reality contradicting them. With that in mind, come back to the hallucination problem and identify the distinctive knowledge the person has, and then what they are trying to applicably prove.

    Once this fundamental is understood and explored, then I believe you'll see the hierarchy of inductions makes more sense. First we must understand what a deduction is within the system. Within distinctive knowledge, I can deduce that 1+1 = 2. When I apply that to reality, by taking one thing, and combining it with another thing, I can deduce that it is indeed 2 things.

    The most fundamental aspect of our lives (I would argue) is rudimentary reason, which is the most basic (rudimentary) method by which we can derive all other things. – Bob Ross

    True. But I have attempted to define and apply rudimentary reason as a fundamental, and the above paper is what I have concluded. Again, I am not trying to be dismissive of your creativity or your world view in any way! I would love to circle back to those points later. I am purely trying to guide you to the notion that we do not need these extra additions of definitions to learn these fundamentals, nor could we discuss them without first understanding the fundamentals proposed here.
    I also wanted to leave one of your points on the table.

    What if you really snorted a highly potent hallucinogen in the real world and it is so potent that you will never wake up in the real world but, rather, you will die in your hallucinated world once your body dies in the real one? Do you truly have knowledge of the sheep (in the hallucinated world) now, given that the world isn't real? – Bob Ross

    This is one of the best critiques I've seen about the theory. Yes, I have an answer for this, but until the fundamentals are truly understood, I fear this would be confusing. If you understand what I've been trying to say in this response, feel free to go over this proposal again. We are at the point where we are going over addition and subtraction, and here you went into calculus with binary! I will definitely respond to this point once I feel the basics are understood.

    Thank you again Bob, I will be able to answer much quicker now that it is the weekend!
  • Bob Ross
    Hello @Philosophim,
    Thank you for the wait Bob. I wanted to make sure I answered you fully and fairly.

    Absolutely no problem! I would much rather wait for a detailed, thought-provoking response than a quick, malformed (so to speak) one! I thoroughly enjoy reading your replies, so take as much time as you need!

    After reading your response, I think your forbiddance of my terminology is fair enough! So I will do my best to utilize your terminology from now on (so, in advance, I apologize if I misuse it--and please correct me when I do). However, I think there is still, in the absence of my terminology, a fundamental problem with your essays, and so I will try to elaborate hereafter in your terminology as best I can.

    It is fine if you believe this is too basic, but that is because I must start basic to build fundamentals. At this point in the argument, I am a person who knows of no other yet.

    I understand that the intuitive thing to do, for most people, is to start off with "I" vs "everything else", and I have no problem with that, but it seems to me that, as your essays progress, they retain this position, which I, as far as I understand your argument hitherto, believe to be false (and not in terms of later applicable knowledge nor a priori, but in the immediate experiential knowledge that you begin your derivation from). Intuitiveness is not synonymous with immediateness, and I think this is a vital distinction for your proposition. For example, the intuitive thing to do for most people is to separate, as you did, the "I" and "everything else", but that is not the most immediately known thing to the subject. If, in your sheep example, you were an individual with depersonalization disorder, the disassociation of the "I", the process of discretely perceiving the sheep, and the discrete manifestations of "everything else" (including the sheep) would be incredibly self-apparent and discrete--whereas, for a person whose brain associates these concepts very tightly, it would be the subject's first and most fundamental mistake to assume it is really the "I" discretely perceiving and the discrete manifestations of "everything else". This is why, although I have to invoke some applicable knowledge here (since you or I may never have had a disassociation like depersonalization disorder), for a person with such a disassociation, your essay would entirely miss its mark (in my opinion), because they do not share such a basic fundamental distinction as you. However, I would argue that you both actually share a much more fundamental basis than the binary distinction your proposition invokes: the "I", the process of discretely perceiving the sheep, and all discrete manifestations--a ternary distinction.

    This is where I may need some clarification, because I might have misinterpreted what your proposition was. You see, I thought that "I am a discrete experiencer" was a broadened sense of stating "I am a discrete perceiver, thinker, etc". I read your essays as directly implying (by examples such as the sheep) that a specific instance of "I am a discrete experiencer" was "I am a discrete perceiver and thinker"--and I believe this to be false (if that is not what you are implying, then please correct me). This is not the most fundamental basic separation for the subject. Now I understand that a lot of people may initially be in the same boat as you (in that binary distinction) and then derive the ternary distinction I make, but, most importantly, not all (and it would be a false claim). I totally agree with you that we shouldn't utilize a priori or applicable knowledge yet, just the discrete experiences as you put it, but "I am a discrete experiencer", based off of such discrete experiences (which can be more self-apparent to people with psychological disorders), is a false basis for knowledge. Again, I do think there is a lot of applicable knowledge that we could both discuss about this ternary distinction I am making, but for your argument I would not need to invoke any, as discrete experiences are enough. So, I would argue, either the essays need to begin with a more fundamental basis for knowledge (a ternary distinction) or, as the essays progress, they need to morph into a ternary distinction. With that being said, I totally understand (in hindsight) that a lot of my first post was irrelevant to your proposal specifically (I apologize), but I don't agree with deriving the different levels, so to speak, of induction from a basis that isn't the most immediate. Is this me being too technical? Would it overly complicate your proposition? Possibly. This is why I understand if you would like to keep it broad in your proposition, but, in that case, I would only temporarily agree with you in this sense; more importantly, I wouldn't use it as a basis of knowledge, because I think the most fundamental aspect of your epistemology is false. Hopefully that made some sense.

    I think this is a fine assessment. We can make whatever definitions and concepts we want. That is our own personal knowledge. I am looking at a blade of grass, while you are creating two other identities within the blade of grass. There is nothing wrong with either of us creating these identities. The question is, can we apply them to reality without contradiction? What can be discretely known is not up for debate. What can be applicably known is.

    Again, I completely agree that we can derive a priori (and applicable) knowledge that we could both dispute heavily against one another, but I am not trying to utilize any of it: I am disputing what can be discretely known. I am debating whether your proposition is right in its assessment of what is discretely known. I am debating whether your proposition is right in persisting in such a binary distinction all the way to its last conclusion. As another example, psychedelic drugs (that produce extreme disassociations) can also, along with psychological disorders, demonstrate that an individual within your sheep example, in your very shoes, would not base their derivations on a binary distinction. Again, I understand that, although I said I wouldn't invoke applicable knowledge, I just did (in the case that I do not have depersonalization disorder): but for that subject, in your example, your basis would simply be factually incorrect, and if the use of such applicable knowledge is not satisfactory for you, then you could (although I am not advocating you to) produce this disassociation and try the sheep example in real life. Now, I am not trying to be condescending here at all, I am deadly serious and, therefore, I apologize if it seemed a bit condescending--I think this is an important problem, fundamentally, with your derivation in the essays. I could be misunderstanding you completely, and if that is the case then please correct me!

    This is a great example of when two people with different contexts share their discrete knowledge. I go over that in part 3 if you want a quick review. We have several options. We could accept, amend, reinterpret, or reject each other's definitions. I point this out for the purposes of understanding the theory, because I will be using the theory, to prove the theory.

    I think that this is the issue with your derivation: it only works if the individual reading it shares your intuitive binary distinction. I agree generally with your assessment of how to deal with competing discrete experiences, but to say "I am a discrete experiencer" is only correct if you are meaning "experiencer" in the sense that you are witnessing such processes, not in the sense that "I am a discrete perceiver". I think this is a matter of definitions, but an important matter, that I would need your clarification on. In both bases, ternary and binary, it may happen that both conclude the same thing, but the latter would be deriving it from a false premise that can be determined to be false by what is immediately known (regardless of whether it is intuitive or not).

    You want to discuss the concept of the square root of four, while I want to first focus on the number 2.

    I think this analogy (although I could be mistaken) is implying that the binary distinction is what we all begin with and then can later derive a ternary distinction: I don't think that is always the case and, even when it is, it is a false one that only requires those discrete experiences to determine it. In other words, one who does begin with a binary distinction can, by use of those very discrete experiences, determine that their original assertion was wrong and, in fact, it is ternary. Even for a person who doesn't have extreme disassociation, the fact that these processes, which are indeed discrete, get served, so to speak, to the experiencer rather than produced by them is enough to show that it is not a binary distinction. Again, I do understand that, for most, it is intuitive to start off with a binary distinction, but applicable and a priori knowledge is not required to realize that it is a false conclusion. I would agree that my assessment is more complicated, which is directly analogous to your analogy here. However, numbers (like the number 2) are first required to understand square roots, which is not analogous to what I am at least trying to say. Again, just as intuitive a binary distinction may be to you, so is a ternary distinction for a patient with depersonalization disorder (or even some that have never had a disorder of any kind)--and they simply won't agree with you on this (and it is a dispute about the fundamental discrete experiences and not simply applicable knowledge).

    But, because I know you're a great philosopher, for now, please accept the definitions I'm using, and the way I apply it. Please feel free to point out contradictions in my discrete knowledge, or misapplications of it. I promise this is not some lame attempt to avoid the discussion or your points. This is to make sure we are at the core of the theory.

    Fair enough! I totally understand that I got ahead of myself with my first post. Hopefully I did a somewhat better job of directly addressing your OP. If not, please correct me!

    To recap: An "I" is defined as a discrete experiencer. That is it. You can add more, that's fine.

    Again, it depends entirely on what you mean by "experiencer" whether this is always true for the subject reading your papers. If you mean, in terms of a specific example, that the "I" encompasses the idea that it is a discrete perceiver, then I think this is wrong, and it can be immediately known without a priori knowledge, regardless of whether I am personally in a state of mind that directs me towards an initial binary distinction.

    And at risk of over repeating myself, the forbiddance of introducing new discrete knowledge at this point is not meant to avoid conversation, it is meant to discover fundamentals.

    Absolutely fair! However, hopefully I have demonstrated that I am disputing whether the binary distinction even is a fundamental or not.

    Would an animal be an "I" under the primitive fundamental I've proposed and applicably know? If an "I" is a discrete experiencer, then I have to show an animal is a discrete experiencer without contradiction in reality. If an animal can discern between two separate things, then it is an "I" as well. Now I understand that doesn't match your definition for your "I". Which is fine. We could add in the definition of "consciousness" as a later debate. The point is, I've created a definition, and I've applied it to reality to applicably know it.

    Firstly, and I am not trying to be too reiterative here, I am disputing the idea that it is a primitive fundamental: I think it is not. Yes, I see your point if you are trying to determine whether the animal is a discrete experiencer in the sense that it perceives external stimuli (for example), but this gets ambiguous for me quite quickly. If that is what you mean, that you can demonstrate that they discretely experience in the sense that they discretely perceive (and whatever else you could demonstrate), then that use of the term "experiencer" would not be the same as the "experiencer" in "I am a discrete experiencer", unless you are stating that "experience" is synonymous with the processes of perception, thought, etc. If you are stating that "experience" is synonymous with the processes that feed it, then you have eliminated "you" from the picture and, therefore, the "I" in "I am a discrete experiencer" is simply "the body is a discrete experiencer", which I don't think would make any sense to either of us if we were to derive our knowledge from the body and not the "I" (again, I could be misunderstanding you here). In other words, although I understand that I am redefining terms, I do not think you can demonstrate that animals experience, only that they have processes similar to those that feed our experiences. What I am trying to say, and I'm probably not doing a very good job, is that you can't demonstrate that an animal experiences in the same sense in which you determined that you experience: the "I" (you) cannot determine that the animal, or anyone else for that matter, is an "I" such that both of my uses of "I" are univocal. Now this brings up a new issue of whether we could determine that other people, for example, have "I"s at all. I would say that we can, but it comes later on: applicable knowledge in your terminology. To be clear, you would have discrete experiences that demonstrate that there are other entities that have the processes required to experience, but any inferences after that are applicable knowledge claims. This is why it is ambiguous for me: when you say you can prove there are other "I"s like your "I", I think you are utilizing the term "I" in two different senses--the former is simply an entity that has processes similar to what feeds your experiences (which can be derived from discrete experiences), while the latter is experience itself (which cannot be extended via discrete experiences to any other entities but, rather, can be inferred by applicable knowledge to be the case).

    But for the single person without context, if they have defined "I" in this way, this is the only thing they could deduce in their application of that definition to reality.

    Hopefully I did an adequate job of presenting evidence that this is not true.

    I think, for the sake of making this shorter, I will leave your further comments for later discussion, because I think that you are right in saying that we need to discuss the more fundamental aspects of your OP first. I think that the hallucination dilemma is a discussion for after what I have stated here is hashed out.

    Thank you for a such an elaborate response and I look forward to hearing back from you!
    Bob
  • Philosophim
    Likewise, thank you for your response again Bob! And no, I do not find you condescending. I would much rather the point was overexplained than not enough. Feel free to always point out where I'm wrong, it's the only way to put the theory through its paces.

    If there is a disagreement with the foundation, let's focus on that first. The rest is irrelevant if that is wrong.

    First, let's focus on definitions. To clarify, a discrete experience is the ability to part and parcel what we "experience". A lens focuses light into a camera, but the lens does not discriminate or filter the light into parts. We do. Discrete experiences include observations, and our consciousness. Discrete experience is the "now", our memory, and any time you think. As you note, we could split up discrete experience into different categories. I could include consciousness, but consciousness is still a discrete experience.

    So what is distinctive knowledge? The awareness of any discrete experience. I discretely know when I sense. When I have a memory. What I define my own consciousness to be. Since I can know that I discretely experience, I know whatever it is that I discretely experience. That is distinctive knowledge.

    I read your essays as directly implying (by examples such as the sheep) that a specific instance of "I am a discrete experiencer" was "I am a discrete perceiver and thinker"--and I believe this to be false – Bob Ross

    No, the only thing I am claiming is, "I am a discrete experiencer". Perception is a discrete experience, as well as thinking, consciousness, and whatever other definitions and words you want to use to divide up the notion of what we can discretely experience. Being a discrete experiencer does not require consciousness, or even any notion of an "I". For beings like us, we can divide what we discretely experience into several definitions. I can create sub-conscious, meta-conscious, meta-meta-conscious, and conscious-unconscious. I can write books, and essays, and have debates about metaphysical meta-conscious-unconsciousness.

    Yes, these words are not real words within the context of society, but there is nothing to prevent a person from making up these words, and attributing them to some part of their "self" or a portion of what they discretely experience. The subdivisions are unimportant at a base claim of knowledge, as they are all discrete experiences, and all of them, if created by an individual, are distinctly known to that individual.

    In other words, one who does begin with a binary distinction can, by use of those very discrete experiences, determine that their original assertion was wrong and, in fact, it is ternary. – Bob Ross

    What I am saying is, someone can subdivide the notion of an "I" even further if they like. They can even change the entire definition of "I", and state it requires consciousness, thus excluding certain creatures. There's nothing against that. The definition of "I" I am using is based upon the fact that I can discretely experience. Me changing the definition of an "I" does not negate that underlying fact. What is ultimately important when one decides on a bit of distinctive knowledge, is to see if they can apply it to reality without contradiction.

    So, if I define "I" as simply a discrete experiencer, then I could apply this to reality and state that things which are deemed to discretely experience are "I"'s.

    If you define "I" as needing consciousness, then when you applied it to reality, anything that discretely experienced but did not have consciousness would not be an "I". Yours would add in the complication of needing to clearly define consciousness, then show that application in reality.

    Both of us are correct in our definitions, and both of us are correct in our application. I would discretely and applicably know an "I" in my context, while you would have both knowledges in your context. It is just like the sheep and goat example in part 3. Someone could define a "goat" to encompass both a sheep and a goat. Or they could create "sheep" and "goat" as being separate. Or they could go even further, and state that a goat of 20 years of age is now, "The goat". It doesn't matter what we create for our definitions for individual use. We distinctly know them all. The question is whether we can apply them to reality without contradiction, so then we can claim we can applicably know them as well.

    The only way to prove that someone's definitions are not going to be useful on the applicable level, is to demonstrate that two definitions they hold contradict each other. For example, using the context of regular English, if I said, "Up is down" in a literal sense, we could know that describes an application of reality that is a contradiction. But I could be an illogical being that uses two contradictory definitions in my head. That is what I distinctively know.

    The question of "correctness" comes in when two contexts encounter one another.
    The keys when discussing the two parts of knowledge come down to whether the distinctive knowledge proposed can be applied to reality without contradiction, and whether the distinctive knowledge that can be applied, is specific and useful enough for our own desired purposes.

    In your case, you are dissatisfied with my definition of an "I", because you want some extra sub-distinctions for your own personal view of what "I" is. But "I" is merely a placeholder for me at this time for the most basic description of "that which discretely experiences". Why am I so basic here? Because it avoids leading the discussion where it does not need to go at this time. Further, it serves to avoid the issue of solipsism. Finally, it keeps the discussion of knowledge from focusing squarely on human beings, or particular types of human beings. Notice that your addition of consciousness adds a whole extra layer to the discussion: a new word that needs to be defined, and applied to reality without contradiction. But I am not trying to get at the specifics of what we can derive from our ability to discretely experience as human beings. I am just trying to get at the most fundamental aspects of knowledge as a tool.

    We could of course conflict further on the notion of what we discretely experience, going round and round as to what consciousness entails, and what all sorts of sub-assessments entail. But it is a fruitless discussion for the purposes of what I'm using the definitions for. I am not wrong, and you are not wrong, in our own contexts. We must come to an agreed-upon context of distinctive knowledge, and the way to do that for the greatest number of people is to get the concepts as basic as possible. The symbol of "I" is unimportant, as long as you understand the concept underneath the "I" that I made from my own personal context. You are entering into "I" as the context developed by myself, while you can hold the "I" as the context on your end. The final "I" is the agreement of compromise between the context of you and me together. We can hold all three in our head without contradiction. It is not the word or symbol that matters. It is, again, the underlying concept.

    So with this, before you build upon it, before you subdivide it, I ask you to think about the exact definitions of discrete experience, distinctive knowledge, and applicable knowledge. Have I contradicted myself? Have I applied these basic definitions to reality without contradiction? If I have done so, then I have shown a system of knowledge that I can use in my personal context. After that, we can address the notion of cross context further. Thanks again!
  • Bob Ross
    Hello @Philosophim,
    Thank you for your clarifications of your terminology: I understand it better now! I see now where our disagreements lie. For now, I am going to grant your use of "discrete experiencer" in a broader sense as you put it. However, this still does not remove my issue with your papers and, quite frankly, it is simply my lack of ability to communicate it clearly that is creating this confusion. So, therefore, I am going to take a different approach: I re-read your essays 1-3 (I left 4 out because it is something that will be addressed after we hash this out) and I have gathered a few quotes from each of them, with questions that I would like to get your answer to (and some are just elaborative). Hopefully this will elaborate a bit on my issue. Otherwise, feel free to correct me.

    Essay 1:

    I noted discrete experiences in regards to the senses, but what about discrete experience absent those senses?  Closing off my senses such as shutting my eyes reveals I produce discrete experiences I will call “thoughts.”  If I “think” on a thought that would contradict the discrete experience of “thoughts” I again run into a contradiction.  As such, I can deductively believe I have thoughts absent the senses as well.

    I think what you meant (and correct me if I am wrong here) is the five senses, not all senses. If you had no senses, you wouldn't have thoughts because you would not be aware of them. There are many senses in the body, and the list is always evolving, so I would argue that the awareness of thoughts is a sense (but is most definitely different from the five senses). I think your argument here would be that you are talking specifically about senses that pertain to the body's contact with external stimuli, and that is fine. But thoughts are not absent of all senses. I think this arises due to your essays' lack of addressing the issue of the ternary distinction. This will hopefully make more sense as I move on to the next quotes.

    Essay 2:

    I will label the awareness of discrete experiences as “distinctive knowledge”.  To clarify, distinctive knowledge is simply the awareness of one’s discrete experiences.

    I agree. You're advocating here (as I understand it) that in the absence of awareness, you have no distinctive knowledge and, thereby, no applicable knowledge and, therefore, no knowledge at all. You are directly implying that even if you can determine another animal to be a discrete experiencer, you still have no reason to think that they have any distinctive knowledge because that doesn’t directly prove that they are aware of their discrete experiences. Am I correct in this? Therefore, the “I” for you as a discrete experiencer, at this point, has one extra property that cannot be demonstrated (yet) to be in another animal's “I”: awareness of those discrete experiences. Therefore, both uses of the term “I” are not being used completely synonymously (univocally).

    Essay 3:

    I have written words down, and if another being, which would be you, is reading the words right now then you too are an “I”.

    As of now, the “I” defined for you has another property that you haven’t proven to exist in the other “I”s: awareness of discrete experiences—distinctive knowledge as you defined it. Just like how I can park my car with a complete lack of awareness of how I got there, I could also be reading your papers without any awareness of it. So, with that in mind, I would ask: are you asserting that this is not the case? That you have sufficiently proven that, not only are other animals an “I” in the sense that they are discrete experiencers, they also have distinctive knowledge? To say "I" and your "I" both exist, in my head, is to directly imply that I am using "I" univocally: I don't think your essays do that. Your essays, thereby, seem to assume that you have used the term "I" univocally and immediately start comparing one "I"'s knowledge to another "I"'s knowledge: but, again, you haven't demonstrated that anyone besides yourself has distinctive knowledge, only discrete experiences.

    If I come across you reading these words and understanding these words, and you are not correlative with my will, then you are an “I” separate from myself.

    Only in the sense that “I” is used to denote a discrete experiencer and not distinctive knowledge. Therefore, your essays don't actually determine anyone else to know anything, only that they discretely experience. Am I incorrect here?

    If other people exist as other “I’s” like myself, then they too can have deductive beliefs.

    I agree. But at this point in your essay we have no reason to believe that they are aware of them, which would be required for “I” to be used univocally. In saying "other "I's" like myself", you are implying (to me) that you think that you have proven other "I's" to have distinctive knowledge (or knowledge in any sense), but I don't think you have.

    A person's genetics or past experiences may incline them to discretely experience properties different from others when experiencing the same stimulus.

    This is why, I would argue, not everyone who reads your paper is going to fundamentally agree with you with respect to your sheep example. They will attempt to gather knowledge starting from the subject, just like you, but they may not view it as a binary distinction. If it is initialized with a ternary distinction, then, as you hinted earlier, solipsism becomes a problem much quicker and, therefore, your ease of derivation (in terms of a binary distinction) will not be obtained by them. For example, for a person that starts their subjective endeavor with a ternary distinction, it is entirely possible that they must address the issue of “where are these processes coming from?” and “am I justified in assuming they are true?” way before they can get to any kind of induction hierarchy that resides within the production of those processes. So I would like to ask: do you not think that the barrier between the “I” and the processes feeding it (and thereby the previous questions) is more fundamental, and therefore must be addressed first, before the subject can continue their derivation with respect to your essays? My problem is that you skip ahead straight to the sheep example, which is an analysis of the products of the processes, when you haven’t addressed the more fundamental problem of whether those very processes are accurate or not (you just seem to imply that it should be taken as an axiom of some sort). Or if we even can know if they are accurate or not. Or if it really matters if they are or not. Don’t get me wrong, I have no problem with your derivation (for the most part) with respect to your analysis of the products of those processes, but your writings imply axioms that are not properly addressed.

    To sum it up, your essays, in an effort to conclude induction hierarchies, completely skip over the justification for why one should even start analyzing the products of those processes in a serious manner, and go straight to assuming they are true, in the way that you did. If we are to derive a basis for knowledge, then we must assume nothing, starting from the subject, which includes doubting the assumption that we have any reasonable grounds to assume the "I" should utilize the discrete experiences that get thrown at it. Now you could say, and I think this may be what your essays imply, that, look, we have these processes that are throwing stuff at the "I", of which it is aware, such as perception and thought, and here's what we can do with it. If that's what your essays are trying to get at, then that is fine. But that doesn't start with the most basic derivation of the subject: you are skipping addressing the problem of whether knowledge can even be based off of these products of the processes. In other words, if your essays are simply dealing with what we can "know" in the sense that all we care about is having knowledge pertaining to the products of these processes and, therefore, it doesn't matter if those processes are utterly false, then I think we are in agreement. But I would add that you aren't addressing this at all in the essays, and that's why I can't personally use it, as it is now, as a basis for knowledge. Hopefully that makes a bit more sense. If not, please let me know!
    Bob
  • Philosophim
    Great, I believe we've iterated through this and are closer to understanding each other.
    I think what you meant (and correct me if I am wrong here) is the five senses, not all senses. If you had no senses, you wouldn't have thoughts because you would not be aware of them. – Bob Ross

    I had to laugh at this one, as I've never had senses defined in such a way as sensing thoughts. We have two definitions here, so let me point out the definition the paper was trying to convey. The intention of the senses in this case is any outside input into the body. Some call them the five senses, but I wasn't necessarily stating it had to be five. Anything outside of the body is something we sense. The thing which takes the senses and interprets it into concepts, or discrete experiences, is the discrete experiencer. For the purposes here, I have noted the ability to discretely experience is one thing you can know about your "self".

    Is it incorrect for me to say I have discrete experiences? I believe it is impossible not to. If I claim, "No," I must have been able to discretely experience the concept of words. As a fundamental, I believe that's as solid as can be.

    But thoughts are not absent of all senses. – Bob Ross

    Could you go into detail as to what you mean? I'm not stating you are wrong. Depending on how you define senses, you could be right. But I can't see how that counters the subdivision I've made either. If I ignore my senses, meaning inputs entering into the body, then what is left is "thoughts". Now again, we can subdivide it. Detail it if you want. Say, "These types of thoughts are more like senses". I'm fine with that. It does not counter the fundamental knowledge that I am a discrete experiencer. If I know that I discretely experience, then what I discretely experience, is what I know.

    You are directly implying that even if you can determine another animal to be a discrete experiencer, you still have no reason to think that they have any distinctive knowledge because that doesn’t directly prove that they are aware of their discrete experiences. Am I correct in this? – Bob Ross

    No. I hesitate to go into animals, because it's just a side issue I threw in for an example, with the assumption that the core premises were understood. Arguing over animal knowledge is missing the point. If you accept the premises of the argument, then we can ask how we could apply these definitions to animal knowledge. If you don't accept the premises of the argument, then applying it to animals is a step too far. This is not intended to dodge a point you've made. This is intended to point out we can't go out this far without understanding the fundamentals. My apologies for jumping out here too soon! Instead, I'm going to jump to other people, which is in the paper.

    As of now, the “I” defined for you has another property that you haven’t proven to exist in the other “I”s: awareness of discrete experiences—distinctive knowledge as you defined it. Just like how I can park my car with a complete lack of awareness of how I got there, I could also be reading your papers without any awareness of it. – Bob Ross

    You would not be able to read without the ability to discretely experience. This was implicit but perhaps should be made explicit. If you can read the letters on the page, you can discretely experience. If you can then communicate back to me with those letters in kind, then you understand that they are a form of language. If you can do this, you can read my paper, and you can enter the same context as myself if you so choose. You can realize you are a discrete experiencer, and apply the test to reality.

    You cannot do that without being a discrete experiencer like me. I would have to come up with a new method of knowing if someone who could not read or communicate was an "I" as defined. But again, I am not concerned with branching out into detailing how this fundamental process of knowledge could be used to show that a person who cannot communicate is an "I", but with establishing the fundamental process of knowledge first, which we can then use to have that discussion.

    To that end, it doesn't matter if you're "conscious". It doesn't matter if you're spaced out, in a weird mental state, etc. You're a discrete experiencer like me. You run into the very same problem as I do if you deny that you are a discrete experiencer. So the rest follows that what you discretely experience is what you distinctively know. And for you to conclude that, you must understand deductive beliefs, and be capable of doing them.

    Do you deny that you deductively think? That you can discretely experience? Of course not. So that is good enough for the purposes that I need to continue the paper into resolving how two discrete experiencers can come to discrete and applicable knowledge between them. All I need is one other discrete experiencer, and the theory can continue.

    A person's genetics or past experiences may incline them to discretely experience properties different from others when experiencing the same stimulus.
    This is why, I would argue, not everyone who reads your paper is going to fundamentally agree with you with respect to your sheep example.
    Bob Ross

    They can fundamentally disagree with me by distinctive knowledge. They cannot fundamentally disagree with me by application, unless they've shown my application was not deduced. But in doing so, they agree with the process to obtain knowledge that I've set up. The sheep examples are all intended to show we can invent whatever distinctive knowledge we want, but the only way it has use in the world, is to attempt to apply it without contradiction.

    I understand you're still concerned with the specifics of the distinctive knowledge claims I've made, such as "What is an 'I'?", when the real part to question is the process itself. What I'm trying to communicate, is that there is no third party arbiter out there deciding what "I" should mean, or what any word should mean. We invent the terms and words that we use. The question is whether we can create a process out of this that is a useful tool to help us understand and make reasonable decisions about the world.

    Is it incorrect that an individual can invent any words or internal knowledge that they use to apply to the world? Is it incorrect, that if I apply my distinctive knowledge to the world and the world does not contradict my application, that I can call that another form of knowledge, applicable knowledge? If you enter into the context of the words I have used, does the logic follow?

    If it is initialized with a ternary distinction, then, as you hinted earlier, solipsism becomes a problem much quicker and, therefore, your ease of derivation (in terms of a binary distinction) will not be obtained by them. For example, for a person that starts their subjective endeavor with a ternary distinction, it is entirely possible that they must address the issue of “where are these processes coming from?” – Bob Ross

    It is not a problem for me at all if someone introduces a ternary distinction. The same process applies. They will create their distinctive knowledge. Then, they must apply that to reality without contradiction. If they cannot apply it to reality without contradiction, then they have invented terms that are not able to be applicably known. Distinctive knowledge that implies solipsism tends to fail when applied to the world. In my case, I have terms that can be applicably known. Therefore I have a tool of reasoning that allows me to use my distinctive knowledge to step out in the world and handle it.

    My problem is that you skip ahead straight to the sheep example, which is an analysis of the products of the processes, when you haven’t addressed the more fundamental problem of whether those very processes are accurate or not – Bob Ross

    I don't doubt this is a problem for a reader, so thank you for pointing this out. Your feedback tells me I need to explicitly point out how, if you are reading this, you are, by the definitions I stated, an "I" as well. The sheep part itself I use to give examples of how distinctive knowledge can change, and that's ok. The only thing that matters is whether that knowledge can be applied to reality without contradiction. So I think I can retain that, I just need to add the detail I mentioned before.

    Now you could say, and I think this may be what your essays imply, that, look, we have these processes that are throwing stuff at the "I", of which it is aware, such as perception and thought, and here's what we can do with it. If that's what your essays are trying to get at, then that is fine. – Bob Ross

    Yes, this is a more accurate assessment of what I am doing. I am inventing knowledge as a tool that can be used. With this, I can say I distinctly know something, and I can applicably know something. I have a process that is proven, and the process itself can be applied to its own formulation. You can go back with the conclusions the paper makes, and apply it from the beginning. I use the process to create the process, and it does not require anything outside of the process as a basic foundation.

    Thank you again for your thoughts and critiques! I hope this cleared up what the paper is trying to convey in the first two papers. If these fundamentals are understood, and can withstand your critique, then we can address context, which I feel might need some tightening up. I look forward to your next thoughts!
  • Bob Ross
    Hello @Philosophim,

    I had to laugh at this one, as I've never had senses defined in such a way as sensing thoughts. We have two definitions here, so let me point out the definition the paper was trying to convey. The intention of the senses in this case is any outside input into the body. Some call them the five senses, but I wasn't necessarily stating it had to be five. Anything outside of the body is something we sense. The thing which takes the senses and interprets it into concepts, or discrete experiences, is the discrete experiencer. For the purposes here, I have noted the ability to discretely experience is one thing you can know about your "self".

    Although I understand what you are trying to say, this is factually false (even if we remove my claim that thoughts are sensed). Senses are not restricted to external stimuli (like the five senses, for example): there are quite a lot of senses (some up for debate in scientific circles). Some that would be pertinent to this discussion (and that aren't up for debate) would be those that are internally based: equilibrioception (the sense of balance, which is not a sense of external stimuli but, rather, of internal ear fluid), nociception (the sensation of internal pain), proprioception (the sense of one's body parts without external input), and chemoreception (the sense of hunger, thirst, nausea, etc.). I could go on, but I think you probably understand what I mean now: you have plenty of sensations that are deployed and received exclusively within your body. With regards to my claim that "thoughts are not absent of all senses", I do believe this to be true, but I think this isn't relevant to this discussion yet (if at all), so disregard that comment for now.

    Could you go into detail as to what you mean? I'm not stating you are wrong. Depending on how you define senses, you could be right. But I can't see how that counters the subdivision I've made either. If I ignore my senses, meaning inputs entering into the body, then what is left is "thoughts".

    I think this would sidetrack our conversation even more, so I am going to leave this for a later date. Long story short, I think that just like how you can feel your heart pumping (which doesn't require external stimuli), you can also sense thoughts with seemingly conclusory thoughts (which are emotionally based convincements). Again, this isn't as pertinent to our conversation as I initially thought, and I understand that it doesn't directly negate your idea that, apart from external stimuli, there are "thoughts". However, to say they are apart from all senses is, I think, wrong.

    No. I hesitate to go into animals, because it's just a side issue I threw in for an example, with the assumption that the core premises were understood. Arguing over animal knowledge is missing the point. If you accept the premises of the argument, then we can ask how we could apply these definitions to animal knowledge. If you don't accept the premises of the argument, then applying it to animals is a step too far. This is not intended to dodge a point you've made. This is intended to point out we can't go out this far without understanding the fundamentals. My apologies for jumping out here too soon! Instead, I'm going to jump to other people, which is in the paper.

    I understand your point that discussing animals is a step too far, but my critique also applies to humans.

    You would not be able to read without the ability to discretely experience. This was implicit but perhaps should be made explicit. If you can read the letters on the page, you can discretely experience. If you can then communicate back to me with those letters in kind, then you understand that they are a form of language. If you can do this, you can read my paper, and you can enter the same context as myself if you so choose. You can realize you are a discrete experiencer, and apply the test to reality.

    I completely agree. However, I am not disputing whether you can determine me to be a discrete experiencer: I am disputing whether you can reasonably claim (within what is written in your essays) that you know that I have any sort of knowledge. By your essays' definition, distinctive knowledge is the awareness of discrete experiences: this is a separate claim from whether I have discrete experiences. In other words, you are right in stating that you have sufficient justification to say I am a discrete experiencer, but then you have to take it a step further and prove that I am aware of my discrete experiences. In your essays, both forms of knowledge that you define (distinctive and applicable) are separate claims pertaining to the subject beyond the claim that they are a discrete experiencer.

    Your essays do not define "discrete experiences" as a form of knowledge (correct me if I am wrong here), but they define two forms of knowledge: the awareness of discrete experiences (distinctive knowledge) and, after there is distinctive knowledge, the application of beliefs (applicable knowledge). Since applicable knowledge is contingent on distinctive knowledge, and distinctive knowledge is, in turn, contingent on awareness, proving that I, as the reader, have discrete experiences does not in any way prove that I have any form of knowledge as defined in your essays.

    You cannot do that without being a discrete experiencer like me
    To that end, it doesn't matter if you're "conscious". It doesn't matter if you're spaced out, in a weird mental state, etc. You're a discrete experiencer like me.

    True, but I could be a discrete experiencer without having any knowledge as defined by your essays (namely, without any distinctive or applicable knowledge).

    So the rest follows that what you discretely experience is what you distinctively know.

    This is exactly what I have been trying to demonstrate: the first half of the above quote does not imply in any way the second half (for other "I"s). You define distinctive knowledge with an explicit contingency on awareness, not simply on the fact that you discretely experience. I am arguing that you can discretely experience without being aware of it. I think this is basically my biggest issue with your essays summed into one sentence (although I don't want to oversimplify your argument): you wrongly assume that proving something is a "discrete experiencer" thereby proves that it has distinctive and applicable knowledge, but, most importantly, you haven't demonstrated that that something is aware of any of it and, consequently, you haven't proven it has either form of knowledge.

    Do you deny that you deductively think?

    Yes. I inductively think my way into a deductive belief. I have a string of thoughts (inductively witnessed) that form a seemingly conclusory thought (which can most definitely be a deductive belief). My thoughts do not start, or initialize so to speak, with deduction.

    So that is good enough for the purposes that I need to continue the paper into resolving how two discrete experiencers can come to discrete and applicable knowledge between them

    Again, I think that you are wrongly assuming that proving that an individual discretely experiences directly implies that they have distinctive or applicable knowledge: you defined distinctive knowledge specifically to be contingent on awareness, which I don't think has anything to do with what you defined as "discrete experiences".

    They can fundamentally disagree with me by distinctive knowledge. They cannot fundamentally disagree with me by application, unless they've shown my application was not deduced

    I agree that, in the event that they begin partaking in their analysis of the products of those processes, they will apply them the same way as you. However, it doesn't begin with a deduction: you have to induce your way to a deduction that you can then apply. If that is what you mean when you say that it begins with deduction (that it is an induced deductive belief), then I agree with you on this.

    The sheep examples are all intended to show we can invent whatever distinctive knowledge we want, but the only way it has use in the world, is to attempt to apply it without contradiction.

    I agree with you here, but I don't think you are starting your writing endeavor (in terms of the essays) at the basis: you are starting at mile 30 of a 500 mile race. Once we agree up to 30, then I (generally speaking) agree with you up to 500 (or maybe 450 (: ) and I think you do a great job at assessing it from mile 30 all the way to 500. However, I think it is important to first discuss the first 30 miles, otherwise we are building our epistemology on axioms.

    What I'm trying to communicate, is that there is no third party arbiter out there deciding what "I" should mean, or what any word should mean. We invent the terms and words that we use. The question is whether we can create a process out of this that is a useful tool to help us understand and make reasonable decisions about the world.

    I understand and this is a fair statement. However, my biggest quarrel is that I believe you to be starting at mile 30 when you should be starting at mile 0. If you want to base your epistemology on something that doesn't address all the fundamentals, but more generally (with the help of axioms) addresses the issue in a way most people will understand, then that is fine. I personally don't agree that we should start at mile 30 and then loop back around later to discuss miles 0-29. It must start at 0 for me.

    Is it incorrect that an individual can invent any words or internal knowledge that they use to apply to the world? Is it incorrect, that if I apply my distinctive knowledge to the world and the world does not contradict my application, that I can call that another form of knowledge, applicable knowledge?

    This is absolutely correct. But I am not disputing this.

    If you enter into the context of the words I have used, does the logic follow?

    No it does not. Proving a being to be a discrete experiencer doesn't prove awareness of such. Therefore, by your definitions of knowledge, I don't understand how your essays prove others to have any forms of knowledge (distinctive or applicable). If you start at mile 30, and I start at mile 30, then I think that your logic (without commenting on the induction hierarchy yet, so just essays 1-3) is sound. But, again, that's assuming a lot to get to mile 30.

    It is not a problem for me at all if someone introduces a ternary distinction. The same process applies. They will create their distinctive knowledge. Then, they must apply that to reality without contradiction. If they cannot apply it to reality without contradiction, then they have invented terms that are not able to be applicably known.

    I agree. But, again, this is starting at mile 30, not mile 0. You skip over the deeper questions here and generally start from the analysis of the products of the processes: this is not the base. If you don't want to start from the base, then that is totally fine (I just disagree). If you think that you are starting from the base, then I would be interested to know whether you think that the binary distinction is the base.

    I don't doubt this is a problem for a reader, so thank you for pointing this out. Your feedback tells me I need to explicitly point out how if you are reading this, you are by the definitions I stated, an "I" as well.

    Again, this only proves that you know that I, as the reader, am a discrete experiencer. Now you have to prove that I have distinctive and, thereafter, applicable knowledge. Distinctive knowledge was defined as the awareness of discrete experiences, not merely the discrete experiences themselves. Therefore, I don't think your "I" is being extended univocally to other "Is" in your essays: your "I" is a discrete experiencer that is aware of it and, thereby, has both accounts of knowledge, whereas the other "Is" are merely proven to be discrete experiencers (with no elaboration or proof on whether they are aware of such). I get that, for me, I know I am aware, but when applying your logic to other people, it does not hold that I know that they are aware due to their discrete experiences. Therefore, when your essays start discussing context, they wrongly assume that the preceding contents of the essays covered a proof of some sort that other "Is" are aware of their discrete experiences.

    Yes, this is a more accurate assessment of what I am doing. I am inventing knowledge as a tool that can be used.

    My critique here (that I am trying to portray) is that this tool is starting out at mile 30, not 0.

    I hope I wasn't too repetitive, but I think this is a vital problem with your essays. But, ironically, it isn't a problem if you wish to start at mile 30, and if that is the case then I will simply grant it (for the sake of conversation) and continue the conversation to whatever lies after it. I personally don't think it is a good basis for epistemology because it isn't a true basis: it utilizes axioms.
    Bob
  • Philosophim
    2.6k
    By your essays' definition, distinctive knowledge is the awareness of discrete experiencesBob Ross

    Ah, I see now. This is incorrect. A person's awareness of the vocabulary has nothing to do with it. Their awareness that they discretely experience has nothing to do with it. Your discrete experiences ARE your distinctive knowledge. Whether or not you have a discrete experience meta-analyzing a discrete experience isn't important.

    The point was I wondered whether I could prove myself wrong that I discretely experienced. I could not. Then I asked if I could prove that the discrete experiences I had did not exist. I found that I could not. Of course they exist. I'm having them. Therefore discrete experiences are knowledge of the individual. But a particular type of knowledge. It is when one tries to apply that discrete experience as representing external reality that one needs to evaluate whether that is an applicable belief, or an applicable piece of knowledge.

    That is why if you can read, I know you can discretely experience that language. Then I introduce the terms to you. Then I show you a process by which you can attempt to apply it to reality using deduction, apart from a belief. Perhaps the label of distinctive knowledge is confusing and unnecessary. All I wanted to show is that any discrete experience is something you know, whether you realize it or not. That is a type of knowledge within your personal context. This was to contrast with the application of that personal knowledge as a belief in its application to reality.

    If I removed distinctive knowledge from the terminology, and just used "discrete experiences when not applying to reality" would that make more sense? Do you think there is a better word or terminology? And does that clear up what is going on now? I agree with you by the way, if I tried to assert that distinctive knowledge required a person to be aware of their discrete experiences, I would be introducing a meta analysis on discrete experiences that could never be proven. I am not doing that.
  • Bob Ross
    1.7k
    @Philosophim
    I think that we are in agreement if you are removing the idea of awareness from distinctive knowledge. However, I am still slightly confused, as here is your definition in your essay:

    I will label the awareness of discrete experiences as “distinctive knowledge”. To clarify, distinctive knowledge is simply the awareness of one’s discrete experiences.

    This explicitly defines distinctive knowledge as the awareness of discrete experiences. But now you seem to be in agreement with me that it can't have anything to do (within the context of your essays) with awareness. I would then propose you change the definition because, as of now, it specifically differentiates discrete experiences from distinctive knowledge solely based off of the term "awareness". I think that your essays are addressing what can be known based off of the analysis of the products of the processes and in relation to other discrete experiencers (where their awareness of such is irrelevant to the subject at hand, only that they also discretely experience). Is that fair to say? If so, then am I making any sense (hitherto) about why this is starting at mile 30? Maybe I am not explaining it well enough (it is entirely possible as I am not very good at explaining things). Are you ok with your essays starting their endeavor at mile 30, as opposed to mile 0? Or do you think it is starting at 0?
    Bob
  • Bob Ross
    1.7k
    @Philosophim

    Also, I would then be interested in what your refurbished definition of distinctive knowledge would be: is it simply discrete experiences and memories? If so, then I think this completely shifts the claims your essays are making and, subsequently, my critiques. But I won't get into that until after you respond (if applicable).
    Bob
  • Philosophim
    2.6k
    I will label the awareness of discrete experiences as “distinctive knowledge”. To clarify, distinctive knowledge is simply the awareness of one’s discrete experiences.

    This explicitly defines distinctive knowledge as the awareness of discrete experiences. But now you seem to be in agreement with me that it can't have anything to do (within the context of your essays) with awareness.
    Bob Ross

    No, you are quite right Bob. I wrote this decades ago when I was much younger and not as clear with my words. I believe you are one of the few who has read this seriously. Back then, I had a greater tendency to use words more from my own context and personal meaning than what would be proper and precise English. This is a mistake in my writing.

    Yes, distinctive knowledge is the discrete experiences you have. Memory as well, is a discrete experience. If this is understood, then I think we can continue.
  • Bob Ross
    1.7k
    @Philosophim

    I am glad that we have reached an agreement! I completely understand that views change over time and we don't always refurbish our writings to reflect that: completely understandable. Now I think we can move on. Although this may not seem like much progression, in light of our definition issues, I would like you to define "experience" for me. This definition greatly determines what claims your argument is making. For example, if the processes that feed the "I" and the "I" itself are considered integrated and, therefore, synonymous, then I think your "discrete experiencer" argument is directly implying that one can have knowledge without being aware of it (and that the processes and the "I" that witnesses those processes are the same thing). On the contrary, if the processes that feed the "I" and the "I" itself, to any sort of degree, no matter how minute, are distinguished, then I think you are acknowledging that, no matter to what degree, awareness is an aspect of knowledge. If neither of those two best describes your definition of "experience", then I fear that you may be using the term in an ambiguous way that integrates the processes (i.e. perception, thought, etc.) with the "I" without necessarily claiming them to be synonymous (which would require further clarification, as I don't think it makes sense without such). If nothing I have explained so far applies to your definition of "experience", then that is exactly why I would like you to define it in your own words.
    Bob
  • Philosophim
    2.6k

    Certainly Bob!

    Experience is your sum total of existence. At first, this is undefined. It precedes definition. It is that which definitions are made for and from. A discrete experiencer has the ability to create some type of identity, to formulate a notion that "this" is separate from "that" over there within this undefined flood.

    It is irrelevant if a being that discretely experiences realizes they are doing this or not. They will do so regardless of what anyone says or believes. In questioning the idea of being able to discretely experience, I wondered: are the discrete experiences we make "correct"? And by "correct" it seems I mean, "Is an ability to discretely experience contradicted by reality?" No, because the discrete experience is the close examination of "experience". At a primitive level it is pain or pleasure. The beating of something in your neck. Hunger, satiation. It is not contradicted by existence, because it is the existence of the being itself. As such, what we discretely experience is not a belief. It is "correct".

    If I discretely experience that I feel pain, I feel pain. It's undeniable by anything in existence, because it is existence itself. If I remember something from years past, that memory exists. If I choose to define an existence as something, I choose to do that. It is undeniable that I have chosen that. Therefore discrete experience is "known" by a discrete experiencer by the fact that it is not contradicted by reality.

    Again, a discrete experiencer does not have to realize that their act of discretely experiencing is discrete experiencing. Discrete experience is not really a belief, or really knowledge in the classical sense. When I say distinctive knowledge, it is the set of discrete experiences a thing has. A discrete experiencer has discrete experiences. But if a bit of distinctive knowledge is used in one extra step, to assume that what one discretely experiences can be used to accurately represent something more than the discrete experience itself, then we have a situation where it is a belief, or knowledge. When one has applied their distinctive knowledge, adjusting it so that it logically applies to reality without contradiction, I call that applicable knowledge.

    That's basically the start, and I hope explains experience and discrete experience with greater clarity!
  • Bob Ross
    1.7k
    Hello @Philosophim,

    Thank you for the clarification! I see now that we have pinpointed our disagreement, and I will now attempt to describe it as accurately as I can. I see now that your definition of "experience" is something that I disagree with on many different accounts; hopefully I can explain adequately hereafter.

    Firstly, I apologize: I should have defined the term "awareness" much earlier than this, but your last post seems to be implying something entirely different than what I was meaning to say by "awareness". I am talking about an "awareness" completely separate from the idea of whether I am aware of (recognize) my own awareness (sorry for the word salad here). For example, when you say:

    It is irrelevant if a being that discretely experiences realizes they are doing this or not. They will do so regardless of what anyone says or believes.

    You are 100% correct. I do not need to recognize that I am differentiating the letters on my keyboard from the keyboard itself: the mere differentiation is what counts. But this, I would argue, is a recognition of your "awareness" (aka awareness of one's awareness), not awareness itself. So instead, I would say that I don't need to be aware (or recognize) that I am aware of the differentiation of the letters on my keyboard from the keyboard itself: all that must occur is the fundamental recognition (awareness) that there even is differentiation in the first place. To elaborate further, when you say:

    A discrete experiencer has the ability to create some type of identity, to formulate a notion that "this" is separate from "that" over there within this undefined flood.

    I think you are wrong: "I" am not differentiating (separating "this" from "that"); something is differentiating from an undefined flood, and "I" recognize the already differentiated objects (this is "awareness" as I mean it). To make it less confusing, I will distinguish awareness of awareness (i.e. defining terms or generally realizing that I am aware) from mere awareness (the fundamental aspect of existence) by defining the former as "sophisticated awareness" and the latter as "primitive awareness". In light of this, I think that when you say:

    If I discretely experience that I feel pain, I feel pain. It's undeniable by anything in existence, because it is existence itself...Again, a discrete experiencer does not have to realize that their act of discretely experiencing is discrete experiencing. Discrete experience is not really a belief, or really knowledge in the classical sense.

    I think you are arguing that one doesn't have to be aware (recognize) that they are aware of the products of these processes ("sophisticated awareness") to be able to discretely experience, which would be directly synonymous with what I think you would claim to be our realization of our discrete experiences. This is 100% true, but this doesn't mean that we don't, first and foremost, require awareness ("primitive awareness") of those processes. The problem is that it is too complicated to come up with a great example of this, for if I ask you to imagine that "you" didn't see the keys on your keyboard as separate from one another, then you would say that, in the absence of that differentiation, "your" "discrete experiences" would lack that specific separation. But I am trying to go a step deeper than that: the differentiation (whether the key is separated from the keyboard or it is one unified blob) is not "you", because that process of differentiating is just as foreign, at least initially, as the inner workings of your hands. "You" initially have nothing but this differentiation (in terms of perception) to "play with", so to speak, as it is a completely foreign process to "you". What isn't a foreign process to "you" is the "thing" that is "primitively aware" of the distinction of "this" from "that" (aka "you") but, more importantly, "you" didn't differentiate "this" from "that": it is just there.

    To make it clearer, when you describe experience in this manner:

    Experience is your sum total of existence.
    At a primitive level it is pain or pleasure. The beating of something in your neck. Hunger, satiation. It is not contradicted by existence, because it is the existence of the being itself.

    I think you are wrong in a sense. Your second quote here, in my opinion, is referring directly to the products of the processes, which cannot be "experienced" if one is not "primitively aware" of them. I'm fine with saying that "experience" initially precedes definition (or potentially that it even always precedes definition), but I think the fundamental aspect of existence is "primitive awareness". If the beating of something in your neck, which is initially just as foreign to you as your internal organs, wasn't something that you were "primitively aware" of, then it would slip your grasp (metaphorically speaking). With respect to the first quote here, I don't think that my "primitive awareness", although it is the fundamental aspect of existence, is the sum total of all existence: the representations (the products of the processes), the processes themselves, and the "primitive awareness" of them are naturally tri-dependent. However, and this is why I think "primitive awareness" is the fundamental aspect, it is not an equal tri-dependency: for if the "primitive awareness" is removed, then the processes live on, but "I" am thereby removed. Furthermore, the processes themselves are never initially known at all (which I agree with you on), but only their products, and, naturally I would say, they are, in turn, useless if "I" am not "primitively aware" of them. So it is like a ternary distinction, but not an equal ternary distinction in terms of immediateness (or precedence) to the "I". For example, if something (the processes) wasn't differentiating the keys on my keyboard, then I would not, within my most fundamental existence, "experience" the keys on a keyboard. On the contrary, I see no reason to believe that, in the event that I was no longer "primitively aware" of the differentiation between the keys and the keyboard (of which I did not partake in and which is, initially, just as foreign to me as the feeling of pain), the processes wouldn't persist. What I am saying is "experience" is "primitive awareness", and it depends on the products of the processes to "experience" anything (and, upon further reflection and subsequently not initially, the processes themselves).

    Now, I totally understand that the subject, initially speaking, is not (and will not be) aware of my terminology, but just like how they don't have to be aware of your term "discrete experience" to discretely experience, so too I would argue they don't have to be aware of my term "primitive awareness" to "experience".

    In other words, when you say:
    Experience is your sum total of existence. At first, this is undefined. It precedes definition.

    I agree that discrete experiences, in terms of the products of the processes, are initially undefined, but the "primitive awareness" is not. You don't have to know what a 'K' means on your keyboard to know that you are aware that there is something in a 'K' shape being differentiated from another thing (which we would later call a keyboard). This is why the "primitive awareness" is more fundamental than the products of the processes: you don't have to make any sense of the perceptions themselves to immediately be "primitively aware" of those perceptions.

    And, lastly, I would like to point something out that doesn't pertain to the root of our discussion:
    In questioning the idea of being able to discretely experience, I wondered: are the discrete experiences we make "correct"? And by "correct" it seems I mean, "Is an ability to discretely experience contradicted by reality?" No, because the discrete experience is the close examination of "experience"

    I would not count this as a real proof: that discretely experiencing doesn't contradict reality and, therefore, it is "correct". I understand what you are saying, but fundamentally you are comparing the thing to itself. You are really asking: "Is an ability to discretely experience contradicted by discretely experiencing?" You are asking, in an effort to derive an orange, "does an orange contradict an orange?". You set criteria (that it can't contradict reality) and then define it in a way where it is reality, so basically you are asking "does reality contradict reality?". Don't get me wrong, I would agree that we should define "correct" as what aligns with experience, but it is an axiom and not a proof. It is entirely possible that experience is completely wrong due to the representations being completely wrong and, more importantly, you can't prove the most fundamental by comparing it to itself. I think my critique is what you are sort of trying to get at, but I don't see the use in asking the question when it is circular: it is taken up, at this point, as an axiom, but your line of reasoning here leads me to believe that you may be implying that you proved it to be the case.

    In light of all I have said so far, do you disagree with my assessment of "experience"?

    I look forward to hearing back from you,
    Bob
  • Philosophim
    2.6k
    Yes, I am enjoying the discussion of getting to the essence of the work. I much appreciate your desire to understand what the argument is trying to say, and I hope I am coming across as trying to understand the argument you are making as well.

    It is irrelevant if a being that discretely experiences realizes they are doing this or not. They will do so regardless of what anyone says or believes.

    You are 100% correct. I do not need to recognize that I am differentiating the letters on my keyboard from the keyboard itself: the mere differentiation is what counts. But this, I would argue, is a recognition of your "awareness" (aka awareness of one's awareness), not awareness itself. So instead, I would say that I don't need to be aware (or recognize) that I am aware of the differentiation of the letters on my keyboard from the keyboard itself: all that must occur is the fundamental recognition (awareness) that there even is differentiation in the first place.
    Bob Ross

    Good, I think we're thinking along the same lines now. That fundamental recognition matches the definition of discrete experiencing. Such discrete experiencing does not require words. We could say that words are a "higher" level of discrete experiencing. But I don't do that in the paper, because that differentiation is not important as a fundamental.

    Now can the theory be refined with this differentiation? It could. Someone could call that consciousness. Someone could say, "I" isn't the primitive part of me, "I" only requires that I have consciousness or higher level defining. The theory allows this without issue. But that refinement of I would be a different context of the "I" in the argument. The "conscious I" versus the "unconscious I" are one possible example.

    I think you are wrong: "I" am not differentiating (separating "this" from "that"); something is differentiating from an undefined flood, and "I" recognize the already differentiated objects (this is "awareness" as I mean it).Bob Ross

    This is a perfect example of your discrete experience versus mine. I am not wrong. My definition of "I" applies to reality without contradiction. Your definition of "I" is also 100% correct. Can it apply to reality without contradiction? Perhaps. But we are not having a disagreement about the application of the word; we are having a disagreement about the construction of the definition.

    My "I" contains both the fundamental, and the "higher" level discrete experiences we make that I believe you are pointing out. Whether its the fundamental awareness, or meta awareness (making a fundamental awareness into a word for example), they are both discrete experiences. A house cat and a tiger are both cats. For certain arguments, it is important to differentiate between the two. And it may be necessary as the theory grows, or someone creates a new theory based on these fundamentals. But for now, for the fundamentals, I see no reason by application, why there needs to be a greater distinction or redefinition of "the primitive I". The only reason I have the primitive "I", is to quickly get into the idea of context without contradiction, or needing to dive into some form of consciousness, which would likely be another paper.

    You are putting your own desired definition of "I" into the argument. Which is fine and perfectly normal. You might be thinking I am stating that my definition of "I" is the definition that is 100% correct, and we should all use it forevermore. I am not. I am saying "I" in this context of understanding knowledge as a process is all that we need. I am not saying we couldn't have "I" mean something different in a different context. In psychology, "I" will be different. For a five year old, "I" will be different. Each person can define "I" as they wish. If they can apply it to reality without contradiction, then they have a definition that is useful to them in their context.

    "I" here is simply a definition useful within the context of showing the fundamental process of knowledge as a tool between more than one "I", or discrete experiencer.

    I'm fine with saying that "experience" initially precedes definition (or potentially that it even always precedes definition), but I think the fundamental aspect of existence is "primitive awareness". If the beating of something in your neck, which is initially just as foreign to you as your internal organs, wasn't something that you were "primitively aware" of, then it would slip your grasp (metaphorically speaking).Bob Ross

    Agreed. If you don't discretely experience something, then it is part of the undefined existence. To reiterate, this applies to primitive awareness. I'm not sure we both have the same intention when using this new phrase, but for my part, it's merely the barest of discrete experiences. Think of it this way. My primitive discrete experience is seeing a picture and the feelings associated with it. Then I look closer, and see a sheep in the field. Then I look again and see there is another sheep crouching in the grass that I missed the first two times. While the crouching sheep was always in my vision, I did not discretely experience it. Or, as I think you are implying, have primitive awareness of it.

    For example, if something (the processes) wasn't differentiating the keys on my keyboard, then I would not, within my most fundamental existence, "experience" the keys on a keyboard.Bob Ross

    If you define "I" as consciousness, then you are correct within this context, and could applicably know that. But if I define "I" as a discrete experiencer, you are incorrect in your application. If I am able to pick out and type a "k" on the keyboard, that cannot be done without a discrete experience. Just because you haven't registered it beyond haptics, or have to put a lot of mental effort into it, doesn't mean it isn't a discrete experience.

    Do you see the importance of definitions within contexts? We have two different contexts of "I", and they are both correct within their contexts. The question is, which one do we use then? But if we are at this point, then we are at the level of understanding the fundamentals of the argument to address that point.

    First, I asked you to understand the context of "I" that I've introduced here, which I believe you have done more than admirably. I hopefully have returned the idea that I understand your context of "I" as well. At this point, we attempt to apply both to reality without contradiction. We both succeed. Why I'm asking you to use my "I" is because it helps us get to the part of the argument where we introduce context. Perhaps I could introduce "consciousness" and get to the same point. But that would likely extend the argument by pages, and would only be explaining a sub-division of discrete experience. Why introduce a sub-division when it doesn't seem necessary to talk about context? If you can explain why my definition of an "I" does not allow me to identify other discrete experiencers, then you will have a point. But so far, I do not see that. Therefore, I do not think we need that context of your "I".

    What I'm trying to indicate is that your context of "I" for the argument isn't the "I" of the context of this argument. Within my contextual use of "I", can I apply that to reality without contradiction? You might say yes, but feel that it is inadequate and does not address so many other things you want to discuss. That is fine. My "I" does not negate your "I", nor its importance in application. If it makes you more comfortable, we could make a different word or phrase for it like "Primitive I". It is not the word that matters. It is the underlying meaning and context. For the context of ultimately arriving at applicable knowledge, and then at the idea that there are other discrete experiencers besides myself, is this enough?

    I would not count this as a real proof: that discretely experiencing doesn't contradict reality and, therefore, it is "correct".Bob Ross

    I do not believe it is an axiom. Someone can question if what they discretely experience is "real". The axiom I think is, "That which does not contradict reality is knowledge". I don't have any proof of this statement when it is introduced. I state it, then try to show it can be true. If the axiom is upheld, then I can conclude that what I discretely experience is known to me. But without the axiom of what knowledge is, I don't believe I claim that. Even then, I don't like the idea of "something that is true by default". I believe we can start with assumptions, but when we conclude there should be some proof that our assumptions are also correct in some way. But like you said, this is an aside to the conversation. I will not say you are wrong, and I am just giving an opinion that may also be wrong. The discussion of proofs and axioms could be a great topic for another time though!
  • Bob Ross
    1.7k
    Hello @Philosophim,

    I agree, I think that we both understand each others' definition of "I" and that I have not adequately shown the relevance of my use of "I". Furthermore, I greatly appreciate your well thought-out replies, as they have helped me understand your papers better! In light of this, I think we should progress our conversation and, in the event that it does become pertinent, I will not hesitate to demonstrate the significance (and, who knows, maybe, as the discussion progresses, they dissolve themselves). Until then, I think that our mere recognition of each others' difference of terminology (and the underlying meanings) will suffice. To progress our conversation, I have re-read your writings a couple times over (which does not in any way reflect any kind of mastery of the text) and I have attempted to assess it better. Moreover, I would like to briefly cover some main points and, thereafter, allow you to decide what you would like to discuss next. Again, these pointers are going to be incredibly brief, only serving as an introduction, so as to allow you to determine, given your mastery (naturally) of your own writings, what we ought to discuss next. Without further ado, here they are:

    Point 1: Differentiation is a product of error.

    When I see a cup, it is the error of my perception. If I could see more accurately, I would see atoms, or protons/neutrons/electrons or what have you, and, thereby, the distinction of cup from the air surrounding it becomes less and less clear. Perfectly accurate eyes are just as blind as perfectly inaccurate eyes: differentiation only occurs somewhere in between those two possibilities. Therefore, a lot of beliefs are both applicable knowledge and not applicable knowledge: it is relative to the scope. For example, the "cup" is a meaningful distinction, but is contradicted by reality: the more accurately we see, or sense in general, the more the concept of a "cup" contradicts it. Therefore, since it technically contradicts reality, it is not applicable knowledge. However, within the relative scope of, let's say, a cup on a table, it is meaningful to distinguish the two even though, in "reality", they are really only distinguishable within the context of an erroneous eye ball.

    Point 2: Contradictions can be cogent.

    Building off of point 1, here's an example of a reasonable contradiction:
    1. There are two objects, a cup and a table, which are completely distinct with respect to every property that is initially discretely experienced
    2. Person A claims the cup and table to be separate concepts (defining one as a 'cup' and the other as a 'table')
    3. Person B claims that the cup and the table are the same thing.
    4. Person A claims that Person B has a belief that contradicts reality and demonstrates it by pointing out the glaring distinctions between a cup and a table (and, thereby, the contradictions of them being the same thing).
    5. Person B argues that the cup and table are atoms, or electrons/protons/neutrons or what have you, and, therefore, the distinction between the cup and the table is derived from Person A's error of perception.
    6. In light of this, and even in acknowledgement of this, Person A still claims there is a 'cup' and a 'table'.
    7. Person A now holds two contradictory ideas (the "cup" and "table" are different, and yet fundamentally they are not different in that manner at all): the lines between a 'cup' and a 'table' arise out of the falseness of Person A's discrete experiences.
    8. Person B claims that Person A, in light of #7, holds a belief that is contradicted by reality and that Person A holds two contradictory ideas.

    Despite Person A's belief contradicting reality, it is still cogent because, within the relative scope of their perceptions, there is a meaningful distinction between a 'cup' and a 'table'--but only compared to reality in a relative scope. Also, Person A can reasonably hold both positions, even though they negate one another, because the erroneous nature of their existence produces meaningful distinctions that directly contradict reality. In this instance, there is no problem with a person holding (1) a belief that contradicts reality and (2) two contradictory, competing views of reality.

    Point 3: Accidental and essential properties are one and the same
    Building off of points 1 and 2, the distinction between an accidental and an essential property seems to differ only in the sense of scope. I think this is the right time to invoke the Ship of Theseus (which you briefly mention in the original post in this forum). When does a sheep stop being a sheep? Or a female stop being a female? Or an orange stop being an orange?

    Point 4: The unmentioned 5th type of induction

    There is another type of induction: "ingrained induction". You have a great example of this that you briefly discuss in the fourth essay: Hume's problem of induction. Another example is that the subject has to induce that "this" is separate from "that", but it is an ingrained, fundamental induction. The properties and characteristics that are a part of discrete experience do not in themselves prove in any way that they are truly differentiating factors: the table and the chair could, in reality, be two representations of the same thing, analogous to two very different looking representations of the same table directly produced by different angles of perspective. We have to induce that these properties and characteristics (such as light, depth, size, quantity, shape, color, texture, etc.) are reasonable enough differentiating factors to determine "this" as separate from "that". For example, we could induce that, given the meaningfulness of making such distinctions, we are valid enough in assuming they are, indeed, differentiating factors. Or we could shift the focus and claim that we don't really care if, objectively speaking, they are valid differentiating factors, but, rather, that the meaningfulness is enough.

    Point 5: Deductions are induced

    Building off of point 4, "ingrained induction" is utilized to gather any imaginable kind of deductive principle: without it, you can't have deductions. This directly implies that it is not completely the case that deductions are what one should try to anchor inductions to (in terms of your hierarchical structure). For example, the fact of gravity (not considering the theory or law), which is an induction anchored solely to the "ingrained induction", is a far "surer" belief, so to speak, than the deductive principle of what defines a mammal. If I had to bet on one, I would bet on the continuation of gravity and not the continuation of the term "mammal" as it has been defined: there are always incredible gray areas when it comes to deductive principles and, on some occasions, it can become so ambiguous that it requires refurbishment.

    Point 6: Induction of possibility is not always cogent

    You argue in the fourth essay that possibility inductions are cogent: this is not always the case. For example:

    A possibility is cogent because it relies on previous applicable knowledge. It is not inventing a belief about reality which has never been applicably known.

    1. You poofed into existence 2 seconds ago
    2. You have extremely vivid memories (great in number) of discretely experiencing iron floating on water
    3. From #2, you have previous applicable knowledge of iron floating on water
    4. Since you have previous applicable knowledge of iron floating on water, then iron floating on water is possible.
    5. We know iron floating on water is not possible
    6. Not all inductive possibilities are cogent

    Yes, you could test to see if iron can float, but, unfortunately, just because one remembers something occurring doesn't mean it is possible at all: your applicable knowledge term does not take this into account; only the subsequent plausibility inductions make this sub-distinction.

    Point 7: the "I" and the other "I"s are not used univocally

    Here's where the ternary distinction comes into play: you cannot prove other "I"s to be a discrete experiencer in a holistic sense, synonymous with the subject as a discrete experiencer, but only a particular subrange of it. You can't prove someone else to be "primitively aware", and consequently "experience", but only that they have the necessary processes that differentiate. In other words, you can prove that they differentiate, not that they are primitively aware of the separation of "this" from "that".

    Hopefully those points are a good starting point. I think I hit on a lot of different topics, so I will let you decide what to do from here. We can go point-by-point, all points at once, or none of the points if you have something you would like to discuss first.

    I look forward to hearing from you,
    Bob
  • Philosophim
    2.6k

    Fantastic points! It is a joy for me to see someone else understand the paper so well. I'm not sure anyone ever has. Let's go over the points you made.

    Point 1: Differentiation is a product of error.

    When I see a cup, it is the error of my perception. If I could see more accurately, I would see atoms, or protons/neutrons/electrons or what have you, and, thereby, the distinction of cup from the air surrounding it becomes less and less clear. Perfectly accurate eyes are just as blind as perfectly inaccurate eyes: differentiation only occurs somewhere in between those two possibilities.
    Bob Ross

    Instead of the word "error" I would like to use "difference/limitations". But you are right about perfectly inaccurate eyes being as blind as eyes which are able to see in the quantum realm, if they are trying to observe within the context of normal healthy eyes. Another contextual viewpoint is "zoom". Zoom out and you can see the cup. Zoom in on one specific portion and you no longer see the cup, but a portion of the cup and the elements it is made from.

    Fortunately, we are not only bound to sight with our senses. Not only do we have our natural senses, we can invent measurements to "sense" for us as well. Sight is when light is captured in your eyes, and your brain interprets it into something meaningful. Measurements at the nano or macro level work the same way.

    Therefore, a lot of beliefs are both applicable knowledge and not applicable knowledge: it is relative to the scope.Bob Ross

    You've nailed it, as long as it's realized that what is applicable is within the contextual scope being considered. I can have applicable knowledge in one scope, but not another. This applies not only to my personal context, but to group contexts as well. In America at one time, swans were defined as being white, and applicably known as such. In Western Australia, "swans" can be black. Each had applicable knowledge of what a swan was in their own context, but once the contexts clashed, both had new challenges to their previously applied knowledge. The result of that, within the context of worldwide zoology, is that swans can be either black or white.

    For example, the "cup" is a meaningful distinction, but is contradicted by reality: the more accurately we see, or sense in general, the more the concept of a "cup" contradicts it. Therefore, since it technically contradicts reality, it is not applicable knowledge. However, within the relative scope of, let's say, a cup on a table, it is meaningful to distinguish the two even though, in "reality", they are really only distinguishable within the context of an erroneous eye ball.Bob Ross

    If you remove the word error and replace it with "difference", I think you've nailed this. Within the context of having human eyes, we see the world, and know it visually, in a particular way. We do not see the ultraviolet wavelength, for example. In ultraviolet light, blue changes to white. So is it applicably known as blue, or white? Within the context of a human eyeball, it is blue. In the context of a measurement that can see ultraviolet light, it is white. Within the context of scientific reflective wavelengths, it is another color. None are in error. They are merely the definitions, and applicable knowledge, within those contextual definitions.

    Point 2: Contradictions can be cogent.Bob Ross

    I would like to alter this just slightly. Contradictions of applicable knowledge can never be cogent within a particular context. If there is a contradiction within that context, then it is not deduced, and therefore not knowledge. If two people hold two different sets of distinctive knowledge, but both can apply them within that particular context and gain applicable knowledge within that set of distinctive knowledge, then they are not holding a contradiction for themselves. But if two people are using the same distinctive context, then they cannot hold a contradiction in its application to reality.

    The real conflict is over which distinctive knowledge to use when there is a conflict. I'll try not to repeat myself on how distinctive contexts are resolved within an expanded context, but the examples I gave in part 3 show that. If you would like me to go over that again in this example, and also go point by point through your example, I will. I'm just trying to cover all of your points at a first pass, and I feel getting into the point-by-point specifics could be too long when trying to cover all of your initial points. Feel free to drill into, or ask me to drill further into, any of these points more specifically in your follow-up post.

    Building off of points 1 and 2, the distinction between an accidental and an essential property seems to differ only in the sense of scope. I think this is the right time to invoke the Ship of Theseus (which you briefly mention in the original post in this forum).Bob Ross

    Nailed it. And with this, we have an answer to the quandary that Theseus' ship posed. When is a ship not a ship anymore? Whenever we decide it's not a ship anymore within the scale of context. The answer to the question is that there is no one answer.

    For example, one society could state that both the original parts and the replaced parts are Theseus' ship. However, the ship that is constructed with the newest parts is the original ship. So if two ships were built, Theseus' ship would be the one with the newest parts, while the ship made out of the original old parts would be another ship.

    Another society could reverse this. They could say that once a ship has replaced all of its old parts, it is no longer the original ship anymore, and needs to be re-registered with the government. This could be due to the fact that the government assures that all vessels are seaworthy and meet regulation, and it figures that if all of the original parts are replaced, the ship needs to be re-inspected to ensure it still meets the regulatory standards.

    It is a puzzle that has no single answer, but it does have specific answers that fulfil the question; it has puzzled people because they believed there was only one answer.

    What is essential and accidental in each case is within the context of each society. For accidental properties, perhaps society B wasn't detailed enough, and it turns out you can replace "most" of a part of a ship, like an engine besides one cog, and that's still "the original engine with a lot of pieces replaced on it." In society A, they might say "It's a new engine with one old piece left on it". In the first case it is essential that every piece be replaced for something to be considered a "new" part, while in the latter, a few old parts put on a new part still means it's a "new part with some old pieces".

    There is another type of induction: "ingrained induction". You have a great example of this that you briefly discuss in the fourth essay: Hume's problem of induction. Another example is that the subject has to induce that "this" is separate from "that", but it is an ingrained, fundamental induction.Bob Ross

    Recall that the separation of "this" and "that" is not an induction in itself, just a discrete experience. It is only an induction when it makes claims about reality. I can imagine a magical unicorn in my head. That is not an induction. If I believe a magical unicorn exists in reality, that is a belief, and now an induction.

    Now you could argue that in certain cases of discrete experience, we also load them with what you call "ingrained inductions". Implicitly we might quickly add, "that exists in reality" and "this exists in reality". You are correct. Most of our day to day experiences are not knowledge, but inductions based off of past things we've known, or cogently induced. It's much more efficient that way. Gaining knowledge takes time, experimentation, and consideration. The more detailed the knowledge you want, the more detailed the context, and the more time and effort it takes to obtain it.

    And that is ok. I do not carry a ruler around with me to measure distance. Many times I estimate by eye whether something is a few feet away. And for most day to day contexts, that is fine. Put me in a science lab, and I am an incompetent who should be banned. Put me in a situation in which I need to know that the stream is a little under a foot wide, and I can easily cross, and I am an efficient and capable person.

    For example, the fact of gravity (not considering the theory or law), which is an induction anchored solely to the "ingrained induction"Bob Ross

    Hm, I would ask you to specify where the induction is. Gravity is not a monolith, but is built upon several conclusions of application. Is there a place where gravity has been applied and found to be inconclusive? The induction is not what gravity claims to describe itself as; the induction would be in its application. Off the top of my head, I could state that the idea that "Gravity is always applying a pull from anything that has mass to every other mass in the universe" is an induction for sure. That does not negate its application between particular bodies we can observe.

    But more to your point, I believe the theory allows us to more clearly identify what we can conclude as knowledge, and what we can include as cogent, and less cogent, inductions. It may require us to refine certain previous assumptions, or things that we have unintentionally let slide in past conclusions. As science is constantly evolving, I don't see a problem with this if it helps it evolve into a better state. If you would like me to go into how I see this theory assisting science, I can do so in a separate post if desired.

    The properties and characteristics that are a part of discrete experience do not in themselves prove in any way that they are truly differentiating factors: the table and the chair could, in reality, be two representations of the same thing, analogous to two very different looking representations of the same table directly produced by different angles of perspective.Bob Ross

    By discrete experience and context, they can, or cannot, be. Recall the situation between a goat and a sheep. If I include what a goat is under the definition of a sheep, I can hold that both a goat and a sheep are a "sheep". The reason why we divide up identities into smaller groups of description is that they have some use to us. It turns out that while a goat and a sheep share many properties, they are consistently different enough in behavior that it is easier and more productive to label them as two separate classes of animals.

    The idea that the table and chair are two separate things is not a truth in reality apart from our contexts. So there could be a context where chair and tables are separate, or they are together as a "set". We can identify them as we like, as long as we are clear with our identities, and are able to apply them to reality without contradiction.

    Point 6: Induction of possibility is not always cogent

    You argue in the fourth essay that possibility inductions are cogent: this is not always the case.
    Bob Ross

    Cogency is a way to define a hierarchy of inductions. But an induction is still always an induction. Its conclusion is not necessarily true from the premises. Just because something existed once, does not mean it will ever exist again. We know its possible, because it has at least existed one time. So in the case where you have a memory of iron floating on water, as long as you believe in the accuracy of your memories, you will reasonably believe it is possible for iron to float on water.

    Of course, when you extended that context to another person, you would be challenged. Person after person would state, "No, I've never seen or heard of any test that showed iron floated on water." What you do is your choice. You could start doubting your memory. You could start testing and see that it fails time and time again. You are the only one in the world who thinks its possible, while the rest of society does not.

    And finally, inductions are not more reasonable than deductions. If you believe it is possible for iron to float on water, but you continually deduce it is not, you would be holding an induction over a current deduction. You might try to explain it away by stating that it was possible that iron floated on water. Maybe physics changed. Maybe your memories are false or inaccurate. And as we can see, holding a deduction as of greater value than the induction gives us a reason to question our other inductions instead of holding them as true.

    And for our purposes, we might indeed be able to prove that the person's memories are false. Surely they had memories of parents. We could ask the parents if they knew of the person's birth. The person would quickly realize they did not have an ID, or a record of their birth anywhere in society. Once the memories were seen as doubtful, then they could not be sure they had actually seen iron float. At that point, it's plausible that the person's memories of iron floating on water were applicably known, but the belief has been reduced from a possibility, and is even less cogent now than affirming the deduction of today, that iron does not float on water.

    Point 7: the "I" and the other "I"s are not used univocally

    Here's where the ternary distinction comes into play: you cannot prove other "I"s to be a discrete experiencer in a holistic sense, synonymous with the subject as a discrete experiencer, but only a particular subrange of it. You can't prove someone else to be "primitively aware", and consequently "experience", but only that they have the necessary processes that differentiate. In other words, you can prove that they differentiate, not that they are primitively aware of the separation of "this" from "that".
    Bob Ross

    You may be correct. We would need to clarify the terms and attempt to apply them to reality. And that's fine. As for this line, "In other words, you can prove that they differentiate, not that they are primitively aware of the separation of 'this' from 'that'": yes, I can. Differentiation within existence is "primitive awareness". Let's not use that phrase anymore if it causes confusion. If we don't have solid definitions between us, we won't match up in the context of discussion.

    Another thing to consider is that I don't need to prove anything deeper in the "I" than I did in that context. If you read the paper and understand the concepts, are you a discrete experiencer? Can you deduce? Can you take the methodology, apply it, and come away with consistent results that give you a useful tool to interact with reality in a rational manner? It is there for you to prove to yourself. If you can understand the paper and follow its conclusions, then you have actively participated in the act of distinctive and applicable knowledge. If you want to produce another "I" for your own personal context, there is nothing stopping you, or contradicting the "primitive I" in the paper.

    What I want to take away from this, instead of debating over an "I", is the broader concept that there will be some things that we cannot applicably know based on the context we set up. Will I ever applicably know what it is to discretely experience as you do? No, nor will you for me. But can I applicably know that this is impossible? Yes. Applicably knowing our limits is just as important. Calculus was built around limits, where a calculation approaches an asymptote of results. While I may not be able to know what it's like to discretely experience as yourself, I can know you discretely experience, and use that knowledge to formulate a tool that can evaluate up to our limits.

    There is my massive reply! Out of all that, pick 2 that you would like me to drill into for the next response. When you are satisfied with those, we can go back and drill into two more, so I don't approach the questionable limits of how much I can type in one post! Wonderful contributions as always.
  • Bob Ross
    1.7k
    @Philosophim,

    I see now! I now understand your epistemology to be the application of deductions, or inductions that vary by degree of cogency, within a context (scope), which I completely agree with. This kind of epistemology, as I understand it, heavily revolves around the subject (but not in terms of simply what one can conceive, of course) and not whatever "objective reality", or the things-in-themselves, may be: I agree with this assessment. For the most part, in light of this, I think that your brief responses were more than adequate to negate most of my points. So I will generally respond to (and comment on) some parts I think worth mentioning and, after that, I will build upon my newly acquired understanding of your view (although, no doubt, I do not completely understand it yet).

    Instead of the word "error" I would like to use "difference/limitations". But you are right about perfectly inaccurate eyes being as blind as eyes which are able to see in the quantum realm, if they are trying to observe with the context of normal healthy eyes. Another contextual viewpoint is "zoom". Zoom out and you can see the cup. Zoom in on one specific portion and you no longer see the cup, but a portion of the cup and the elements it is made from.

    I agree: it is only "error" if we deem it to be "wrong" but, within context, it is "right".

    Contradictions of applicable knowledge can never be cogent within a particular context.

    In light of context, I agree: I was attempting to demonstrate contradictions within all contexts, which we both understand and accept as perfectly fine. On a side note, I also agree with your assessment of Theseus' ship.

    Recall that the separation of "this" and "that" is not an induction in itself, just a discrete experience. It is only an induction when it makes claims about reality. I can imagine a magical unicorn in my head. That is not an induction. If I believe a magical unicorn exists in reality, that is a belief, and now an induction.

    Upon further reflection, I think that I was wrong in stating that differentiation is an "ingrained induction"; I think the only example of "ingrained inductions" is, at its most fundamental level, Hume's problem of induction. That is what I was really meaning by my gravity example, although I was wrongly stating it as induction itself, that I induce that an object will fall the next time I drop it. This is a pure induction and, I would argue, is ingrained in us (and I think you would agree with me on that). After thinking some more, I have come to the conclusion that I am really not considering differentiation an "ingrained induction" but, rather, an assumption (an axiom to be more specific). I am accepting, and I would argue we all are accepting, the principle of noncontradiction as a metalogical principle, a logical axiom, upon which we determine something to either be or not be. However, as you are probably aware, we cannot "escape", so to speak, the principle of noncontradiction to prove or disprove the principle of noncontradiction, just like how we are in no position to prove or disprove the principle of sufficient reason or the principle of the excluded middle. You see, fundamentally, I think that your epistemology stems from "meaningfulness" with respect to the subject (and, thereafter, multiple subjects) and, therefrom, you utilize the most fundamental axiom of them all: the principle of noncontradiction as a means towards "meaningfulness". It isn't that we are right in applying things within context of a particular, it is that it is "meaningful" for the subject, and potentially subjects, to do so and, therefore, it is "right". This is why I don't think you are, in its most fundamental sense, proving any kind of epistemology grounded on absolute grounds but, rather, you are determining it off of "meaningfulness" on metalogical principles (or logical axioms). You see, this is why I think a justified, true, belief (and subsequently classical epistemology) has been so incomplete for such a long time: it is attempting to reach an absolute form of epistemology, wherein the subject can finally claim their definitive use of the term "know", whereas I think that to be fundamentally misguided: everything is in terms of relevancy to the subject and, therefore, I find that relevancy directly ties to relative scope (or context as you put it) (meaningfulness).

    I also apply this (and I think you are too) to memories: I don't think that we "know" any given memory to truly be a stored experience but, rather, I think that all that matters is the relevance to the subject. So if that memory, regardless of whether it got injected into their brain 2 seconds ago or it is just a complete figment of the imagination, is relevant (meaningful) to the subject as of "now", then it is "correct" enough to be considered a "memory" for me! If, on the contrary, it contradicts the subject as of "now", then it should be disbanded because the memory is not as immediate as experience itself. I now see that we agree much more than I originally thought!

    I would also apply this in the same manner to the hallucinated "real" world and the real world example I originally invoked (way back when (: ). For me, since it is relative to context, if the context is completely limited to the hallucinated "real" world, then, for me, that is the real world. Consequently, what I can or cannot know, in that example, would be directly tied to what, in hindsight, we know to be factually false; however, the knowledge, assuming it abides by the most fundamental logical axiom (principle of noncontradiction), is "right" within my context. Just like the "cup" and "table" example, we only have a contradiction within multiple "contexts", which I am perfectly fine with. With that being said, I do wonder if it is possible to resolve the axiomatic nature of the principle of noncontradiction, because I don't like assuming things.

    Furthermore, in light of our epistemologies aligning much better than I originally thought, I think that your papers seem to only thoroughly address the immediate forms of knowledge (i.e. your depiction of discrete experiences, memories, and personal context is very substantive), but do not fully address what comes thereafter. It seems to get into what I would call mediate forms of knowledge (i.e. group contexts and the induction hierarchies) in a general sense, sort of branching out a bit past the immediate forms, but I think that there's much more to discuss (I also think that there's a fundamental question of when, even in a personal context, hierarchical inductions stretch too far to have any relevancy). This is also exactly what I have been pondering in terms of my epistemology as well, so, if you would like, we could explore that.

    I look forward to hearing from you,
    Bob
  • Philosophim
    2.6k
    I think you understand the theory Bob. Everything you said seemed to line up! Yes, I would be interested in your own explorations into epistemology. Feel free to direct where you would like to go next.
  • Bob Ross
    1.7k
    Hello @Philosophim,

    I think that the first issue I am pondering is the fact that neither of our epistemologies, as discussed hitherto, really clarifies when a person "knows", "believes", or "thinks" something is true. Typically, as you are probably well aware, knowledge is considered with more intensity (thereby requiring a burden of proof), whereas belief and to simply "think" something is true, although they can have evidence, do not require any burden of proof at all. Your epistemology, as I understand it, considers a person to "know" something if they can apply it to "reality" without contradiction (i.e. applicable knowledge)--which I think doesn't entirely work. For example, I could claim that I "know" that my cat is in the kitchen with no further evidence than simply stating that the claim doesn't contradict my or anyone else who is in the room's "reality". Hypothetically, let's say I (and all the other people) are a fair distance away from the kitchen and so we cannot definitively verify the claim: do I "know" that my cat is in the kitchen? If so, it seems to be an incredibly "weak" type of knowledge to the point that it may be better considered a "belief" or maybe merely a theory (in a colloquial sense of the term, not scientific) (i.e. I "think").

    Likewise, we could take this a step further: let's say that I, and everyone else in the room, get on a phone call with someone who is allegedly in that very kitchen that we don't have access to (in which I am claiming the cat to reside) and that person states (through the phone call) that the cat is not in the kitchen. Do I now "know" that the cat is not in the kitchen? Do I "know" that that person actually checked the kitchen and didn't just make it up? Do I "know" that that even was a person I was talking to? I heard a voice, which I assigned to a familiar old friend of mine whom I trust, but I am extrapolating (inducing) that it really was that person and, thereafter, further inducing off of that induction that that person actually checked and, thereafter, that they checked in an honest manner: it seems as though there is a hierarchy even within claims of knowledge themselves. I think that your hierarchy of inductions is a step in the right direction, but what is a justified claim of knowledge? I don't think it would be an irrational induction to induce that the person calling me is (1) the old, trustworthy friend I am assigning the voice to and (2) that they actually didn't discretely experience the cat being in the kitchen, but am I really justified? If so, is this form of "knowledge" just as "strong", so to speak, as claiming to "know" that the cat isn't in the room I am in right now? Is it just as "strong" as claiming to "know" that the cat isn't in the room I was previously in, but have good reason to believe the cat hadn't somehow traveled that far and snuck its way into the room, whereas do I really "know" the cat didn't find its way into the kitchen (which is quite a distance away from me, let's say in a different country or something)?

    Another great example I have been pondering is this: do I "know" that a whale is the largest mammal on earth? I certainly haven't discretely experienced one and I most certainly haven't measured all the animals, let alone any single one, on this earth. So, how am I justified in claiming to "know" it? Sure, applying my belief that a whale is the largest mammal doesn't contradict my "reality", but does that really constitute "knowledge"? In reality, I simply searched it and trusted the search results. This seems, at best, to be a much "weaker" form of knowledge (of some sort, I am not entirely sure).

    I think after defining the personal context and even the general societal context of claims, as you did in your essays, and even after discussing hierarchical inductions, I am still left with quite a complexity of problems to resolve in terms of what is knowledge.

    Bob
  • Philosophim
    2.6k
    Wonderful! We are about to get into part 4, induction hierarchies. I have never been able to discuss this aspect with someone seriously before, as no one has gotten to the point of mostly understanding the first three parts. While we discuss, recall that our methodology of distinctive knowledge, and of deductively applying it as applicable knowledge, still stands. Within part 4, I subdivided inductions into four parts, but I can absolutely see the need for additional sub-divisions, so feel free to point out any you see.

    For example, I could claim that I "know" that my cat is in the kitchen with no further evidence than simply stating that the claim doesn't contradict my or anyone else who is in the room's "reality".Bob Ross

    Applicably knowing something depends on our context, and while context can also be chosen, the choice of context is limited by our distinctive knowledge. If, for example, I did not have the distinctive knowledge that my friend could lie to me, then I would know the cat was in the room. But, if I had the distinctive knowledge that my friend could lie to me, I could make an induction that it is possible that my friend could be lying to me. Because that is an option I have not tested in application and, due to my circumstance, cannot test even if I wanted to, I must make an induction.

    I think that your hierarchy of inductions is a step in the right direction, but what is a justified claim of knowledge?Bob Ross

    When you can deduce nothing else within your context of distinctive knowledge. If you recall the sheep and goat issue, prior to separating the identities of a sheep and a goat, both could be called a "sheep". But once the two identities are formed, there is a greater burden on the person who is trying to applicably know whether that animal is either a sheep, or a goat.

    Arguably, I think we applicably know few things. The greater your distinctive knowledge and more specific the context, the more difficult it becomes to applicably know something. Arguably though, the greater specificity also gives you a greater assurance that what you do applicably know will allow greater precision in handling reality. It is easier for a person with a smaller imagination and vocabulary to know something. This reminds me of the concept of Newspeak in 1984.

    "In "The Principles of Newspeak", the appendix to the novel, Orwell explains that Newspeak follows most of the rules of English grammar, yet is a language characterised by a continually diminishing vocabulary; complete thoughts are reduced to simple terms of simplistic meaning.
    - https://en.wikipedia.org/wiki/Newspeak

    Orwell understood implicitly that the simpler and more general the language, the more you could get your populace to "applicably know" without question. If the state is "good" no matter what the state does, then questioning anything the state does is "evil". Simple terms make simple men. But, simple terms also make efficient men. It is not that induction is wrong, it is that, incorrectly understood, it can be misused. I think a useful term for when we are discussing a situation in which a person has extremely limited distinctive knowledge is the "simpleton context". We can use this when there is a question of fundamentals.

    I would argue the bulk of our decisions are made through intuitive inductions, and being able to categorize which ones are the most useful to us is one of the strengths of the theory. Now that we have a way to manage the cogency of inductions, let's go back to your cat-in-the-kitchen example.

    As a reminder, the hierarchy of inductions is as follows: probability, possibility, plausibility, and irrational. Each is formed based on how much of its underlying logic is based upon deductions versus other inductions. First, let's examine the most basic of inductions.

    Likewise, we could take this a step further: let's say that I, and everyone else in the room, get on a phone call with someone who is allegedly in that very kitchen that we don't have access to (in which I am claiming the cat to reside) and that person states (through the phone call) that the cat is not in the kitchen.Bob Ross

    I'll just cover the first three questions. We will not use the simpleton context here. It is a useful context for addressing fundamentals, so if there are any questions, we can return to it at any time to find the underlying basis. We will be people who are normal seekers of knowledge.

    Do I now "know" (applicably) that the cat is not in the kitchen?
    No, because I know it is possible that my friend might lie, and I don't know if the person is telling the truth.

    Do I "know" that that person actually checked the kitchen and didn't just make it up?
    If it is possible that my friend could call me outside of the kitchen, and I have no way of verifying where he called from, then no.

    Do I "know" that was even was a person I was talking to?
    If I know it is possible that something else could mimic my friend's voice to the point I would be fooled, then no.

    From this discussion, I think I've actually gleaned something new from my theory I didn't explicitly realize before! If we have the distinctive knowledge of something that is possible or probable, these act as potential issues we have to applicably test and eliminate before we can say we applicably know something. This is because possibilities and probabilities are based on prior applicable knowledge.

    Let's change the cat situation to different hierarchies so you can see different outcomes. The person who you're talking to is a trusted friend who rarely lies to you. It's possible they could, but it's improbable. There doesn't appear to be a tell in their voice that they are lying, so it would be more cogent to look at the probability they are lying. They rarely lie to you, and they wouldn't have an incentive to lie (that you know of), so you assume they probably aren't lying.

    They tell you the cat is in the kitchen as you hear them pouring the food into its bowl. You even hear a "meow" over the phone. You still don't know it, because you have distinctive knowledge of the fact that your friend could be lying this one time, or playing a clever prank. You know that it is possible to get an electronic device that would mimic the sound of a cat. You know that it is possible for someone to pour something into a bowl that sounds like cat food, but that doesn't mean the cat is in the kitchen. But, again, it's improbable that your friend is lying to you. Probability is more cogent to make decisions off of than possibilities. Therefore, you are more reasonable in assuming your friend is not lying to you, and making the induction that the cat is in the kitchen.

    Of course, you could be wrong. All inductions could be wrong. But it would still be less reasonable for you to believe the cat was not in the kitchen based on possibility, when you have a probability that indicates the cat likely is.

    Another great example I have been pondering is this: do I "know" that a whale is the largest mammal on earth?Bob Ross

    It depends on your context. If you are implicitly including, "out of all the mammals we have discovered so far," then yes. Or you could explicitly give that greater specific context and add that phrase into the sentence. Oftentimes, we may say things with implied contexts behind them, due to efficiency. The danger of efficiency is of course that people can skip steps, overlook implicit claims, and take things literally when that was never intended.

    When we also state, "out of all the mammals we have discovered so far," we are also implicitly noting it is "out of all the possible mammals we've discovered so far". We do not consider plausibilities. For example, I can imagine an animal bigger than a whale that stands on four feet and reaches its neck into the clouds. But we have never applicably known such a creature, so it is not an induction that can challenge the deduction we have made.

    I feel there is a lot to cover and refine with inductions, so I look forward to your questions and critiques!
  • Bob Ross
    1.7k
    Hello @Philosophim,

    I have never been able to discuss this aspect with someone seriously before, as no one has gotten to the point of mostly understanding the first three parts.

    I am glad that we are able to agree and discuss further as well! Honestly, as our discussion has progressed, I have realized more and more that we hold incredibly similar epistemologies: I just didn't initially understand your epistemology correctly.

    Applicably knowing something depends on our context, and while context can also be chosen, the choice of context is limited by our distinctive knowledge. If, for example, I did not have the distinctive knowledge that my friend could lie to me, then I would know the cat was in the room. But, if I had the distinctive knowledge that my friend could lie to me, I could make an induction that it is possible that my friend could be lying to me. Because that is an option I have not tested in application and, due to my circumstance, cannot test even if I wanted to, I must make an induction.

    I think that this is fair enough: we are essentially deriving the most sure (or reasonable) knowledge we can about the situation and, in a sense, it is like a spectrum of knowledge instead of concrete, absolute knowledge. However, with that being said, I think that a relative, spectrum-like epistemology (which I think both our epistemologies could be characterized as) does not account for when we should simply suspend judgement. You see, if we are always simply determining which is more cogent, we are, thereby, never determining if the most cogent form is really cogent enough, within the context, to be worth even holding as knowledge in the first place.

    Arguably, I think we applicably know few things. The greater your distinctive knowledge and more specific the context, the more difficult it becomes to applicably know something.

    I completely agree. Furthermore, I also understand your reference to 1984 and how vocabulary greatly affects what one can or cannot know within their context because, I would say, their vocabulary greatly determines their context in the first place.

    I think before I get into your response to the cat example I need to elaborate on a couple things first. Although I think that your hierarchical inductions are a good start, upon further reflection, I don't think they are quite adequate enough. Let me try to explain what I mean.

    Firstly, let's take probabilistic inductions. Probability is not, in itself, necessarily an induction. It is just like all of mathematics: math is either deduced or, thereupon, induced. For example, in terms of math, imagine I am in an empty room where I only have the ability to count on my fingers and, let's say, I haven't experienced anything, in terms of numbers, that exceeded 10. Now, therefrom, I could count up to 10 on my fingers (I understand I am overly simplifying this, but bear with me). I could then believe that there is such a thing as "10 things" or "10 numbers" and apply that to reality without contradiction: this is a deduction. Thereafter, I could then induce that I could, theoretically, add 10 fingers worth of "things" 10 times and, therefore, have 100 things. Now, so far in this example, I have no discrete experiences of 100 things but determined that I know 100 "fingers" could exist. So logically, as of now, 100 only exists in terms of a mathematical induction, whereas 10 exists in terms of a deduction. I would say the same thing is true for probability. Imagine I am in a room that is completely empty apart from me and a deck of 52 cards. Firstly, I can deductively know that there are 52 "things". Secondly, I could deductively know an idea of "randomness" and apply that without contradiction as well. Thirdly, I could deductively know that, at "random", me choosing a king out of the deck is a chance of 4/52 and apply that without contradiction (I could, thereafter, play out this scenario ad infinitum, where I pick a card out of a "randomly" shuffled deck, and my results would slowly even out to 4/52). All of this, thus far, is deductive: created out of application of beliefs towards reality in the same way as your sheep example. Now, where induction, I would say, actually comes into play, in terms of probability, is an extrapolation of that probabilistic application. For example, let's take that previous 52 deck scenario I outlined above: I could then, without ever discretely experiencing 100 cards, induce that the probability of picking 1 specific card out of 100 is 1/100. Moreover, I could extrapolate that knowledge, which was deduced, of 4/52 and utilize that to show whether something is "highly probable" or "highly improbable" or something in between: this would also be an induction. For example, if I have 3 cards, two of which are aces and one is a king, I could extrapolate that it is "highly probable" that I will randomly pick an ace out of the three because of my deduced knowledge that the probability of picking an Ace is 2/3 in this case. My point is that I view your "probabilistic inductions" as really being a point towards "mathematical inductions", which does not entirely engross probability. Your 52 card deck example in the essays is actually a deduction and not an induction.
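
    (As an aside, the "ad infinitum" part of this can be pictured with a tiny simulation. This is only an illustrative sketch, assuming Python; the helper names and the trial count are invented for the example and nothing more.)

        # Draw one card from a shuffled 52-card deck many times and watch the
        # observed frequency of a given kind (say, kings) settle toward 4/52.
        import random

        deck = [rank for rank in range(13) for _ in range(4)]   # 13 kinds x 4 suits = 52 cards
        KING = 12                                               # call rank 12 "king"
        trials = 100_000
        hits = 0
        for _ in range(trials):
            random.shuffle(deck)
            if deck[0] == KING:        # the card we happened to pick
                hits += 1
        print(hits / trials)           # tends toward 4/52, about 0.077

        # The same check for the three-card case (two aces, one king):
        hand = ["ace", "ace", "king"]
        aces = sum(random.choice(hand) == "ace" for _ in range(trials))
        print(aces / trials)           # tends toward 2/3, about 0.667

    (The deduced ratio is fixed in advance; the simulation merely shows the applications drifting toward it.)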

    Secondly, I think that probabilistic inductions and plausible inductions are not always directly comparable. To be more specific, a probabilistic "fact" (whether deduced or induced) is comparable to plausible inductions and, in that sense, I think you are right to place the former above the latter; however, I do not think that "extended" probabilistic claims are comparable (always) to plausible inductions. For example, let's say that there is a tree near my house, which I can't see from where I am writing this, that I walk past quite frequently. Let's also say that I have three cards in front of me, two of which are aces. Now, I would say that the "fact" that the probability of me randomly picking an ace is 2/3 is "surer" (more cogent form of knowledge) than any claim I could make in terms of an inapplicable plausibility induction towards the tree still being where it was last time I walked past it (let's assume I can't quickly go look right now). However, if I were to ask myself "are you surer that you will pick an ace out of these three cards or that the tree is still intact", or, as I think you would put it, "is it more cogent to claim I will pick an ace out of these three cards or that the tree is still intact", I am now extending my previous "fact" (2/3) into an "extended", contextual claim that weighs the "highly probableness" of picking an ace out of three cards against the plausibility of the tree still being intact. These are two, as they were stated in the previous sentence, completely incompatible types of claims and, therefore, one must be "converted" into the other for comparison. To claim something is probable is purely a mathematical game, whereas plausibility directly entails means of evidence other than math (I may have walked by the tree yesterday, there may have been no storms the previous night, and I may have other reasons to believe it "highly implausible" that someone planned a heist to remove the tree). In this example, although I may colloquially ask myself "what are the odds that someone moved the tree", I can't actually convert the intactness of the tree into pure probability: it is plausibility. I think this shows that probability and plausibility, in terms of "extended" knowledge claims stemming from probability, are not completely hierarchical.

    Thirdly, building off of the previous paragraph, even though they are not necessarily comparable in the sense of probability, they can be compared in terms of immediateness (or discrete experiential knowledge--applicable and distinct knowledge): the "probabilistic deductions" are "surer" (or more cogent) than "plausible inductions", but "probabilistic inductions" are not necessarily "surer" than "plausible inductions" (they are only necessarily surer if we are talking about the "fact" and not an "extension"). Let's take my previous example of the tree and the 3 cards, but let's say it's 1000 cards and one of them is an ace: I think I am "surer" that the tree is still there, although it is an argument made solely from an inapplicable plausible induction (as I haven't actually calculated the probability nor have I, in this scenario, the ability to go discretely experience the tree), than me getting an ace from those 1000 cards. However, I am always "surer" that the probability of getting an ace out of 1000 cards is 1/1000 (given there's only one) than the intactness of the tree (again, assuming I can't discretely experience it as of now). Now I may have just misunderstood your fourth essay a bit, but I think that your essay directly implies that the hierarchy, based off of proximity toward deductions, is always in that order of cogency. However, I think that sometimes the "extension" of probability is actually less cogent than a plausible induction.

    I was going to say much more, and elaborate much more, but this is becoming entirely too long. So I will leave you with my conclusion: the cogency (or "sureness", as I put it) of knowledge is not, at its most fundamental level, about which kind of induction the given claim stems from, but more about the degree of immediateness to the "I". You see, the probabilistic "fact" of picking an ace out of three cards (two of which are aces: 2/3) is "surer", or more cogent, because it is very immediate to the "I" (it is a deduction directly applied to discrete experiences). The probabilistic "extension" claim, built off of a mathematical deduction in this case (but could be an induction if we greatly increased the numbers), that I am "surer" of getting an ace out of three cards (2/3), is actually a less cogent (or less "sure") claim than that the tree is intact because, in this example, the tree's intactness is more immediate than the result of picking a card being an ace. Sure, I know it is 2/3 probability, but I could get the one card that isn't an ace, whereas, given that I walked past the tree yesterday (and couple that with, let's say, a relatively strong case for the tree being there--like there wasn't a tornado that rolled through the day before), the "sureness" is much greater; I have a lot of discrete experiences that lead me to conclude that it is "highly plausible" (note that "highly probable" would require a conversion I don't think possible in this case) that the tree is still there. Everything, the way I see it, is based off of immediateness, but it gets complicated really fast. Imagine that I didn't have an incredibly strong case for the tree still being there (like I walked past it three weeks ago and there was a strong storm that occurred two weeks ago), then it is entirely possible, given an incredible amount of analysis, that the "sureness" would reverse. As you have elegantly pointed out in your epistemology, this is expected as it is all within context (and context, I would argue, is incredibly complicated and enormous).

    I will leave it at that for now, as this is getting much longer than I expected, so, I apologize, I will address your response to the cat example once we hash this out first (as I think it is important).

    Bob
  • Philosophim
    2.6k
    Great comments so far Bob! I'll dive in.

    Firstly, let's take probabilistic inductions. Probability is not, in itself, necessarily an induction.Bob Ross

    I understand exactly what you are saying in this paragraph. I've deductively concluded that these inductions exist. Just as it is deductively concluded that there are 4 jacks in 52 playing cards.

    Now, where induction, I would say, actually comes into play, in terms of probability, is an extrapolation of that probabilistic application.Bob Ross

    For example, if I have 3 cards, two of which are aces and one is a king, I could extrapolate that it is "highly probable" that I will randomly pick an ace out of the three because of my deduced knowledge that the probability of picking an Ace is 2/3 in this case.Bob Ross

    Exactly. That is the induction I am talking about. We can know an induction discretely. But we only know an induction's outcome when we apply it to reality.

    My point is that I view your "probabilistic inductions" as really being a point towards "mathematical inductions", which does not entirely engross probability.Bob Ross

    There are likely degrees of probability we could break down. Intuitively, pulling a jack out of a deck of cards prescribes very real limits. However, if I note, "Jack has left their house for the last four days at 9am. I predict that today, Friday, they will probably do the same," I think there's an intuition that it's less probable, and more just possible.

    Perhaps the key is the fact that we don't know what the denominator limit really is. The chance of a jack would be 4/52, while the chance of Jack leaving his house at 9 am is 4 out of...5? Does that even work? I have avoided these probabilities until now, as they are definitely murky for me.

    Secondly, I think that probabilistic inductions and plausible inductions are not always directly comparable. To be more specific, a probabilistic "fact" (whether deduced or induced) is comparable to plausible inductions and, in that sense, I think you are right to place the former above the latter; however, I do not think that "extended" probabilistic claims are comparable (always) to plausible inductions.Bob Ross

    Ah, I'm certain I cut this out of part four to whittle it down. A hierarchy of inductions only works when applying a particular set of distinctive knowledge to an applicable outcome. We compare the hierarchy within the deck of cards. We know the probability of pulling a jack, we know it's possible we could pull a jack, but the probability that we won't pull a jack is more cogent.

    The intactness of the tree would be evaluated separately, as the cards have nothing to do with the tree's outcome. So for example, if the tree was of a healthy age, and in a place unlikely to be harmed or cut down, it is cogent to say that it will probably be there the next day. Is it plausible that someone chopped it down last night for a bet or because they hated it? Sure. But I don't know if that's actually possible, so I would be more cogent in predicting the tree will still be there tomorrow with the applicable knowledge that I have.

    I was going to say much more, and elaborate much more, but this is becoming entirely too long. So I will leave you with my conclusion: the cogency (or "sureness", as I put it) of knowledge is not, at its most fundamental level, about which kind of induction the given claim stems from, but more about the degree of immediateness to the "I".Bob Ross

    With the clarification I've made, do you think this still holds?

    Imagine that I didn't have an incredibly strong case for the tree still being there (like I walked past it three weeks ago and there was a strong storm that occurred two weeks ago), then it is entirely possible, given an incredible amount of analysis, that the "sureness" would reverse. As you have elegantly pointed out in your epistemology, this is expected as it is all within context (and context, I would argue, is incredibly complicated and enormous).Bob Ross

    This ties into my "degrees of probability" that I mentioned earlier. In these cases, we don't have the denominator like in the "draw a jack" example. In fact, we just might not have enough applicable knowledge to make a decision based on probability. The more detailed our applicable knowledge in the situation, the more likely we are to craft a probability that seems more cogent. If we don't know the destructive level of the storm, perhaps we can't really make a reasonable induction. Knowing that we can't make a very good induction, is also valuable at times too.

    My apologies if this is a little terse for me tonight. I will have more time later to dive into these if we need more detail; I just wanted to give you an answer without any more delay.
  • Bob Ross
    1.7k
    Hello @Philosophim,
    I apologize, as things have been a bit busy for me, but, nevertheless, here's my response!

    My apologies if this is a little terse for me tonight. I will have more time later to dive into these if we need more detail; I just wanted to give you an answer without any more delay.

    Absolutely no problem! I really appreciated your responses, so take your time! I think your most recent response has clarified quite a bit for me!

    I understand exactly what you are saying in this paragraph. I've deductively concluded that these inductions exist. Just as it is deductively concluded that there are 4 jacks in 52 playing cards.

    There are likely degrees of probability we could break down. Intuitively, pulling a jack out of a deck of cards prescribes very real limits. However, if I note, "Jack has left their house for the last four days at 9am. I predict that today, Friday, they will probably do the same," I think there's an intuition that it's less probable, and more just possible.

    Perhaps the key is the fact that we don't know what the denominator limit really is. The chance of a jack would be 4/52, while the chance of Jack leaving his house at 9 am is 4 out of...5? Does that even work? I have avoided these probabilities until now, as they are definitely murky for me.

    Although I am glad that we agree about the deduction of probability inductions, I think that we are using "probability" in two different ways and, therefore, I think it is best if I define it, along with "plausibility", for clarification purposes (and you can determine if you agree with me or not). "Plausibility" is a spectrum of likelihoods, in a generic sense, where something is "Plausible" if it meets certain criteria (which do not need to be derived solely from mathematics) and is "Implausible" if it meets certain other criteria. In other words, something is "plausible" if it has enough evidence to be considered such and something is "implausible" if it has enough evidence to be considered such. Now, since "plausibility" exists within a spectrum, it is up to the subject (and other subjects: societal contexts) to agree upon where to draw the line (like the exact point at which anything thereafter is considered "plausible" or anything below a certain line is considered "implausible"). Most importantly, I would like to emphasize that "plausibility", although one of its forms of evidence can be mathematics, does not only encompass math. On the contrary, "probability" is a mathematical concrete likelihood: existing completely separate from any sort of spectrum. The only thing that subjects need to agree upon, in terms of "probability", is mathematics, whereas "plausibility" requires a much more generic subscription to a set (or range) of qualifying criteria on which the spectrum is built. For example, when I say "X is plausible", this only makes sense within context, where I must define (1) the set (range) of valid forms of evidence and (2) what quantity of them is required for X to be considered qualified under the term "plausible". However, if I say "X is probable", then I must determine (1) the denominator, (2) the numerator (possibilities), and (3) finally calculate the concrete likelihood. When it is "plausible", it simply met the criteria the subject pre-defined, whereas saying there is a 1% chance of picking a particular card out of 100 cards is a concrete likelihood (I am not pre-defining any of it). Likewise, if I say that X is "plausible" because it is "probable", then I am stating that (1) mathematical concrete likelihoods are a valid form of evidence for "plausibilities" and (2) the mathematical concrete likelihood of X is enough for me to consider it "plausible" (that the "probability" was enough to shift the proposition of X past my pre-defined line of when things become "plausible").
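
    (To make the formal difference concrete, here is a toy sketch, assuming Python; the function names, the evidence scores, and the 0.6 threshold are all invented purely for illustration. A probability is a computed ratio with nothing to pre-define but the counts; a plausibility verdict depends on a criteria set and a line the subject(s) agreed on in advance.)

        def probability(favourable: int, total: int) -> float:
            """A concrete mathematical likelihood: just the ratio of the counts."""
            return favourable / total

        def plausible(evidence_scores: list, threshold: float = 0.6) -> bool:
            """A spectrum verdict: the valid evidence and the cut-off line are pre-defined."""
            return sum(evidence_scores) / len(evidence_scores) >= threshold

        print(probability(4, 52))           # 0.0769... -- a fixed ratio, no line to draw
        print(plausible([0.9, 0.8, 0.4]))   # True -- it crossed the pre-agreed line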

    You see, when you say "while the chance of Jack leaving his house at 9 am is 4 out of...5?", I think you are conflating "probability" with "plausibility"--unless you can somehow mathematically determine the concrete likelihood of Jack leaving (I don't think you can, or at least not in the vast majority of cases). I think that we colloquially use "probable" and "plausible" interchangeably, but I would say they are different concepts in a formative sense. Now it is entirely possible, hypothetically speaking, that two subjects could determine that the only valid form of evidence is mathematically concrete likelihoods (or mathematically derived truths in a generic sense) and that, thereby, that is the only criterion by which something becomes worthy of the term "plausible" (and, thereby, anything not derived from math is "implausible"), but I would say that those two people won't "know" much about anything in their lives.

    Ah, I'm certain I cut this out of part four to whittle it down. A hierarchy of inductions only works when applying a particular set of distinctive knowledge to an applicable outcome. We compare the hierarchy within the deck of cards. We know the probability of pulling a jack, we know it's possible we could pull a jack, but the probability that we won't pull a jack is more cogent.

    The intactness of the tree would be evaluated separately, as the cards have nothing to do with the tree's outcome. So for example, if the tree was of a healthy age, and in a place unlikely to be harmed or cut down, it is cogent to say that it will probably be there the next day. Is it plausible that someone chopped it down last night for a bet or because they hated it? Sure. But I don't know if that's actually possible, so I would be more cogent in predicting the tree will still be there tomorrow with the applicable knowledge that I have.

    Ah, I see! That makes a lot more sense! I would agree in a sense, but also not in a sense: this seems to imply that we can't compare two separate claims of "knowledge" and determine which is more "sure"; however, I think that we definitely can in terms of immediateness. I think that you are right in that a "probability" claim, like all other mathematical inductions, is more cogent than simply stating "it is possible", but why is this? I think it is due to the unwavering inflexibility of numbers. All my life, from my immediate forms of knowledge (my discrete experiences and memories), I have never come in contact with such a thing as a "flaky number" because it is ingrained fundamentally into the processes that make up my immediate forms of knowledge (i.e. my discrete experiences have an ingrained sense of plurality and, thereby, I do too). Therefore, any induction I make pertaining to math, since it is closer to my immediate forms of knowledge (in the sense that it is literally ingrained into them), assuming it is mathematically sound, is going to trump something less close to my immediate forms of knowledge (such as the possibility of something: "possibility" is just a way of saying "I have discretely experienced it before without strong correlation, therefore it could happen again", whereas a mathematical induction such as "multiplication will always work for two numbers regardless of their size" is really just a way of saying "I have discretely experienced this with strong correlation (so strong, in fact, that I haven't witnessed any contradicting evidence), therefore it will always happen again"). When I say "immediateness", I am not entirely talking merely about physical locations but, rather, about what is more forthright in your experiences: the experience itself is the thing by which we derive all other things and, naturally, that which corresponds to it will be maintained over things that do not.

    For example, the reason, I think, human opinions are wavering is due to me having experiences of people's opinions changing, whereas if I had always experienced (and everyone else always experienced) people's opinions unchanging (like gravity or 2+2 = 4), then I would logically characterize it, along with gravity, as a concrete, strongly correlated experience held very close to me. Another example is germ theory: we say we "know" germs make us sick, and that is fine, but it is the correlation between the theory and our immediate forms of knowledge (discrete experiences and memories) that makes us "know" germ theory to be true. We could be completely wrong about germs, but it is undeniable that something makes us sick and that everything adds up so far that it is germs (it is strongly correlated) (why? because that is a part of our immediate knowledge--discrete experiences and memories).

    With that in mind, let's take another look at the tree and cards example: which is more "sure"? I think that your epistemology is claiming that they must be separately, within their own contexts, evaluated to determine the most cogent form of induction to use within that particular context (separately), but, upon determining which is more cogent within that context, we cannot go any farther. On the contrary, I think that I am more "sure" of the deduction of 2/3 probability because it is tied to my immediate forms of knowledge (discrete experiences and memories). But I am more "sure" of the tree still being there (within that context) than that I am going to actually draw the ace because I have more immediate knowledge (I saw it 2 hours ago, etc) of the tree that adds up to it still being there than me actually getting an ace. Another way to think about it is: if my entire life (and everyone else testified to it in their lives as well), when presented with three cards (two of which are aces), I always randomly drew an ace--as in every time with no exceptions--then I would say the "sureness" reverses and my math must have been wrong somehow (maybe probability doesn't work after all? (: ). This is directly due to the fact that it would be no different than my immediate knowledge of gravity or mathematical truths (as in 2+2 = 4, or the extrapolation of such). Now, when I use this example, I am laying out a very radical example and I understand that me picking an ace 10 times in a row does not constitute probability being broken; however, if everyone all attested to always experiencing themselves picking an ace every time, and that is how I grew up, then I see no difference between this and the reality of gravity.

    I was going to say much more, and elaborate much more, but this is becoming entirely too long. So I will leave you with my conclusion: the cogency (or "sureness", as I put it) of knowledge is not, at its most fundamental level, about which kind of induction the given claim stems from, but more about the degree of immediateness to the "I". — Bob Ross


    With the clarification I've made, do you think this still holds?

    Sort of. I think that, although I would still hold the claim that it is based off of immediateness, I do see your point in terms of cogency within a particular scenario, evaluated separately from the others, and I think, in that sense, you are correct. However, I don't think we should have to limit our examinations to their specific contexts: I think it is a hierarchy of hierarchies. You are right about the first hierarchy: you can determine the cogency based off of possibility vs probability vs plausibility vs irrationality. However, we don't need to stop there: we can, thereafter, create a hierarchy of which contextual claims we are more "sure" of and which ones we are less "sure" of (it is like a hierarchy within a spectrum).

    In these cases, we don't have the denominator like in the "draw a jack" example. In fact, we just might not have enough applicable knowledge to make a decision based on probability. The more detailed our applicable knowledge in the situation, the more likely we are to craft a probability that seems more cogent. If we don't know the destructive level of the storm, perhaps we can't really make a reasonable induction. Knowing that we can't make a very good induction, is also valuable at times too.

    I think that in most cases we cannot create an actual probability for the situation: I think most of what people count as "knowledge" consists of plausibilities. On another note, I completely agree with you that it is entirely the case that there is a point at which we should suspend judgment: but what is that point? That is yet to be decided! I think we can probably cover that next if you'd like.

    I look forward to your response,
    Bob
  • Philosophim
    2.6k
    Great conversation so far Bob! First, I have had time to think about it, and yes, I believe without a denominator, one cannot have probability, only possibilities that have occurred multiple times. I think this ties in with your idea of "immediateness" when considering cogency, and I think you have something that could be included in the cogency calculus.

    I believe immediateness is a property of "possibility". Another is "repetition". A possibility that has been repeated many times, as well as its immediateness in memory, would intuitively seem more cogent than something that has occurred less often and farther in the past. Can we make that intuitiveness reasonable?

    In terms of repetition, I suppose repetition means that you have applicably known an identity without distinctive alteration or amending multiple times. Something that has stood applicably for several repeats would seem to affirm its use in reality without contradiction.

    Immediateness also ties into this logic. Over time, there is ample opportunity for our distinctive knowledge to be expanded and amended. Whenever our distinctive knowledge changes, so does our context. What we applicably knew in our old context, may not apply in our current context.

    I think immediateness is a keen insight Bob, great contribution!

    "Plausibility" is a spectrum of likelyhoods, in a generic sense, where something is "Plausible" if it meets certain criteria (of which do not need to be derived solely from mathematics) and is "Implausible" if it is meets certain other criteria.Bob Ross

    I'll clarify plausibility. A plausibility has no consideration of likelihood, or probability. Plausibility is simply distinctive knowledge that has not been applicably tested yet. We can create plausibilities that can be applicably tested, and plausibilities that are currently impossible to applicably test. For example, I can state, "I think its plausible that a magical horse with a horn on its head exists somewhere in the world." I can then explore the world, and discover that no, magical horses with horns on their head do not exist.

    I could add things like, "Maybe we can't find them because they use their magic to become completely undetectable." Now this has become an inapplicable plausibility. We cannot apply it to reality, because we have set it up to be so. Fortunately, a person can decline to treat such plausibilities as cogent by saying, "Since we cannot applicably know such a creature, I believe it is not possible that they exist." That person has a higher tier of induction, and the plausibility can be dismissed as being less cogent.

    With this explored, we can identify probability as an applicable deduction that concludes both a numerator and denominator, or ratio. Possibility is a record of applicable deduction at least once. It is a numerator, with an unknown denominator. Repetition and immediateness intuitively add to its cogency. Finally plausibilities are distinctive knowledge that has not had a proper attempt at applicable deduction.
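
    If it helps to see the hierarchy at a glance, here is a rough sketch of the ordering as stated above (assuming Python; the tier names are from part 4, but the numeric ranks are arbitrary and only mean "higher is more cogent"):

        # Ordering of induction tiers; the numbers carry no meaning beyond rank.
        COGENCY = {"probability": 3, "possibility": 2, "plausibility": 1, "irrational": 0}

        def more_cogent(a: str, b: str) -> str:
            """Return whichever induction tier is more cogent within one context."""
            return a if COGENCY[a] >= COGENCY[b] else b

        # e.g. weighing "my friend is probably not lying" against
        # "it is possible my friend is lying":
        print(more_cogent("probability", "possibility"))   # -> "probability"

    Remember that this only ranks inductions applied within a single set of distinctive knowledge; it says nothing about comparing across contexts.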

    Another way to think about it is: if my entire life (and everyone else testified to it in their lives as well), when presented with three cards (two of which are aces), I always randomly drew an ace--as in every time with no exceptions--then I would say the "sureness" reverses and my math must have been wrong somehow (maybe probability doesn't work after all?Bob Ross

    I pulled this one quote out of your exceptional paragraph, because I think it allows an anchor to explore all of your propositions. Probability is based off of applicable knowledge. When I say there is a 4 out of 52 chance of drawing a jack, part of the applicable knowledge is that the deck has been shuffled in a way that cannot be determined. The reality is, we applicably know the deck is deterministic once the shuffle is finished. If we turned the deck around, we could see what the card order is. The probability forms from our known applicable limits, or when we cannot see the cards.

    In the case that someone pulled an ace every time someone shuffled the cards, there is the implicit addition of these limits. For example: "The person shuffling doesn't know the order of the cards." "The person shuffling doesn't try to rig the cards a particular way." "There is no applicable knowledge that would imply an ace would be more likely to be picked than any other card."

    In a situation where probability rests on these underlying reasons, but extremely unlikely occurrences happen, like an ace being drawn every time someone picks from a shuffled deck, we have applicable knowledge challenging our probable induction. Applicable knowledge always trumps inductions, so at that point we need to re-examine our underlying reasons for our probability, and determine whether they still hold.
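
    To put a rough number on "extremely unlikely" (illustrative arithmetic only, assuming a fair shuffle and independent draws):

        # Chance of drawing an ace on every one of N draws under the fair-shuffle assumptions.
        p_ace = 4 / 52
        for n in (2, 5, 10):
            print(n, p_ace ** n)   # ~5.9e-03, ~2.7e-06, ~7.2e-12

    After only ten straight aces the fair-shuffle assumptions are carrying odds of roughly one in a hundred billion, which is exactly the point at which the applicable results warrant re-examining the assumptions rather than the arithmetic.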

    We could do several tests to ascertain that we have a situation in which our probability holds. Perhaps pass the deck to be shuffled to several different people who are blindfolded. Test the cards for strange substances. Essentially ensure that the deck, the shuffle, and the pick all actually have the context for the probability to be a sound induction.

    It could be that physics changes one day and it turns out that an ace will always end up at the top of any shuffled deck. At that point, we have to retest our underlying applicable knowledge, and discover that some of it no longer holds. We would have to make new conclusions. Fortunately, what would not break is how we applicably deduce, and the hierarchy of inductions.

    However, I don't think we should have to limit our examinations to their specific contexts: I think it is a hierarchy of hierarchies. You are right about the first hierarchy: you can determine the cogency based off of possibility vs probability vs plausibility vs irrationality. However, we don't need to stop there: we can, thereafter, create a hierarchy of which contextual claims we are more "sure" of and which ones we are less "sure" of (it is like a hierarchy within a spectrum).Bob Ross

    An excellent point that I think can be applied to another aspect: context. Within the context of a person, I believe we have a hierarchy of inductions. But what about when two contexts collide? Can we determine a hierarchy of contexts? I believe I've mentioned that we cannot force a person to use a different context. Essentially, contexts are used for what we want out of our reality. Of course, this can apply to inductions as well. Despite a person's choice, it does not negate that certain inductions are more rational. I would argue the same applies to contexts.

    This would be difficult to measure, but I believe one can determine if a context is "better" than another based on an evaluation of a few factors.

    1. Resource expenditure
    2. Risk of harm within the context
    3. Degree of harm within the context

    1. Resource expenditures are the cost of effort in holding a specific context. This can be time, as well as societal, mental, and physical effort, and much more. As we've discussed, the more specific and detailed one's distinctive knowledge, the more resource expenditure it will require to applicably know within that distinctive context.

    2. The risk of harm would be the likelihood that one would be incorrect, and the consequences of being incorrect. If my distinctive context is very simple, I may come to harm more often in reality. For example, let's say there are 2 types of green round fruits that grow in an area. One is nutritious, the other can be eaten, but will make you sick. If you have a distinctive context that cannot distinguish between the two fruits, you are more likely to come to harm. If you have a more specific distinctive context that enables you to identify which fruit is good, and which is not, you decrease the likelihood you will come to harm.

    3. The degree of harm would be the cost for making an incorrect decision based on the context one holds. If, for example, I have a very simple distinctive context that means I fail at making good decisions in a card game with friends, the degree of harm is very low. No money is lost, and we're there to have a good time. If, however, I'm playing high-stakes poker for a million-dollar pot, the opportunity cost of losing is staggering. A context that increases the likelihood I will lose should be thrown out in favor of a context that gives a higher chance of winning. Or back to fruit. Perhaps one of the green round fruits simply doesn't taste as good as the other. The degree of harm is lower, and may not be enough for you to expend extra resources in identifying the two fruits as having separate identities.

    I believe this could all be evaluated mathematically. Perhaps it would not be so useful to most people, but it could be very important in terms of AI, large businesses, or incredibly major and important decisions. As such, this begins to seep out of philosophy and into math and science, which, if the theory is sound, would be the next step.
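    To gesture at what such an evaluation might look like (this is only a rough sketch with made-up numbers, not part of the theory itself), the three factors could be folded into a single expected cost per context, where a lower score marks the "better" context:

        def context_score(resource_cost, risk_of_harm, degree_of_harm):
            # One illustrative way to combine the three factors: the upkeep of
            # holding the context, plus the chance of being wrong times how bad
            # being wrong would be. Lower is better.
            return resource_cost + risk_of_harm * degree_of_harm

        # The two green fruits from above, with invented values.
        simple = context_score(resource_cost=1.0, risk_of_harm=0.5, degree_of_harm=8.0)
        detailed = context_score(resource_cost=3.0, risk_of_harm=0.05, degree_of_harm=8.0)

        print(simple, detailed)  # 5.0 vs 3.4 -> the more detailed context wins here

    Whether the factors should combine additively, multiplicatively, or in some other way is exactly the kind of question that would belong to that math and science stage.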

    Really great points again Bob! Holidays are on the horizon, so there may be a lull between writings this week, but should resume after Christmas. I hope you have a nice holiday season yourself!
  • Bob Ross
    1.7k
    Hello @Philosophim,

    First and foremost, Merry Christmas! I hope you have (had) a wonderful holiday! I apologize as I haven't had the time to answer you swiftly: I wanted to make sure I responded in a substantive manner.

    There is a lot of what you said that I wholeheartedly agree with, so I will merely comment on some things to spark further conversation.

    I believe immediateness is a property of "possibility". Another is "repetition". A possibility that has been repeated many times, as well as its immediateness in memory, would intuitively seem more cogent than something that has occurred less often and farther in the past. Can we make that intuitiveness reasonable?

    I think you are right in saying immediateness is a property of possibility, and therefrom, also repetition. Moreover, I would say that immediateness, in a general sense, is "reasonableness". What we use to reason, at a fundamental level, is our experiences and memories, and we weigh them. I think of it, in part, like Hume's problem of induction: we have an ingrained habit of weighing our current experiences and our memories to determine what we hold as "reality", just like we have an ingrained sense of the future resembling the past. I don't think we can escape it at a fundamental level.

    For example, imagine all of your memories are of a life that you currently find yourself not in: your job, your family, your intimate lover, your hobbies, etc. within your memories directly contradict your current experiences of life (like, for instance, all of your pictures that you are currently looking at explicitly depict a family that contradicts the family you remember: they don't even look similar at all, they have different names, they aren't even the same quantity of loved ones you remember). In the heat of the moment, the more persistently your experiences continue to contradict your memories, the more likely you are to assert the former over the latter. But, on the contrary, if you only experience for, let's say, 3 minutes this alternate life and then are "sucked back into" the other one, which aligns with your memories, then you are very likely to assert that your memories were true and you must have been hallucinating. However, 3 years into experiencing that which contradicts your memories will most certainly revoke any notion that your original memories are useful, lest you live in a perpetual insanity. That would be my main point: it is not really about what is "true", but what is "useful" (or relevant). Even if your original memories, in this case, are "true", they definitely aren't relevant within your current context.

    This is what I mean by "weighing them", and we don't just innately weigh one over the other but, rather, we also compare memories to other memories. Although I am not entirely sure, I think that we innately compare memories to other memories in terms of two things: (1) quantity of conflicting or aligning memories and (2) current experience. However, upon further reflection, it actually seems that we are merely comparing #1, as #2 is actually just #1: our "current" experience is in the past. By the time the subject has determined that they have had an experience (i.e. they have reasoned their way into a conclusory thought amongst a string of preceding thoughts that they are, thereby, convinced of) they are contemplating something that is in the past (no matter how short a duration of time from when it occurred). Another way of putting it is: once I've realized that the color of these characters, which I am typing currently, is black, it is in the past. By the time I can answer the question of "Is my current experience I am having in the present?", I am contemplating a very near memory. My "current" mode of existence is simply that which is the most recent of past experiences: interpretations are not live in a literal sense, but only in a contextual sense (if "present" experience is going to mean anything, it is going to relate to the most recent past experience, number 1 in the queue).
    The reason I bring this up is because when we compare our "current" experience to past experiences, we are necessarily comparing a past experience to a past experience, but, most notably, one is more immediate than the other: I surely can say that what is most recent in the queue of past experiences, which necessarily encompasses "life" in general (knowledge)(discrete experiences, applicable knowledge, and discrete knowledge), is more "sure" than any of the past experiences that reside before it in the queue of memories. However, just because I am more "sure" of it doesn't make it more trustworthy in an objective sense: it becomes more trustworthy the more it aligns with the ever-prepending additions of new experiences.

    For example, if I have, hypothetically speaking, 200 memories and the oldest 199 contradict the newest 1, then the determining factor is necessarily, as an ingrained function of humanity, how consistently each of those two contradicting subcategories compares to the continual prepending of new experiences (assuming 1 is considered the "current" experience and 2 is the most recent experience after 1 and so forth). But, initially, since the quantity of past experiences is overwhelmingly aligned and only contradicted by the most recent one, I would assert the position that my past 199 experiences are much more "true" (cogent). However, as I continue to experience, at a total of 500 experiences, if the past 300 most recent experiences align with that one experience that contradicted the 199, then the tide has probably turned. However, if I, on the next experience after that one contradicting experience, start experiencing many things that align with those 199, then I would presume that that one was wrong (furthermore, when I previously stated I would initially claim the 199 to be more “true” than the 1, this doesn’t actually happen until I experience something else, where that one contradictory experience is no longer the most recent, and the now newest experience is what I innately compare to the 1 past contradictory one and the other 199).

    Now, you can probably see that this is completely contextual as well: there are a lot of factors that go into determining which is more cogent. However, the "current" experiences are always a more "sure" fact and, therefore, the more recent the more "sure". For example, if I have 200 past experiences and the very next 10 are hallucinations (thereby causing a dilemma between two "realities", so to speak), I will only be able to say the original, older 200 past experiences were the valid ones if I resume experiencing in a way that aligns with those experiences and contradicts those 10 hallucinated experiences. If I started hallucinating right now (although we, in hindsight, know it, let's say I don't), then I will never be able to realize that they are really false representations until I start having "normal" experiences again. Even if I have memories of my "normal experiences" that contradict my "current" hallucinated ones, I won't truly deem it so (solidify my certainty that it really was hallucinated) until the hallucinated chain of experiences is broken with ones that align with my past experiences of "normal experiences". Now, I may have my doubts, especially if I have a ton of vivid memories of "normal experiences" while I am still hallucinating, but the more it goes on the more it seems as though my "normal experiences" were the hallucinated ones while the hallucinated ones are the "normal experiences".

    I'm not saying that, necessarily, in this situation, I would be correct in inverting the terms there, but it seems as though only what is relevant to the subject is meaningful: even if it is the case that my memories of "normal experiences" are in actuality normal experiences, if I never experience such "normal experiences" again, then, within context, I would be obliged to refurbish my diction to something that is more relevant to my newly acquired hallucinated situation. Just food for thought.
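    (A toy model, purely illustrative and not part of the argument above: give every remembered experience a recency weight that decays with age, and tally the weight behind each of the two conflicting "realities". The decay rule and numbers are arbitrary choices.)

        def weigh_realities(experiences, half_life=50):
            # experiences: oldest -> newest, each labelled "A" or "B" for which of
            # two conflicting "realities" it aligns with. Newer experiences get
            # exponentially more weight; the half_life value is an arbitrary choice.
            totals = {"A": 0.0, "B": 0.0}
            for age, label in enumerate(reversed(experiences)):
                totals[label] += 0.5 ** (age / half_life)
            return totals

        # 199 old experiences of one life, 1 contradicting experience, then 300
        # more that keep aligning with the contradiction.
        history = ["A"] * 199 + ["B"] * 301
        print(weigh_realities(history))  # "B" ends up far ahead once the new experiences pile up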

    I'll clarify plausibility. A plausibility has no consideration of likelihood, or probability. Plausibility is simply distinctive knowledge that has not been applicably tested yet. We can create plausibilities that can be applicably tested, and plausibilities that are currently impossible to applicably test. For example, I can state, "I think its plausible that a magical horse with a horn on its head exists somewhere in the world." I can then explore the world, and discover that no, magical horses with horns on their head do not exist.

    I could add things like, "Maybe we can't find them because they use their magic to become completely undetectable." Now this has become an inapplicable plausibility. We cannot apply it to reality, because we have set it up to be so. Fortunately, a person can ignore such plausibilities as cogent by saying, "Since we cannot applicably know such a creature, I believe it is not possible that they exist." That person has a higher tier of induction, and the plausibility can be dismissed as being less cogent.

    Although I was incorrect in saying plausibility is likelihood, I still have a bit of a quarrel with this part: I don't think that all inapplicable plausibilities are as invalid as you say. Take that tree example from a couple of posts ago: we may never be able to applicably test to see if the tree is there, but I can rationally hold that it is highly plausible that it is. The validity of a plausibility claim is not about if it is directly applicable to reality or not, it is about (1) how well it aligns with our immediate knowledge (our discrete experiences, memories, discrete knowledge, and applicable knowledge) and (2) its relevancy to the subject. For this reason, I don't think the claim that unicorns exist can be effectively negated by claiming that it is not possible that they exist.

    A winged horse-like creature with a horn in the middle of its skull is possible, in that it doesn’t defy any underlying physics or fundamental principles, and, therefore, it is entirely possible that there is a unicorn out there somewhere in the universe (unless, in your second example, it has a magical power that causes it to be undetectable—however the person could claim that it is a natural cloaking instead of supernatural, in a magical sense, just like how we can’t see the tiny bacteria in the air, maybe the unicorn is super small). For me, it isn’t, in the case of a unicorn, that it is not possible that makes me not believe that they exist, it is (1) its utter irrelevancy to the subject and (2) the complete lack of positive evidence for it. I am a firm believer in defaulting to not believing something until it is proven to be true, and so, naturally, I don’t believe unicorns exist until we have evidence for them (I don’t think possibility is strong evidence for virtually anything—it is more of just a starting point). This goes hand-in-hand with my point pertaining to plausibility: the lack of positive evidence for a unicorn’s existence goes hand-in-hand, directly, with our immediate forms of knowledge. If nobody has any immediate forms of knowledge pertaining to unicorns (discrete experiences, applicable knowledge, and discrete knowledge), then, for me, it doesn’t exist—not because it actually doesn’t exist (in the case that it is not a possibility), but because it has no relevancy to me or anyone else (anything that we could base off of unicorns would be completely abstract—a combination of experiences and absences/negations of what has been experienced in ways that produce something that the subject hasn’t actually ever experienced before).

    Now, I think this gets a bit tricky because someone could claim that their belief in a unicorn existing makes them happier and, thereby, it is relevant to them. I think this becomes a contextual difference, because, although I would tell them “you do you”, I would most certainly claim that they don’t “know” unicorns exist (and, in this case, they may agree with me on that). You see, this gets at what it means to be able to “applicably know” something: everything a subject utilizes is applicable in one way or another. If the person tells me that “I don’t know if unicorns exist, but I believe that they do because it makes me happy”, they are applying their belief to the world without contradiction with respect to their happiness: who am I to tell them to stop being happy? However, I would say that they don’t “know” it (and they agreed with me on that in this case), so applying a belief to reality is not necessarily a form of knowledge (to me, at least).
    But in a weird way, it actually is, because it depends on what they are claiming to know. In my previous example, they aren’t claiming to “know” unicorns exist, but they are implicitly claiming to “know” that believing in it makes them happier and I think that is a perfectly valid application of belief that doesn’t contradict reality (it just isn’t pertaining to whether the unicorn actually exists or not). Now, if I were to notice some toxic habits brewing from their belief in unicorns, then I could say that they are holding a contradiction because the whole point was to be happier and toxic habits don’t make you happier (so basically I would have to prove that they are not able to apply their “belief in unicorns=happier” without contradiction). Just food for thought (:

    In the case that someone pulled an ace every time someone shuffled the cards, there is the implicit addition of these limits. For example, "The person shuffling doesn't know the order of the cards." "The person shuffling doesn't try to rig the cards a particular way." "There is no applicable knowledge that would imply an ace would be more likely to be picked than any other card."

    In a situation where probability has these underlying reasons, but extremely unlikely occurrences happen, like an ace being drawn every time someone picks from a shuffled deck, we have applicable knowledge challenging our probable induction. Applicable knowledge always trumps inductions, so at that point we need to re-examine our underlying reasons for our probability, and determine whether they still hold.

    I completely agree with you here: excellent assessment! My main point was just that it is based off of one’s experiences and memories: that is it. If we radically change the perspective on an idea that we hold as malleable (such as an “opinion”), such that it is as concrete as ever in our experiences and memories, then we are completely justified in equating it with what we currently deem to be concrete (such as gravity).

    I believe I've mentioned that we cannot force a person to use a different context. Essentially contexts are used for what we want out of our reality. Of course, this can apply to inductions as well. Despite a person's choice, it does not negate that certain inductions are more rational. I would argue the same applies to contexts.

    I would, personally, rephrase “Despite a person’s choice, it does not negate that certain inductions are more rational” to “Despite a person’s choice, it does not negate that certain inductions are more rational within a fundamentally shared subjective experience”. I would be hesitant to state that one induction is actually absolutely better than another due to the fact that they only seem that way because we share enough common ground with respect to our most fundamental subjective experiences. One day, there could be a being that experiences with no commonalities with me or you, a being that navigates in a whole different relativity (different scopes/contexts) than us and I wouldn’t have the authority to say they were wrong—only that they are wrong within my context as their context (given it shares nothing with me) is irrelevant to my subjective experience.

    This would be difficult to measure, but I believe one can determine if a context is "better" than another based on an evaluation of a few factors.

    I love your three evaluative principles for determining which context is “better”! However, with that being said, I think that your determinations are relative to subjects that share fundamental contexts. For example, your #3 (degree of harm) principle doesn’t really address two ideas: (1) the subject may not share your belief that one ought to strive to minimize the degree of harm and (2) the subject may not care about the degree of harm pertaining to other subjects due to their actions (i.e. psychopaths). To put it bluntly, I think that humans become cognitively convinced of something (via rudimentary reason) and it gets implemented if enough people (with the power in society—or the ability to seize enough power) are also convinced of it (and I am using “power” in an incredibly generic sense—like a Foucault kind of complexity, not just brute force or guns or something).

    That’s why society is a wave, historically speaking, and 100 generations later we condemn our predecessors for not being like us (i.e. for the horrific things they did), but why would they be like us? We do not share as much in common contextually with them as we do with a vast majority of people living within the present with us (or people that lived closer to our generation). I think that a lot of the things that they did 200 years ago (or really pick any time frame) were horrendous: but were they objectively wrong? I think Nietzsche put it best: “there are no moral phenomena, just moral interpretations of phenomena”. I am cognitively convinced that slavery is abhorrent, does that make it objectively wrong? The moral wrongness being derived from cognition (and not any objective attribute of the universe) doesn’t make slavery any more “right”, does it? I think not. The reason we don’t have slavery anymore (or at least at such a large scale as previous times) is because enough people who held sufficient power (or could seize it) were also convinced that it is abhorrent and implemented that power to make a change (albeit ever so slow as it was).

    My point is that, even though I agree with you on your three points, you won’t necessarily be able to convince a true psychopath to care about his/her neighbors, and their actions are only “wrong” relative to the subject uttering it. We have enough people that want to prevent psychopaths from doing “horrible” things (a vast majority of people can feel empathy, which is a vital factor) and, therefore, psychopaths get locked up. I am just trying to convey that everything is within a context (and I think you agree with me on this, but we haven’t gone this deep yet, so I am curious as to what you think). It is kind of like the blue glasses thought experiment: if we all were born with blue glasses ingrained into our eyeballs, then the “color spectrum” would consist of different colors of blue and that would be “right” relative to our subjective experience. However, if, one day, someone was born with eyes that we currently have, absent of blue glasses, then their color spectrum would be “right” for them, while our blue-shaded color spectrum would be “right” for us.

    Sadly, this is where “survival of the fittest” (sort of) comes into play: if there is a conflict where one subjective experience of the color spectrum needs to be deemed the “right” one, then the one held by those who hold the most “power” ultimately will determine their conclusion to be the “truth”: that is why we call people who see green and red flip-flopped “color blind”, when, in reality, we have no clue who is actually absolutely “right” (and I would say we can’t know, and that is why each are technically “blind” with respect to the other—we just only strictly call them “color blind” because the vast majority ended up determining their “truth” to be the truth). When we say “you can’t see red and green correctly”, this is really just subjectively contingent on how we see color.

    I think that my main point here is that absolutely determining which context is better is just as fallacious, in my mind, as telling ourselves that we must determine whether our “hand” exists as we perceive it or whether it is just mere atoms (or protons, or quarks) and that we must choose one: they don’t contradict each other, nor do fundamental contexts. Yes, we could try to rationalize who has a better context (and I think your three points on this are splendid!), but that also requires some common ground that must be agreed upon and that means that, in some unfortunate cases, it really becomes a “does this affect me where I need to take action to prevent their context?” (and “do I have enough power to do anything about it?” or “can I assemble enough power, by the use of other subjects that agree with me, to object to this particular person’s context”).

    I look forward to your response!
    Bob
  • Philosophim
    2.6k
    Hello Bob! I'm back from vacation. I hope the holidays found you well.

    Your immediateness section is spot on! Our chain of "trusting" memories is the evaluation of possibilities and plausible beliefs. Having a memory of something doesn't necessarily mean that memory is of something we applicably knew. Many times, it's plausible beliefs that have not been applicably tested. While I agree that immediateness is an evaluative tool of possibilities (that which has been applicably known at least once), an old possibility is still more cogent than a newer plausibility.

    Plausibility does not use immediateness for evaluation, because immediateness is based on the time from which the applicable knowledge was first gained. Something plausible has never been applicably known, so there is no time from which we can state it is relevant.

    Moreover, I would say that immediateness, in a general sense, is "reasonableness".Bob Ross

    The reasonableness is because it is something we have applicably known, and recently applicably known. I say this, because it is easy to confuse plausibilities and possibilities together. Especially when examining the string of chained memories, it is important to realize which are plausibilities, and which are possibilities. If you have a base possibility that chains into a plausibility, you might believe the end result is something possible, when it is merely plausible.

    So taking your example of a person who has lived with different memories (a fantastic example), we can detail it to understand why immediateness is important. It is not that the memories are old. It is that that which was once possible is now no longer possible when you apply your distinctive knowledge to your current situation.

    We don't even have to imagine the fantastical to evaluate this. We can look at science. At one time, what was determined as physics is different than what scientists have discovered about physics today. We can look back into the past, and see that many experiments revealed what was possible, while many theories, or plausibilities were floating around intellectual circles, like string theory.

    However, as plausibilities are applied to reality, the rejects are thrown away, and the accepted become possibilities. Sometimes these possibilities require us to work back up the chain of our previous possibilities, and evaluate them with our new context. Sometimes, this revokes what was previously possible, or, it could be said, forces us to switch context. That which was once known within a previous context of time and space can no longer be known within this context.

    With this clarified, this will allow me to address your second part about plausibility.

    Take that tree example from a couple of posts ago: we may never be able to applicably test to see if the tree is there, but I can rationally hold that it is highly plausible that it is.Bob Ross

    Is it possible that the tree is not there anymore, or is it plausible? If you applicably know that trees can cease to be, then you know it is possible that a tree can cease to be. It is plausible that the tree no longer exists, but this plausibility is based on a possibility. The devil is in the details, and the devil understands that the best way to convince someone of a lie is to mix in a little truth.

    The reality is, this is a plausibility based off of a possibility. Intuitively, this is more reasonable than a plausibility based off of a plausibility. For example, it's plausible that trees have gained immortality, therefore the tree is still there. This intuitively seems less cogent, and I believe the reason why is the chain of comparative logic that it's built off of.

    But the end claim, that one particular tree is standing, vs not still standing, is a plausibility. You can rationally hold that it is plausible that it is still standing, but how do we determine if one plausibility is more rational than another? How do we determine if one possibility, or even one's applicable knowledge is more cogent than another? I believe it is by looking at the logic chain that the plausibility is linked from.
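    (A quick illustrative sketch of that idea, with a scoring rule of my own choosing rather than anything defined in the theory: rank each link by the hierarchy of inductions and compare the chains.)

        # Higher rank = more cogent, per the hierarchy of inductions.
        RANK = {"probability": 3, "possibility": 2, "plausibility": 1, "irrational": 0}

        def chain_cogency(chain):
            # Score a claim by the average strength of the inductions it is built
            # from; averaging is just one illustrative way to compare chains.
            return sum(RANK[link] for link in chain) / len(chain)

        # A plausibility built on a possibility ("trees can cease to be")...
        tree_still_standing = ["possibility", "plausibility"]
        # ...versus a plausibility built on another plausibility ("trees are immortal").
        immortal_tree = ["plausibility", "plausibility"]

        print(chain_cogency(tree_still_standing) > chain_cogency(immortal_tree))  # True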

    The validity of a plausibility claim is not about if it is directly applicable to reality or not, it is about (1) how well it aligns with our immediate knowledge (our discrete experiences, memories, discrete knowledge, and applicable knowledge) and (2) its relevancy to the subject. For this reason, I don't think the claim that unicorns exist can be effectively negated by claiming that it is not possible that they exist.Bob Ross

    I think the comparative chains of logic describe how (1) it aligns with our immediate knowledge and inductive hierarchies. I believe (2) relevancy to the subject can be seen as making our distinctive knowledge more accurate.

    Going to your unicorn example, you may say it's possible for an animal to have a horn, possible for an animal to have wings, therefore it is plausible that a unicorn exists. But someone might come along with a little more detail and state, while it's possible that animals can have horns on their head, so far, no one has discovered that it's possible for a horse to. Therefore, it's only plausible that a horse would have wings or a horn, therefore it is only plausible that a unicorn exists. In this case, our more detailed context allows us to establish that a unicorn is a concluded plausibility, based off of two plausibilities within this more specific context.

    Logically, what is plausible is not yet possible. Therefore I can counter by stating, "It is not possible for a horse to have wings or horns grow from its head. Therefore it is not possible that a unicorn exists in the world."

    I am a firm believer in defaulting to not believing something until it is proven to be true, and so, naturally, I don’t believe unicorns exist until we have evidence for themBob Ross

    I think this fits with your intuition then. What is plausible is something that has no applicable knowledge. It is more rational to believe something which has had applicable knowledge, the possible, over what has not, the plausible.

    Now, I think this gets a bit tricky because someone could claim that their belief in a unicorn existing makes them happier and, thereby, it is relevant to them.Bob Ross

    Hopefully the above points have shown why a belief in their existence, based on their happiness of having that belief, does not negate the hierarchy of deductive application and induction. Recall that to applicably know something, they must have a definition, and must show that definition can exist in the world without contradiction. If they give essential properties, such as a horse with a horn from its head and wings, they must find such a creature to say they have applicable knowledge of it.

    Insisting it exists without applying that belief to reality, is simply the belief in a plausibility. Happiness may be a justification for why they believe that plausibility, but it is never applicable knowledge.
    Happiness of the self does not fulfill the discovery of the essential properties of a horn and wings on a horse in the world.

    I would, personally, rephrase “Despite a person’s choice, it does not negate that certain inductions are more rational” to “Despite a person’s choice, it does not negate that certain inductions are more rational within a fundamentally shared subjective experience”.Bob Ross

    I agree with the spirit of this, but want to be specific on the chain comparison within a context. What is applicable, and the hierarchy of inductions, never changes. What one deduces or induces is based upon the context one is in. Something that is possible in a specific context may only be plausible in a more detailed one, as noted earlier. But what is possible in that context is always more rational than what is plausible in that context.

    For example, your #3 (degree of harm) principle doesn’t really address two ideas: (1) the subject may not share your belief that one ought to strive to minimize the degree of harm and (2) the subject may not care about the degree of harm pertaining to other subjects due to their actions (i.e. psychopaths).Bob Ross

    I agree here, because no matter what formula or rationale I set up for a person to enter into a particular context, they must decide to enter into that particular context of that formula or rationale! This means that yes, there will be creatures that are not able to grasp certain contexts, or simply decide not to agree with them. This is a fundamental freedom of every thinking thing.

    So then, there is one last thing to cover: morality. You hit the nail on the head. We need reasons why choosing to harm other people for self gain is wrong. I wrote a paper on morality long ago, and got the basic premises down. The problem was, I was getting burned out of philosophy. I couldn't get people to discuss my knowledge theory with me, and I felt like I needed that to be established first. How can we know what morality is if we cannot know knowledge?

    Finally, it honestly scared me. I felt that if someone could take the fundamental tenets of morality I had made, they could twist it into a half-truth to manipulate people. If you're interested in hearing my take on morality, I can write it up again. Perhaps my years of experience since then will make me see it differently. Of course, let's finish here first.

    That would be my main point: it is not really about what is "true", but what is "useful" (or relevant).Bob Ross

    I just wanted to emphasize this point. Applicable knowledge cannot claim it is true. Applicable knowledge can only claim that it is reasonable.

    And with that, another examination done! Fantastic points and thoughts as always.
  • Bob Ross
    1.7k
    Hello @Philosophim,

    Absolutely splendid post! I thoroughly enjoyed reading it!

    Upon further reflection, I think that we are using the terms "possibility" and "plausibility" differently. I am understanding you to be defining a "possibility" as something that has been experienced at least once before and a "plausibility" as something that has not been applicably tested yet. However, I was thinking of "possibility" more in terms of its typical definition: "Capable of happening, existing, or being true without contradicting proven facts, laws, or circumstances". Furthermore, I was thinking of "plausibility" more in the sense that it is something that is not only possible, but has convincing evidence that it is the case (but hasn't been applicably tested yet). I think that you are implicitly redefining terms, and that is totally fine if that was the intention. However, I think that to say something is "possible" is to admit that it doesn't directly contradict reality in any way (i.e. our immediate forms of knowledge) and has nothing directly to do with whether I have ever experienced it before. For example, given our knowledge of colors and the human eye, I can state that it is possible that there are other shades of colors that we can't see (but with better eyes we could) without ever experiencing any new shades of colors. It is possible because it doesn't contradict reality, whereas iron floating on water isn't impossible because I haven't witnessed it but, rather, because my understanding of densities (which are derived from experiences of course) disallows such a thing to occur. Moreover, to state that something is "plausible", in my mind, implies necessarily that it is also "possible"--for if it isn't possible then that would mean it contradicts reality and, therefore, it cannot have reasonable evidence for it being "plausible".

    Now, don't get me wrong, I think that your responses were more than adequate to convey your point, I am merely portraying our differences in definitions (semantics). I think that your hierarchy, which determines things that are derived more closely to "possibilities" to be more cogent, is correct in the sense that I redefine a "possibility" as something experienced before (or, more accurately, applicably known). However, I think that you are really depicting that which is more immediate to be more cogent and not that which is possible (because I would define "possibility" differently than you). Likewise, when you define a "plausibility" to be completely separate from "possibility", I wouldn't do that, but I think that the underlying meaning you are conveying is correct.

    an old possibility is still more cogent than a newer plausibility.
    I would say: that which is derived from a more immediate source (closer to the processes of perception, thought, and emotion--aka experience) is more cogent than something that is derived from a less immediate source.

    Plausibility does not use immediateness for evaluation, because immediateness is based on the time from which the applicable knowledge was first gained.

    Although, with your definitions in mind, I would agree, I think that plausibility utilizes immediateness just as everything else: you cannot escape it--it is merely a matter of degree (closeness or remoteness).

    So taking your example of a person who has lived with different memories (a fantastic example), we can detail it to understand why immediateness is important. It is not that the memories are old. It is that that which was once possible is now no longer possible when you apply your distinctive knowledge to your current situation.

    I agree! But because possibility is derived from whether it contradicts reality--not whether I have experienced it directly before. Although I may be misunderstanding you, if we define possibility as that which has been applicably known before, then, in this case, it is still possible although one cannot apply it without contradiction anymore (because one would have past experiences of it happening: thus it is possible). However, if we define possibility in the sense that something doesn't contradict reality, then it can be possible with respect to the memories (in that "reality") and not possible with respect to the current experiences (this "reality") because we are simply, within the context, determining whether the belief directly contradicts what we applicably and distinctly know.

    We don't even have to imagine the fantastical to evaluate this. We can look at science. At one time, what was determined as physics is different than what scientists have discovered about physics today. We can look back into the past, and see that many experiments revealed what was possible, while many theories, or plausibilities were floating around intellectual circles, like string theory.

    Although I understand what you are saying and it makes sense within your definitions, I would claim that scientific theories are possible and plausible. If it wasn't possible, then it isn't plausible because it must first be possible to be eligible to even be considered plausible. However, I fully agree with you in the sense that we are constantly refining (or completely discarding) older theories for better ones: but this is because our immediate forms of knowledge now reveal to us that those theories contradict reality in some manner and, therefore, are no longer possible (and, thereby, no longer plausible either). Or, we negate the theory by claiming it no longer meets our predefined threshold for what is considered plausible, which in no way negates its possibility directly (although maybe indirectly).

    However, as plausibilities are applied to reality, the rejects are thrown away, and the accepted become possibilities. Sometimes these possibilities require us to work back up the chain of our previous possibilities, and evaluate them with our new context. Sometimes, this revokes what was previously possible, or, it could be said, forces us to switch context. That which was once known within a previous context of time and space can no longer be known within this context.

    I think you are sort of alluding to what I was trying to depict here, but within the idea that an applied plausibility can morph into a possibility. However, I don't think that only things I have directly experienced are possible, or that what I haven't directly experienced is impossible, it is about how well it aligns with what I have directly experienced (immediate forms of knowledge). Now, I may be just conflating terms here, but I think that to state that something is plausible necessitates that it is possible.

    Is it possible that the tree is not there anymore, or is it plausible?

    Both. If I just walked by the tree 10 minutes ago, and I claim that it is highly plausible that it is still there, then I am thereby also admitting that it is possible that it is there. If it is not possible that it is there, then I would be claiming that the tree being there contradicts reality but yet somehow is still plausible. For example, if I claimed that it is plausible that the tree poofed into existence out of thin air right now (and I never saw it; I'm just hypothesizing it from my room which has no access to the area of land it allegedly poofed onto), then you would be 100% correct in rejecting that claim because it is not possible, but it is not possible because it contradicts every aspect of the immediate knowledge I have. However, if I claimed that it is highly plausible that a seed, in the middle of spring, in an area constantly populated with birds and squirrels, has been planted (carried by an animal, not purposely planted by humans) in the ground and will someday sprout a little tree, I am claiming that it is possible that this can occur and, not only that, but it is highly "likely" (not in a probabilistic sense, but based off of immediate knowledge) that it will happen. I don't have to actually have previously experienced this process in its entirety: if I have the experiential knowledge that birds can carry seeds in their stomachs (which get pooped out, leaving the seed in fine condition) and that a seed dropped on soil, given certain conditions, can implant and sprout, then I can say it is possible without ever actually experiencing a bird poop a seed out onto a field and it, within the right conditions, sprout.

    A more radical example is the classic teapot floating around (I can't quite remember which planet) Jupiter. If the teapot doesn't violate any of my immediate forms of knowledge, then it is possible; however, it may not be plausible as I haven't experienced anything like it and just because the laws allow it doesn't mean it is a reasonable (or plausible) occurrence to take place. Assuming the teapot doesn't directly contradict reality, then I wouldn't negate a belief in it based off of it not being possible but, rather, based on it not being plausible (and, more importantly, not relevant to the subject at all).

    The reality is, this is a plausibility based off of a possibility. Intuitively, this is more reasonable than a plausibility based off of a plausibility. For example, it's plausible that trees have gained immortality, therefore the tree is still there. This intuitively seems less cogent, and I believe the reason why is the chain of comparative logic that it's built off of.

    In this specific case, I would claim that trees being immortal is not plausible because it contradicts all my immediate knowledge pertaining to organisms: they necessarily have an expiration to their lives. However, let's say that an immortal tree didn't contradict reality: then I would still say it is implausible, albeit possible, because I don't have any experiences of it directly or indirectly in any meaningful sense. If the immortality of a tree could somehow be correlated to a meaningful, relevant occurrence that I have experienced (such as, even if I haven't seen a cell, my indirect contact with the consequences of the concept of "cells"), then I would hold that it is "true". If it passes the threshold of a certain pre-defined quantity of backed evidence, then I would claim it is, thereafter, considered "plausible".

    But the end claim, that one particular tree is standing, vs not still standing, is a plausibility.

    Upon further examination, I don't think this is always the case. It is true that it can be a plausibility, but it is, first and foremost, a possibility. Firstly, I must determine whether the tree being there or not is a contradiction to reality. If it is, then I don't even begin contemplating whether it is plausible or not. If it isn't, then I start reasoning whether I would consider it plausible. If I deem it not plausible, after contemplation, it is still necessarily possible, just not plausible.

    You can rationally hold that it is plausible that it is still standing, but how do we determine if one plausibility is more rational than another?

    By agreeing upon a bar of evidence and rationale it must pass to be considered such. For example, take a 100-yard dash. We can both only call a contestant's run time "really fast", "fast", "slow", or "really slow" if we agree upon thresholds: I would say the same is true regarding plausibility and implausibility. I might constitute the fact that I saw the tree there five minutes ago as a characterization that it is "highly plausible" that it is there, whereas you may require further evidence to state the same.
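    (Purely as an illustration of the shared-threshold point, with made-up cut-offs: the labels only communicate something if both parties use the same table.)

        def label_run_time(seconds, thresholds):
            # thresholds: (upper_bound_in_seconds, label) pairs in ascending order.
            for upper_bound, label in thresholds:
                if seconds <= upper_bound:
                    return label
            return "really slow"

        shared = [(10.5, "really fast"), (12.0, "fast"), (14.0, "slow")]

        print(label_run_time(11.2, shared))  # "fast"
        print(label_run_time(15.0, shared))  # "really slow"

    Two people using different tables would attach different labels to the same run, just as two people with different evidential thresholds attach "plausible" and "implausible" to the same claim.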

    I believe it is by looking at the logic chain that the plausibility is linked from.

    Although I have portrayed some differences, hereforth, between our concepts of possibility and plausibility, I would agree with you here. However, I think that it is derived from the proximity of the concept to our immediate forms of knowledge, whereas I think yours, in this particular case, is based off of whether it is closer to a possibility or not (thereby necessarily, I would say, making something that is possible and something that is plausible mutually exclusive).

    I think the comparative chains of logic describe how (1) it aligns with our immediate knowledge and inductive hierarchies. I believe (2) relevancy to the subject can be seen as making our distinctive knowledge more accurate.

    Again, I agree, but I would say that the "chains of logic" here is fundamentally the proximity to the immediate forms of knowledge (or immediateness as I generally put it) and not necessarily (although I still think it is a solid idea) comparing mutually exclusive types, so to speak, such as possibility and plausibility, like I think you are arguing for.

    Going to your unicorn example, you may say it's possible for an animal to have a horn, possible for an animal to have wings, therefore it is plausible that a unicorn exists. But someone might come along with a little more detail and state, while it's possible that animals can have horns on their head, so far, no one has discovered that it's possible for a horse to. Therefore, it's only plausible that a horse would have wings or a horn, therefore it is only plausible that a unicorn exists

    I would say that someone doesn't have to witness a horned, winged horse to know that it is possible because it doesn't contradict any immediate forms of knowledge (reality): it abides by the laws of physics (as far as I know). This doesn't mean that it is plausible just because it is possible: I would say it isn't plausible because it doesn't meet my predefined standards for what I can constitute as plausible. However, for someone else, that may be enough to claim it is "plausible", but I would disagree and, more importantly, we would then have to discuss our thresholds before continuing the conversation in any productive manner pertaining to unicorns. Again, maybe I am just conflating the terms, but this is as I currently understand them to mean.

    Logically, what is plausible is not yet possible

    I don't agree with this, but I am open to hearing why you think this is the case. I consider a possibility to be, generally speaking, "Capable of happening, existing, or being true without contradicting proven facts, laws, or circumstances" and a plausibility, generally speaking, to be "Seemingly or apparently valid, likely, or acceptable; credible". I could potentially see that maybe you are saying that what is "seemingly...valid, likely, or acceptable" is implying it hasn't been applicably known yet, but this doesn't mean that it isn't possible (unless we specifically define possibility in that way, which I will simply disagree with). I would say that it is "seemingly...valid, likely, or acceptable" because it is possible (fundamentally) and because it passes a certain predefined threshold (that other subjects can certainly reject).

    I think this fits with your intuition then. What is plausible is something that has no applicable knowledge. It is more rational to believe something which has had applicable knowledge, the possible, over what has not, the plausible

    Again, the underlying meaning here I have no problem with: I would just say it is about the proximity and not whether it is possible or not (although it must first be possible, I would say, for something to be plausible--for if I can prove that something contradicts reality, then it surely can't be plausible). I think that we are just using terms differently.

    So then, there is one last thing to cover: morality. You hit the nail on the head. We need reasons why choosing to harm other people for self gain is wrong. I wrote a paper on morality long ago, and got the basic premises down. The problem was, I was getting burned out of philosophy. I couldn't get people to discuss my knowledge theory with me, and I felt like I needed that to be established first. How can we know what morality is if we cannot know knowledge?

    Finally, it honestly scared me. I felt that if someone could take the fundamental tenets of morality I had made, they could twist it into a half-truth to manipulate people. If you're interested in hearing my take on morality, I can write it up again. Perhaps my years of experience since then will make me see it differently. Of course, let's finish here first.

    I would love to hear your thoughts on morality and ethics! However, I think we need to resolve the aforementioned disagreements first before we can explore such concepts (and I totally agree that epistemology precedes morality).

    Applicable knowledge cannot claim it is true. Applicable knowledge can only claim that it is reasonable.

    I absolutely love this! However, I would say that it is "true" for the subject within that context (relative truth), but with respect to absolute truths I think you hit the nail on the head!

    I look forward to your response,
    Bob
  • Philosophim
    2.6k
    I think the true issue here is a difference in our use of terms between plausibility and possibility. Let's see if we can come to the same context.

    I am repurposing the terms of probability, possibility, and plausibility after redefining knowledge into distinctive and applicable knowledge. The reason is, the terms' original use was for the old, debated, generic knowledge. As they were, they do not work anymore. However, they are great words, and honestly only needed some slight modifications. If you think I should invent new terms for these words, I will. The words themselves aren't as important as the underlying meaning.

    At each step of the inductive hierarchy, it is a comparative state of deductive knowledge, versus applicable knowledge.

    Possibility is a state in which an applied bit of distinctive knowledge has been applicably known. At that point in time, a belief that the applicable knowledge could be obtained again is the belief that it is "possible".

    Plausibility is distinctive knowledge that has not been applicably tested, but we have a belief as to the applicable outcome.

    You noted,
    Logically, what is plausible is not yet possible

    I don't agree with this, but I am open to hearing why you think this is the case.
    Bob Ross

    The reason something plausible is not yet possible is that once something plausible has been applicably known one time, it is now possible. It is an essential property of the meaning of plausibility that it is exclusionary from what is possible.

    As such, many times you were comparing two possibilities together, instead of a plausibility and a possibility.

    However, I think that to say something is "possible" is to admit that it doesn't directly contradict reality in any way (i.e. our immediate forms of knowledge) and has nothing directly to do with whether I have ever experienced it before. For example, given our knowledge of colors and the human eye, I can state that it is possible that there are other shades of colors that we can't see (but with better eyes we could) without ever experiencing any new shades of colors.Bob Ross

    We say something is possible if it has been applicably known at least once. To applicably know something, you must experience it at least once. We cannot state that it is possible that there are other shades of color that humanity could see if we improved the human eye, because no one has yet improved the human eye to see currently unseeable colors.

    What you've done is taken distinctive knowledge that is built on other applicable knowledge and said, "Well, it's "likely" there are other colors". But what does "likely" mean in terms of the knowledge theory we have? It's not a probability, or a possibility, because the distinctive knowledge of "I think there are other colors the human eye could see if we could make it better" has never been applicably known.

    We could one day try improving the human eye genetically. Maybe we would succeed. Then we would know it's possible. But until we succeed in applicably knowing once, it is only plausible.
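    (If it helps, the definitions above can be sketched as a tiny bit of bookkeeping; the names here are my own and nothing more than an illustration.)

        class Claim:
            # A bit of distinctive knowledge starts out as a plausibility and only
            # becomes a possibility once it has been applicably known at least once.
            def __init__(self, statement):
                self.statement = statement
                self.times_applicably_known = 0

            def applied_without_contradiction(self):
                self.times_applicably_known += 1

            @property
            def status(self):
                return "possibility" if self.times_applicably_known > 0 else "plausibility"

        new_colors = Claim("augmented human eyes could see new shades of color")
        print(new_colors.status)                     # plausibility
        new_colors.applied_without_contradiction()  # suppose the augmentation one day succeeds
        print(new_colors.status)                     # possibility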

    I feel that "Plausibility" is one of the greatest missing links in epistemology. Once I understood it, it explained many of the problems in philosophy, religion, and fallacious thinking in general. I understand your initial difficulty in separating plausibilities and possibilities. Plausibilities are compelling! They make sense in our own heads. They are the things that propel us forward to think on new experiences in life. Because we have not had this distinction in language before, we have tied plausibilities and possibilities into the same word of "possibility" in the old context of language. That has created a massive headache in epistemology.

    But when we separate the two, so many things make sense. If you start looking for it, you'll see many arguments of "possibility" in the old context of "knowledge", are actually talking about plausibilities. When you see that, the fault in the argument becomes obvious.

    With this in mind, re-read the points I make about immediateness, and how that can only apply to possibility. Plausibilities cannot have immediateness, because they are only the imaginations of what could be within our mind, and have not been applied to reality without contradiction yet.

    I would say that someone doesn't have to witness a horned, winged horse to know that it is possible because it doesn't contradict any immediate forms of knowledgeBob Ross

    As one last attempt to clarify, when you state it doesn't contradict any immediate forms of knowledge, do you mean distinctive knowledge, or applicable knowledge? I agree that it does not contradict our distinctive knowledge. I can imagine a horse flying in the air with a horn on its head. It has not been applied to reality however. If I believe it may exist somewhere in reality, reality has "contradicted" this distinctive knowledge, by the fact that it has not revealed it exists. If I believe something exists in reality, but I have not found it yet, my current application to reality shows it does not exist.

    Plausibilities drive us to keep looking in the face of reality's denial. They are very useful, the powerful drivers of imagination and creativity. But they are not confirmations of what is real, only the hopes and dreams of what we want to be real.

    I hope that clears up the issue. Fortunately, this may be the final issue! Great discussion as always.
  • Bob Ross
    1.7k
    Hello @Philosophim,
    I agree: I think that we are using terms drastically differently. Furthermore, I don't, as of now, agree with your use of the terminology for multiple different reasons (which I will hereafter attempt to explain).

    Firstly, the use of "possibility" and "plausibility" in the sense that you have defined it seems, to me, to not account for certain meaningful distinctions. For example, let's consider two scenarios: person one claims that a new color could be sensed by humans if their eyes are augmented, while person two claims that iron can float on water if you rub butter all over the iron block. I would ask you, within your use of the terms, which is more cogent? Under your terms, I think that these would both (assuming they both haven't been applied to reality yet) be a "plausibility" and not a "possibility", and, more importantly, there is no hierarchy between the two: they only gain credibility if they aren't inapplicable plausibilities and, thereafter, are applied to reality without contradiction. This produces a problem, for me at least, in that I think one is more cogent than the other. Moreover, in my use of the terms, it would be because one is possible while the other can be proven to be impossible while they are both still "applicable plausibilities" (in accordance with your terms). However, I think that your terms do not account for this at all and, thereby, consider them equal. You see, "possibility", according to my terms, allows us to determine what beliefs we should pursue and which ones we should throw away before even attempting them (I think that your use of the terms doesn't allow this, we must apply it directly to reality and see if it fails, but what if it would require 3 years to properly set up? What if we are conducting an experiment that is clearly impossible, but yet considered an "applicable plausibility"? What term would you use for that if not "possibility"?).

    Moreover, there is knowledge that we have that we cannot physically directly experience, which I am sure you are acquainted with as a priori, that must precede the subject altogether. I haven't, and won't ever, experience directly the processes that allow me to experience in the first place, but I can hold it as not only a "possibility" (in my sense of the term) but also a "highly plausible" "truth" of my existence. Regardless of what we call it, the subject must have a preliminary consideration of what is worth pursuing and what isn't. I think that term is "possibility". I think that you are more saying that we must apply it to reality without contradiction--which confuses me a bit because that is exactly what I am saying but I would then ask you what you would call something that has the potential to occur in reality without contradiction? If you are thinking about the idea of iron floating on water, instead of saying "that is not possible", are you saying "I would not be able to apply that to reality without contradiction"? If so, then I think I am just using what I would deem a more concise word for the same thing: possibility. Furthermore, it is a preliminary judgement, not in terms of claiming that something can be applied to reality to see if it holds: I could apply the butter-rubbed iron on water idea and the color one, but before that I could determine one to be an utter waste of time.

    Secondly, your use of the terms doesn't account for any sort of qualitative likelihood: only quantitative likelihood (aka probability). You see, if I say that something isn't "possible" until I have experienced it at least once, then a fighter jet flying at the speed of sound is not possible, only plausible, for me, because I haven't experienced it directly nor have I measured it with a second-hand tool. However, I think that it is "plausible", in a qualitative-likelihood sense, because I've heard from many people I trust that jets can travel that fast (among other things that pass my threshold of what can be considered "plausible"). I can also preliminarily consider whether this concept would contradict any of my discrete or applicable knowledge and, given that it doesn't, I would be able to categorize it as completely distinct from a claim such as "iron can float on water". I would say that a jet traveling at the speed of sound is "possible", therefore I should pursue further contemplation, and then I consider it "highly plausible" because it meets my standard of what is "highly plausible" based on qualitative analysis. In your terms, I would have two "plausibilities" that are not "possible" unless I experience them (this seems like empiricism the more I think about it--although I could be wrong), and there is no meaningful distinction between the two.

    Thirdly, I think that your use of the terms lacks a stronger, qualitative (rationalized) form of knowledge (i.e. what "plausibility" is for me). If a "plausibility" is weaker than a "possibility", and a "possibility" is merely that which one has experienced at least once, then we are left without any useful terms for when something has been witnessed once but isn't as qualitatively likely as another thing that has been witnessed multiple times. For example, the subject could have experienced a dog attack a human; therefore, it is "possible" and not "plausible" (according to your terms). But when a passerby asks them whether their dog will attack if petted, the subject now has to consider, not just that an attack is "possible" since they have witnessed one before, but the qualitative likelihood that their dog is aggressive enough to be a risk. They necessarily have to create a threshold, which is only useful in this context if the passerby more or less agrees with it, that must be assessed to determine whether the dog will attack or not. They must both agree, implicitly, because if the subject's threshold is too drastically different from the passerby's, then the passerby's question will be answered in a way that won't portray anything meaningful. For example, if the subject thinks that their dog will be docile as long as the passerby doesn't pet its ears and decides to answer "no, it won't attack you", then that will not be very useful to the other subject, the passerby, unless they also implicitly understand that they shouldn't pet the ears. Most importantly, the subject is not making any quantitative analysis (as we have discussed earlier) but, rather, what I would call qualitative analysis, which I would frame in terms of "plausibility". However, if you have another term for this I would be open to considering it, as I think that your underlying meaning is generally correct.

    Fourthly, I think that your redefinitions would be incredibly hard to get the public to accept in any colloquial sense (or, honestly, any practical sense) because they 180 people's perception of it all and, as I previously mentioned, don't provide enough semantic options for them to accurately portray meaning. I am not trying to pressure you into having to abide by common folk: I just think that, if the goal is to refurbish epistemology, then you will have to either (1) keep using the terms as they are now or (2) accompany their redefinitions with other terms that give people the ability to still accurately portray their opinions.

    We cannot state that it is possible that there are other shades of color that humanity could see if we improved the human eye, because no one has yet improved the human eye to see currently unseeable colors

    I would say that this reveals what I think is lacking in your terminology: we can't determine what is more cogent to pursue. In my terminology, I would be able to pursue trying to augment the eye to see more shades of color because doing so is "possible". I am not saying that I "know" that those shades exist, only that I "know" that they don't contradict any distinctive or applicable knowledge I have (what I would call immediate forms of knowledge: perception, thought, emotion, rudimentary reason, and memories). I'm not sure what term you would use here in the absence of "possibility", but I am curious to know!

    But what does "likely" mean in terms of the knowledge theory we have? It's not a probability, or a possibility, because the distinctive knowledge of "I think there are other colors the human eye could see if we could make it better" has never been applicably known.

    Again, I think this is another great example of the problem with your terms: if it isn't possible or probable, then it is just a plausibility like all the other plausibilities. But I can consider the qualitative likelihood that it is true and whether it contradicts all my current knowledge, which will also determine whether I pursue it or not. I haven't seen a meteor, nor a meteor colliding with the moon, but I have assessed that it is (1) possible (in my use of the term) and (2) plausible (in my use of the term) because I have assessed whether it passes my threshold. For example, I would have to assess whether the people that taught me about meteors would trick me or not (and whether they are credible and have authority over the matter--both of which require subjective thresholds). Are they liars? Does what they are saying align with what I already know? Are they trying to convince me of iron floating on water? These are considerations that I think get lost in the infinite sea of "plausibilities" (in your terms). The only thing I can think of is that maybe you are defining what I would call "possible" as an "applicable plausibility" and what I would call "impossible" as an "inapplicable plausibility". But then I would ask what determines what is "applicable"? Is it that I need to test it directly? Or is it the examination that it could potentially occur? I think that to say it "could potentially occur" doesn't mean that I "know" that it exists, just that, within my knowledge, it has the potential to. I think your terms remove potentiality altogether.

    I feel that "Plausibility" is one of the greatest missing links in epistemology. Once I understood it, it explained many of the problems in philosophy, religion, and fallacious thinking in general. I understand your initial difficulty in separating plausibilities and possibilities. Plausibilities are compelling! They make sense in our own head. They are the things that propel us forward to think on new experiences in life. Because we have not had this distinction in language before, we have tied plausibilities and possibilities into the same word of "possibility" in the old context of language. That has created a massive headache in epistemology.

    I understand what you mean to a certain degree, but I think that it isn't fallacious to say that something could potentially occur: I think it becomes fallacious if the subject thereafter concludes that because it could occur, it does occur. If I "know" something could occur, that doesn't mean that I "know" that it does occur; moreover, I find this to be the root of what I think you are referring to in this quote.

    But when we separate the two, so many things make sense. If you start looking for it, you'll see that many arguments of "possibility" in the old context of "knowledge" are actually talking about plausibilities. When you see that, the fault in the argument becomes obvious.

    I agree in the sense that one should recognize that just because something is "possible" (in my use of the term), that doesn't mean that it actually exists; it just means that it could occur (which can be a useful and meaningful distinction from things that cannot). I also understand that, within your use of the terms, you are 100% correct here: but I think that the redefining of the terms leads to other problems (which I have been, and will continue to be, addressing in this post).

    Plausibilities cannot have immediateness, because they are only the imaginations of what could be within our mind, and have not been applied to reality without contradiction yet.

    I think that they are applied to reality without contradiction in an indirect sense: it's not that they directly do not contradict reality, it's that they don't contradict any knowledge that I currently have (distinctive and applicable). This doesn't mean that it does happen, or is real, but, rather, that it can happen: it is just a meaningful distinction that I think your terms lack (or I am just not understanding them correctly). And, to clarify, I think that "applicable plausibilities" aren't semantically enough for "things that could occur", because then I am no longer distinguishing "applicable" and "inapplicable" plausibilities based on whether I can apply them to reality or not. In my head, it is one thing to claim that I could apply something to reality to see if it works, and another to say that, even if I can't apply it, it has the potential to work. If I state that the teapot example is an "inapplicable plausibility", then I think that the butter on iron example, even if it took me three years to properly set up the experiment, would be an "applicable plausibility" along with the shades-of-colors example. But I think that there is a clear distinction between the shades-of-colors example and the butter on iron example--even if I can apply them, given enough time, to reality to see if they are true: I have already applied to reality enough of the concepts that must be presupposed for the idea of iron floating on water to work and, therefore, it doesn't hold even if I can't physically go test it.

    As one last attempt to clarify, when you state it doesn't contradict any immediate forms of knowledge, do you mean distinctive knowledge, or applicable knowledge? I agree that it does not contradict our distinctive knowledge. I can imagine a horse flying in the air with a horn on its head. It has not been applied to reality however. If I believe it may exist somewhere in reality, reality has "contradicted" this distinctive knowledge, by the fact that it has not revealed it exists. If I believe something exists in reality, but I have not found it yet, my current application to reality shows it does not exist.

    I mean both (I believe): my experiences and memories, which are the sum of my existence. However, I am not saying that it exists, only that it could exist. This is a meaningful distinction from things that could not exist. I would agree with you in the sense that I don't think unicorns exist; not because they can't exist, but because I don't have any applicable knowledge (I believe that is what you would call it) of them. So I would agree with you that I can't claim to "know" a unicorn exists just because it could exist: but I can claim that an idea of a unicorn that is "possible" is more cogent than one that is "impossible", regardless of whether I can directly test anything or not.

    But they are not confirmations of what is real, only the hopes and dreams of what we want to be real.

    I sort of agree. There is a distinction to be made between what is merely a hope or dream, and that which could actually happen. I may wish that a supernatural, magical unicorn exists, but that is distinctly different from the claim that a natural unicorn could exist. One is more cogent than the other, and, thereby, one is hierarchically higher than the other.

    I look forward to hearing your response,
    Bob
  • Agent Smith
    9.5k
    Methodology of knowledge?

    I know that I know nothing. — Socrates

    Socrates possessed a methodology of knowledge. He knows (that he knows nothing).

    Whatever methodology Socrates utilized, it failed to provide any other form of knowledge at all, apart from knowledge of his own ignorance.

    What does that tell you about Socrates' methodology of knowledge? It almost seems like his methodology was designed specifically to prove one and only one proposition: I know that I know nothing.

    How do you prove Socrates' (paradoxical) statement?

    1. I know nothing

    Ergo,

    2. I know nothing

    Remember, he only knows "I know nothing". The argument is a circulus in probando: fallacious (informally).

    :chin: