• Determinism, Reversibility, Decoherence and Transaction
    I might go back and edit my embarrassing snapping at MUKenosha Kid

    Generally that's not a good idea. For instance, if I edited out everything I said that was embarrassing to me, there really wouldn't be much left.
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    Children are often corrected when they learn to talk, by parents, teachers and others. They may not be taught explicit rules - that's my point - but they are still taught how to speak properly, and this training constitutes "the regulations or principles governing the conduct" of language use. As the Google definition of the word "rule" states, such regulations or principles can be either "explicit or understood".Luke

    To be corrected is not to be taught a rule. My point is that there is no such thing as a regulation or principle which governs conduct yet is not explicitly stated. You could keep denying this forever if you want, but you'll have no success at persuading me that I'm wrong unless you show me how such a rule might exist, if it does not exist as an expression in language.

    The issue appears to be that if rules of language use don't exist as an expression of language, then the rules do not exist within the public domain. If they are public, then where else could they exist if not as language? So we must turn to the private, internal domain of the individual to find these implicit rules, if they are real. Within that internal, private domain, we find what I called (for lack of a better word) "principles" in my discussion with Josh. The argument is that there is a very significant need to distinguish between these private "principles", which serve as some sort of guidance to free willing, intentional choices, and public "rules", which are explicit regulations that govern conduct. The difference is immediately evident in the role of correction.

    There are rules you can consult if you have doubts about what ought or ought not be done with language, e.g. dictionaries, thesauri, rules of syntax, other fluent speakers, written examples, etc. Language is a tool. Learning how to use a hammer won't tell you when or where you should hammer, either.Luke

    None of those, nor all of those together, tell me how to express myself coherently. They are not rules for how to talk. The hammer is a good example. There are no rules for how to use a hammer, so long as you do not damage private property or injure someone.

    Intention is irrelevant to our disagreement, which is whether or not rules must be made explicit.Luke

    How is intention irrelevant, when to follow a rule is to intentionally act according to the rule? If we remove the relevance of intentionality, this just produces the ambiguity required for you to freely equivocate between following a rule in the prescriptive sense and following a rule descriptively. You do recognize the difference between these two, don't you? If we remove the role of intention, as Antony proposes, then you might make the absurd claim that a person could unintentionally follow a prescriptive rule: by appealing to the descriptive sense, in which physical objects follow the laws of physics, and then concluding that intentional acts like talking are acts of following rules in that sense. But that's not what we're talking about; we're talking about people engaged in freely chosen activities. So hiding the role of intentionality is clearly a misrepresentation, which is unacceptable because it invites the misunderstanding of equivocation.

    Put simply, you're mistaken. The dictionary definition I have repeatedly given states that a rule can be either "explicit or understood". You appear to be using a special meaning of the word "rule" that excludes the (unarticulated) "understood regulations or principles governing conduct or procedure within a particular area of activity".Luke

    This is evidence of your delusion. You think that the dictionary definition provides a stated rule for how the word "rule" must be used, and if I step outside the precise boundary of your interpretation of that stated rule, I am necessarily mistaken.
  • Can we dispense with necessity?
    If I say "it's necessary for you to buy me some butter" what do I mean? Do I mean that it is a necessary truth that you will buy me some butter? No, clearly not. I mean that it is urgent, important, imperative, that you do so. That's typically what words such as 'must', 'always', 'never' and so on mean when we use them.

    So, the language of necessity is used in everyday life not to describe the world, but simply to emphasize things - that is, it functions 'expressively'.
    Bartricks

    Yes, I'd say that "necessary" here means that there is good reason for it.

    But philosophers - most, anyway - think that there is this weird thing 'metaphysical necessity'. It's a strange glue that binds things immovably. So, a 'necessary truth', on their usage, is not a truth it is extremely important that you believe (which is what it'd be if the word 'necessary' was functioning expressively), but a truth that cannot be anything other than true - so a proposition that has truth bonded to it so strongly that it can never come away.Bartricks

    Oh, I see the problem: you think there is some sort of "metaphysical necessity" referred to, which is a "strange glue", and that's why you don't like the usage. I suggest you just release that idea of a metaphysical necessity, and just look at "necessity" here in the normal way, as meaning "good reason", and your problem will be solved.

    Now, 'that' kind of necessity - metaphysical necessity - is the kind that I am suggesting we can dispense with. It is really just a case, I think, of us taking language that normally functions expressively, literally. As such we can dispense with it.Bartricks

    OK, I agree there's no need to assume this "metaphysical necessity". But do you agree that when the conclusion follows logically from the premises, then it is "necessary" in the normal sense, meaning that there is good reason for it?

    It's just when I draw a conclusion, I think the conclusion 'is' true, whereas others will think that it is 'necessarily' true. But there's no real difference. It's not like there are two grades of truth. There are just true propositions and false propositions and a story to tell about how they got to be that way.Bartricks

    I tend to think that there are different grades of truth, depending on the reasons the person has for believing what is believed. True or false is a judgement we make, and the judgement can be made for a variety of different reasons, some better than others. So, suppose that the truth of the conclusion is dependent on both the truth of the premises and the strength of the logic employed. If this is the case, then the truth of the premise is a higher grade of truth than the truth of the conclusion, because it is more likely that the conclusion, rather than the premise, would be false.
  • Donald Trump (All General Trump Conversations Here)
    It sounds like double standards but there is no evidence you have any.NOS4A2

    There's shit loads of evidence, you just explain it away, like any good conspiracy theorist will do.

    Yeah, I could care less if Russians bought Facebook ads.NOS4A2

    No, I couldn't care less because there is no crime...NOS4A2

    What you didn't seem to grasp four years ago, and still don't seem to grasp is that there are laws against foreign participation in an American election. It is a crime.
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!

    Is that "ironic" in the sense of funny, and sad in the sense of distressing?
    Lol, now that's irony!
  • Can we dispense with necessity?
    I think you are confused about the kind of thing the rules of logic are. The rules of logic are instructions. They don't describe how we think, they 'tell us' how to think. So, we are told to believe that the conclusion is true if the premises are.Bartricks

    Right, so "necessarily" means that you will judge the conclusion as true if you adhere to the rules, instructions.

    Here's an instruction: if they have any butter, buy me a pad of butter. That's an instruction and you can follow it. There's no necessity invoked. I am just telling you to do something under certain conditions.Bartricks

    Clearly there is necessity invoked here. You are telling me that if they have butter then I need to get you a pad of butter; you are just not explicit about the "need". It's exactly analogous to the example of logic. I can, of my own free choice, choose not to get you the butter, and this means that I do not see the need, just as you can, of your own free choice, choose not to follow the logic, and this means that you do not see the need. In the case of the logic we are explicit, using "necessarily".

    That's how things are with logic. We are indeed told that if the premises of a valid argument are true, then we 'must' believe the conclusion is true. But this does not indicate that necessity exists.Bartricks

    That's correct, but the issue you've brought up is whether or not "necessarily" serves a purpose, and it clearly does. It indicates that the conclusion is judged to be true only if you agree with the logical principles employed. So the "necessity" is within you, as the need to produce a conclusion. The judgement that the conclusion is true is contingent on you apprehending that need, just like me getting the butter for you is contingent on me apprehending the need. In the case of the logic we explicitly describe what produces the need: the logical process. In the case of the butter you are not explicit as to why I need to get butter for you.

    To return to the point though: "if they have any butter, buy me some" and "if they have any butter, you must buy me some" are both instructions that one can follow. As such one does not need to be told that the conclusion of a valid argument 'must' be true in order to follow logic; that would be akin to thinking that you could only do as I say if I said "if they have any butter you 'must' buy me some" as opposed to just saying "if they have any butter, buy me some".Bartricks

    In the case of the logic, we are told that if we follow the logic we must accept the conclusion. In the case of the butter, there are many ways you could ask: "can you buy me some?", "please buy me some", etc. Or, as you say, "buy me some". They are all ways of asking. If I am agreeable, I will apprehend the need and buy you some. You might also say "you must buy me some", and the same principle holds: you are still asking me to buy you some, and if I see the need, and am agreeable, I will.

    So, in the case of the logic we are given the reason we ought to accept the conclusion. "Necessarily" represents the reason, which is that the logic backs up the conclusion. In the case of the butter you are not giving me the reason why I "must" buy you some. So the two are not comparable. With the logic, "necessarily" gives reference to the logic, demonstrating the need. Unless you support the "must" by providing a reason why I "must" buy you the butter, as "necessarily" is supported by the logic (e.g. you will die without it), the "must" doesn't do the same thing as the "necessarily" does.
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    How is language use any different to these sorts of rule-governed activities?Luke

    In rule-governed activities, if I have any doubts about what I ought or ought not do, I can consult the rules. I see a semblance of rules for mathematics and logic, and some sketchy, ambiguous rules for reading and writing. But although I can find a little guidance on pronunciation, I really can't find any comprehensive set of rules for talking, which constitutes the majority of language use. In my experience, I see that people learn to talk, and do so adequately, without reference to rules. And if I have doubts about how to express what I want from someone else, there are no rules for me to refer to. That is how language use is different from a rule-governed activity. For the majority of its activities there are no rules to consult if one has doubts about what ought or ought not be done, but a rule-governed activity has rules which can be consulted.

    Explaining a particular use of a word is describing a rule for its use;Luke

    This is what is known in philosophy as a category mistake. You are talking about "a particular" and you switch it for a general, "rule". To explain a particular use of a word requires a description which is designed for the uniqueness of that particular instance of use. And this cannot be done through reference to a general rule. The general rule will not distinguish that particular instance of use from another, and therefore will not explain the meaning which is specific to the particular context.

    This is exactly why language cannot be a rule-governed activity. Each instance of language use occurs in a particular and unique set of circumstances, and the meaning must be designed, created, for that specific context. General rules cannot give us what is required for creating meaning which is designed for the peculiar, unique, features of the particular circumstances. Nor can general rules therefore, explain a particular use of a word. To explain any particular instance of use of a word we must refer to the particularities of the context in which it was used, because the meaning was designed in relation to that context. General rules are insufficient.

    Corporate culture is an understood set of behaviours which are often not explicitly expressed in language.Luke

    This is meaningless babble to me. I have no idea what you mean by "corporate culture", or "understood set of behaviours". But I'll repeat what I said already, a pattern of occurrences does not constitute a rule. It is the description of those occurrences which is the rule. So "understood set of behaviours" refers to description, in words, and therefore is expressed in language, despite what you assert.

    Also, rules and laws are often made explicit only after there has been some transgression of the implicit, understood principles of conduct.Luke

    So let's see what you're saying here. I am doing something you dislike, then you state a rule to prevent me from doing it, and you present me with that rule. Now you want to argue that I was knowingly breaking the rule before you even presented me with the rule. Your claim that I understood the principles of conduct before you presented me with the rule is imaginary fiction. "Implication" requires logic, which requires stated premises. So there is no such thing as "implicit" rules without language.

    There's also pets. Sometimes we train pets to respond to particular verbal commands. We might say that our pet understands to do (or not do) something, or behave a certain way, even though the pet doesn't speak English, and we might never make the rule explicit - to the pet - in English.Luke

    To "understand" does not require following a rule. You are simply begging the question, assuming that one cannot understand without following a rule. To understand requires some sort of empathy. The nature of this, identifying with the other, I have been discussing with Josh, but it really cannot be characterized as following rules. I described my experience of understanding another as presuming to allow the other's intention to become my own, such that I do what I think the other wants me to. The fact that it is presuming allows for the reality of misunderstanding. You might think that it's odd to believe that a dog or cat has intention, and that it allows my intention to become its own, and that's why it does what I want it to do, but it's no odder than believing that such animals "understand", and clearly these animals act with purpose.

    Finally, there is language itself. When children are trained how to use language, they learn "the regulations or principles governing conduct or procedure" for the activity of language use, which is a definition of "rule(s)". Obviously, children don't already know the principles that govern (i.e. the rules of) speaking English before they learn how to speak English.Luke

    That a person behaves in an habitual way does not demonstrate that they have learned "the regulations or principles governing conduct or procedure". If this were true, then we'd have to conclude that birds, insects, and probably even single-celled beings have learned the regulations governing conduct and procedure.

    Of course you have, except that you are blinded by a particular definition of "rule" which you think requires that it must be expressed in language.Luke

    It's an inductive conclusion. All the rules I've ever known have been expressed in language, therefore I think that a rule must be expressed in language. I've already invited you to disprove this principle, and I'm still waiting, as your attempts seem to have failed. Until you provide that proof, I'll adhere to my reasoning.
  • Can we dispense with necessity?
    I reject determinism because the notion invokes necessity. But that leaves open whether we have free will or not (which is what one would expect if necessity is doing no real work) as it leaves open whether we are originating causes of our decisions or mere links in a chain. It's the latter that seems to preclude our being free.Bartricks

    When you say "if those premises are true, then the conclusion will be as well", you are talking about judgement. If the premises are judged as true, then so will be the conclusion. What is that judgement based in, if not the necessity of logic? Is it a free will judgement? In that case a person would be free to say that the conclusion will not be true.

    I can do something similar. Here: I stipulate that a valid argument is one that, if the premises are true then the conclusion is Potter true.Bartricks

    I think the point is that one judges the premises as true, for some reason. That reason need not be stated. So when they say that the conclusion of a logical argument is "necessarily" true, this is a statement as to the reason why it is judged to be true. It is judged as true because of the necessity which the logic produces.

    Rather than argue that "necessarily" has no purpose here, because it does serve a purpose, you'd be better off to look at the premises and ask why there is no qualification on the use of "true" in the premise. But wait, there is. It says "if" the premises are true, then the conclusion is necessarily true. So there's no problem at all. It says that if the premises are true, then the conclusion is necessarily true, where "necessarily" refers to the necessity produced by accepting the logic. If you reject the logic, which you could, of your own free will, then you would say that the conclusion is not necessarily true. Therefore "necessarily" clearly serves a purpose. It says that the judgement of truth assigned to the conclusion is dependent on acceptance of the logic.
  • Can we dispense with necessity?

    So you are replacing "must be" with "will be". I assume that "will" implies a free will, which is distinct from "must" which implies a determinist necessity. Are you saying that the logical process is a free will choice, to choose the logical conclusion, rather than that the logical conclusion is forced by some sort of determinist necessity?
  • Donald Trump (All General Trump Conversations Here)
    Based on recent events, I'd say he is far from useless.Echarmion

    He's got to keep up the agitation or else the guillotine falls. He's not in a happy place.
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    It's a simple solution for you to claim that language is necessary for rules but rules are not necessary for language. I would agree that language is necessary for the linguistic expression of rules (as you imply), simply because language is necessary for any linguistic expression.Luke

    Yes, we learn rules from their linguistic expression. If one simply observed an activity and made up so-called "rules" to follow, from the observations, in order to engage in that activity, this would not be a case of learning rules, it would be a case of making up so-called "rules". Those would be private rules which don't qualify as "rules", under Wittgenstein's restrictions, or criteria, as to what constitutes following a "rule".

    Consider the difference Wittgenstein describes between thinking oneself to be following a rule, and to be actually following a rule. Think of this as a part of Wittgenstein's definition of "rule", as a restriction placed on the word's usage. If I watched an activity, and made up rules for myself to follow, and then proceeded to engage in that activity, I might think that I was following a rule, but I wouldn't actually be following a rule.

    But why are rules not necessary for language? Is your position that language has no rules?Luke

    I see no reason to believe that rules are necessary for language. I have seen no acceptable logic which leads to this conclusion, and I see no evidence of learning rules in early childhood learning of language. I see that people only learn rules after they learn language. And rules are only a part of more advanced language development like writing, mathematics, and logic. Therefore I conclude that rules are not necessary for language use. So it is not my belief that language has no rules; I think that they are a feature of advanced languages. We could say they are an emergent feature of language use.

    Rules can be expressed in language. They don't have to be.Luke

    If you truly believe this, then you ought to be able to provide some examples. Show me some rules, or even a rule, which is not expressed in language. Remember though, a repeated pattern, or any type of pattern, does not constitute "a rule", but the description of it may be a "rule".

    Meta either cannot or will not set aside his framework...creativesoul

    I do this intentionally, to demonstrate to people like Antony who take agreement, "our coming together", "our shared lives" as a fundamental premise, that their premise is false.

    I understand you want to let me know that you disagree, but you simply rejected this with no justification other than that I'm not living in reality.Antony Nickles

    I justified it with both explanation and examples. My legs moving, in the context of walking, exists within the context of an intentional act, going to the store. My lips moving, in the context of speaking, exists within the context of an intentional act of speaking to my brother. I explained that even if we find simple habitual acts, which appear to be unintended, they exist within the overall context of a living human being who has ongoing goals, intentions, which influence those seemingly unintentional acts.

    You said:
    "Can we not just say: "I'm going to the store." or: "I'm speaking to my brother about something." We do not need your picture of intention here--"

    The fact of the matter, the reality of the situation, is that this is simply the way that 'we' (meaning most ordinary, normal people) talk about these things: that they are intentional acts. What do 'we' mean by "I'm going to the store"? 'We' mean that I am engaged in an intentional act with the goal of getting to the store.

    If your purpose is to deny the intention implied by such phrases when you answer the question "what do we mean by...", then you are not practicing your professed OLP, because you are not answering the question truthfully. This is part of the hypocrisy I warned you about. If, however, it is a part of your method to provide untruthful answers to such questions, then your method is one of deception.

    This is unacceptable behavior.Antony Nickles

    That I refuse to accept the principles of a hypocrite is unacceptable behaviour?

    I appreciate the opportunity to attempt to refine how I present this material but a blanket denial in the end leaves nothing to say.Antony Nickles

    Take off your tinted spectacles, and take a look at the situation! Who is the one in denial here? In defense of your denial of the reality of intention, you suggested we could take sayings like "I'm going to the store", and truthfully consider what is meant by them, outside the context of intention. I've simply pointed out to you that your proposition, that we speak about human acts as if they exist outside the context of intention, is not a realistic representation of how we speak or of what we mean.

    It must seem like a lonely world.Antony Nickles

    Why do you think I'm here, engaged in this unpleasant situation?
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    Husserl made a distinction between free and bound idealities. Mathematical logic is an example of a free ideality. It is designed to be able to be identically repeatable outside of all contexts; it is by itself empty of intentional meaning.Joshs

    I don't agree that being repeatable in any or all contexts makes mathematics empty of intention. That also would be the claim of those who support the idea of "pure mathematics", axioms created without the influence of intentional meaning. But the goal to produce something like that, with the possibility of universal application, is itself intentional. So I don't believe that we can escape intentional meaning in this way. As living human beings, even sleep might not free us from intentional meaning, as dream interpreters might show. Some might argue that meditation seeks to free us from intention, but meditation requires effort. I think it is inherently contradictory to engage in an activity which would have as its goal to free one from intention.

    Spoken and written language, and all other sorts of gestures and markings which intend meaning, exemplify bound idealities. Even as it is designed to be immortal, repeatable as the same apart from any actual occurrences made at some point, the SENSE of a spoken or inscribed utterance, what it means or desires to say, is always tied to the contingencies of empirical circumstance. In other words, no matter how hard we try to steadfastly adhere to a standard, there is always contextually driven slippage.Joshs

    I agree with this, but I believe that the "slippage" extends even to fundamental mathematics. Evidence of the slippage in fundamental mathematics is the fact that we have numerous distinct ways of numbering, natural numbers, rational numbers, real numbers, for example. As living human beings, with living conditions, living needs, and the constraints of a living body, we cannot produce pure principles which would be free from the influence of empirical circumstance.

    It sounds like you are saying that we have unaltered access to a standard first, and only after do we pick and choose what parts of it to apply to a new contextual situation. I’m saying that regardless of how hard we attempt to keep our understanding of the original standard an exact duplicate of the first time we became acquainted with it, there will be continual slippage in the meaning of that standard. Such slippage will be subtle enough, at least over short periods of time, that it will go unnoticed. For all intents and purposes we can claim to be able to consult an unchanged version of the standard every time we think of it in our mind or re-read it.Joshs

    As I said, I would not call these "standards", I'd prefer to call them principles. The fact is that we allow slippage, intentionally, for whatever reason. Since we intentionally allow slippage, not adhering to them because the principles are not universally applicable, or whatever, then we are actually appealing to a different hierarchy of values, one which does not give that principle the status of "a standard" in that hierarchy.

    We can see this very clearly in moral philosophy. We learn and accept moral principles, but then we intentionally stray from them. In other words, we sometimes do what we know is wrong. This is the argument Plato used against the sophists who claimed to teach virtue, insisting that virtue is a form of knowledge. To know the difference between, and be able to judge between, bad and good, wrong and right, or incorrect and correct, is not sufficient to ensure that one does not intentionally do what that person knows is wrong. We judge the act as wrong, yet we do it anyway.

    When we call them "principles", I think we recognize that they are themselves, free to be judged by us, in relation to other principles. When we call them "standards", I think that we think of them as the basis for judgement, therefore we think that we cannot judge them, because we'd have nothing to base that judgement on.

    More specifically, Goldman argues that my understanding of others is rooted in my ability to project myself imaginatively into their situation.Joshs

    I would not accept this proposal. To put myself into another's situation is far too difficult and complex, and it appears to be completely inconsistent with my experience. Instead, as I proposed in the last post, I think we allow the other's intention to replace our own. In other words, we submit to the other, to do what the other person wants from us. We can see this quite readily in education: the child does what the parents want, the student does what the teachers want, etc. So as a listener, to understand the speaker, we do not try to project into the speaker's position; we simply open up in trust, and allow our own intention to become one with the assumed intent of the speaker. We simply try to do what we think the speaker wants us to do.

    The advantage of my perspective is that doing this is very quick and easy and doesn't require the mental gymnastics of attempting to put oneself in the other's position. There's one simple question, what does the other want from me, and we learn to judge this very quickly as children. Since the judgement can be made very rapidly, conversation is facilitated. We go back and forth in conversation very quickly and smoothly, from listening, judging what the other wants, to speaking, showing the other what you want, without even noticing the transition.

    Where there is a problem, is that we are often not forthcoming in showing the others what we want from them. And if our trust for each other wanes, we will throw up more and more roadblocks to understanding, until these insecurities become habitual, and the person is naturally difficult to understand.

    When we interact directly with another person, we do generally not engage in some detached observation of what the person is doing. We do in general not at first attempt to classify his or her actions under lawlike generalizations; rather we seek to make sense of them. When you see somebody use a hammer, feed a child or clean a table, you might not necessarily understand every aspect of the action, but it is immediately given as a meaningful action (in a common world).Joshs

    There is a distinction to be made here between simply observing a person and judging what that person is doing, and engaging communicatively with a person, where I must judge what the person wants from me. At first glance, you might think that the latter would be a more difficult judgement to make, but I think the reality is that it is much easier. This is because what the person is doing, in communicating with you, is intentionally showing you what is wanted from you. In the other case, the person is just doing things, and you must judge what they are doing without the person trying to show you. So communication is much easier than putting yourself in the other's shoes, which would require determining all that the other is doing; it's just a matter of determining what the other wants from you, which is one of the things the person is trying to do anyway.

    I hope you see that this makes your rebuttal to my point appear to be that you know what reality is, and I do not.Antony Nickles

    Yes, you suggested that a human being could remove oneself from the context of intention, and I think that's simply unreal. It's no different from asking me to accept a proposition which I strongly believe to be false. I'd tell you that if you believe that proposition you simply do not know the reality of the situation.

    Can we not just say: "I'm going to the store." or: "I'm speaking to my brother about something."Antony Nickles

    We could say that, but intention is implied when we say "I'm doing...", "I'm going...", "I'm speaking...". To say that we ought to discuss these activities as if there is no intention involved would be foolish.

    And these show us something about intention--that it is a hope for the future, which, however, may go wrong (like shooting a cow instead of a donkey).Antony Nickles

    Yes, we always have a view toward the future, so intention is always present. A mistake does not remove intention from the scene, it just means that things did not turn out how it was intended.

    ..our shared lives...Antony Nickles

    Again, this is incoherent to me. My life is my life, and yours is yours. We are separated by space, we are born and die at different times. There is no such thing as a shared life, except perhaps the Siamese twins'.
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    My apologies, your precise words were "necessary truth".

    A principle is a synthesis of conceptions into a necessary truth.Mww
  • GameStop and the Means of Prediction
    This is how the elites govern now - by means of 'structural power', where they close ranks and deny access to means rather than end-product. One precedent for this which I've studied a lot is in the case of sovereign debt, where solidarity among lending institutions (banks and so forth) simply refuses to lend more to indebted countries in order to enforce austerity and political change (this is basically the story of international finance relations since the 70s, and no one talks about it). This kind of neoliberal strategy is favoured because it sticks with the script of "open-markets": the state isn't denying anything, it's allowing certain institutions to do stuff (even if that stuff happens to be denying access). It's devolution of power 'outside' the state and 'freedom' to corporate action.StreetlightX

    Allowing individuals to buy on margin is another way that the predictability of the market is increased. When a stock plummets to a particular point, it triggers a margin call, and the person who bought on margin is forced to sell in order to cover the margin. This amplifies the fall, making the overall fall more predictable. If it falls a certain percentage, margin calls will begin, forcing a further fall, and more margin calls. A fall will always proceed in a somewhat predictable way due to the standards employed in margins.

    From what I was reading earlier to make sure my understanding of margins was correct, that’s normal practice when someone buys something on margin and then it tanks below the required maintenance margin percentage (e.g. if you buy $2k of something with only $1k of cash, i.e. at 50% margin, and then it tanks to only $1.5k in value, if your maintenance margin is 33% then your holdings of that will be liquidated to cut the losses of the money you borrowed from the brokerage to purchase it).Pfhorrest

    Margin is more complicated than this though. If you have a variety of holdings, you can take margin on more stable stock to buy other, riskier companies which are not themselves marginable. This creates a dependence between different sectors of the market. Widespread market crash is now a natural and predictable phenomenon, as a fall in one sector will precipitate selling in another sector to cover margins, resulting in a cascading event which becomes quite predictable and lucrative for short sellers.
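    The arithmetic behind Pfhorrest's numbers can be made explicit. This is a minimal sketch, assuming the standard definitions (equity = market value minus loan, and a margin call fires once equity falls below the maintenance fraction of market value); the function names are my own, not anything from the discussion.

    ```python
    # Sketch of the maintenance-margin arithmetic in the example above.
    # Assumes: equity = market value - loan; a margin call triggers
    # when equity / value drops below the maintenance ratio.

    def equity(value: float, loan: float) -> float:
        """Investor's equity in a position bought partly with borrowed money."""
        return value - loan

    def margin_call_price(loan: float, maintenance: float) -> float:
        """Market value below which a margin call is triggered.

        Solving (value - loan) / value = maintenance for value gives
            value = loan / (1 - maintenance)
        """
        return loan / (1.0 - maintenance)

    # Pfhorrest's example: $2,000 position, $1,000 cash, $1,000 borrowed,
    # maintenance margin of one third.
    loan = 1000.0
    print(equity(1500.0, loan))          # equity after the drop: 500.0
    print(margin_call_price(loan, 1/3))  # call triggers below 1500.0
    ```

    At a market value of $1,500 the equity ratio is exactly 500/1500 = 1/3, i.e. right at the maintenance threshold, which is why any further fall forces liquidation; this forced selling is the amplification mechanism described above.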
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    If communication is a pre-requisite to learning, as you claim, then a child without language should not be able to learn, right?Luke

    No, "language" is the more specific term, while "communicate" is more general. Using language is a form of communicating, but there are forms of communicating which do not use language. If language is a specialized human form of communication, then the child might still use more animalistic types before learning the human type.

    He offers this as an example of a common philosophical misconception of language, not as an endorsement of the idea.Luke

    Right, it's a sort of dilemma which the philosophical misconception of language creates. The resolution to that dilemma is to recognize that the philosophical representation of language, which assumes rules as a necessary aspect of language, is wrong. Language allows for the existence of rules, which are expressed via language, and therefore cannot exist without language. This is a big part of that "coming together", the description of which Antony calls Grammar. If we allow, as a philosophical principle, that understanding some rules is prerequisite for language use, as is the common philosophical notion, then we have to account for the acquisition of these rules which are not learned through language use, as they must be already understood to be able to learn language. These rules would be private rules, constituting a private language, which is what Wittgenstein rejects.

    Why are "rules required to learn rules"? Because you say so?Luke

    You don't seem to grasp the issue. Rules are expressed in language. Therefore one must know how to interpret language to be able to learn a rule. To know how to interpret a language means that one knows how to use that language. Therefore one must know how to use a language prior to learning any rules. Consequently, we must conclude that rules are not a necessary part of language. You might try to avoid this conclusion by assuming some type of rules which are not expressed in language, but this leads to the private rules and the private language which Wittgenstein argues is an absurdity.

    The whole point of interaction theory is that standards don’t have any existence outside of their use, and in their use they are altered to accommodate themselves to what they are applied to.Joshs

    I can accept this, with a slight revision, and this is what I've been arguing. We cannot call this a "standard" then. That is why I rejected Antony's use of "criteria". The point, though, is that we also have stated standards, criteria, and laws which are not intended "to accommodate themselves to what they are applied to"; they are intended to be steadfastly adhered to. These are exemplified in mathematics and logic. And they are what those words more properly refer to.

    So, we clearly have a difference here, between the standards, criteria, and laws, which are intended to be adhered to, and these 'guidelines' (or whatever we ought to call them) which are intended to "accommodate themselves" by being alterable. Due to this difference, we ought to call them by distinct names to avoid confusion, equivocation, misunderstanding and mistake.

    What I see is a distinction between the public and the private as Wittgenstein exemplified. Within the public realm, we must honour our words, stay true to our principles, and establish an equality between individuals. This means that we must establish rigid standards, criteria and laws, which must be rigorously adhered to, to maintain equality which is the basis of empathy and understanding. However, these standards which we adhere to (and in extreme cases of "law", we enforce), are not a true representation of the principles which we use within our own private minds, in our acts of speaking and interpreting, and acting in general. Within our own minds we use some sort of guidance mechanisms which are completely flexible. They must necessarily be, to capacitate learning, and to be adaptable to circumstances.

    We might call these "principles" (as completely opposed to Mww's proposed definition of "principle" as an absolute truth). But this is how we speak in moral philosophy: we have principles which provide our moral guidance. The unique particulars of the very distinct situations which we find ourselves in make it impossible for us to govern our lives through strict adherence to any rigid standards or criteria, because these general, universal principles cannot be applied in the majority of those mundane situations. However, the moral person seeks to establish principles which can act as true standards, or criteria, because of the public domain which we partake in, and the necessity of interpreting one's acts, and the law, in a way which is consistent with others. In a sense then, there is a public pressure for the rigid standards which we must adhere to in our cooperation with others to enter into our private flexible guidance mechanism, as "principles".

    The way you are understanding them is precisely as internal templates or representations, which are first consulted and then compared with something else.Joshs

    The internal "principles" cannot be templates or representations. The whole point of such a principle is its applicability, usefulness, therefore it must be to the greatest possible extent, something general, universal. A template or representation is by its very nature, something particular. Since it facilitates action, as the mechanism for decision making, "the principle" must exist in a direct relation with intention, or will, if these words refer to the motivator of action.

    So consider this description you made: "interaction theory claims that we do not consult an internal set of representations or rules in order to relate to the other , but perceive their intent directly in their expressions." If this is the case, then what is involved in my recognizing what another person has said is simply a matter of switching out my intention and replacing it with the other's intention. My "principles" have a direct relation to my intention, and the switch allows a direct relationship with the other's intention because I have assumed the other's intention to take the place of my own. The important word is "assumed", because the other's intention doesn't actually take the place of mine; I simply allow it to seem that way.

    Now here's the complicating factor. This scenario, in its most simple and raw form, allows for unfettered deception. You can see that I would not intentionally deceive myself. But if I allow another's intention to freely take the place of my own intention, and the other's intent is to deceive, then it is just as if I intend to deceive myself when I allow the other's intention to take the place of my own. Since we actually do employ some safeguards against deception, this simple and raw form of "interaction", as I have represented it here, is not complete. It may provide a basic representation of habitual speaking, in which one completely removes one's own intention from the conversation to have direct access to another's (direct access to both at the same time is not possible, because that would be a contradiction), but this is never really the case in actual conversation. So this "direct access" is a type of assumption, a switching which we allow through some sort of "principles", but because there are these "principles" which allow it to occur, the other person's intention's access to my mind is not completely unfiltered, and not really direct.

    The conclusion to this is the principle I've been arguing: that there is a distinction to be made between hearing a person, recognizing what that person is saying, and actually understanding the person. Recognizing what the person is saying is the habitual act of assuming a direct relation with the other's intention. We might call this apprehending the meaning of the other's words. It's done by identifying with the other, allowing that my intention has become one and the same as the other's. However, that I really have direct access to the other's intention is an illusion; it's not a true assumption. I make the assumption for the pragmatic purpose of facilitating apprehension of the other's words. If I adhere to this assumption as a truth, deception is actually facilitated.

    Therefore we must assume another level, which constitutes true understanding. Hearing and recognizing what a person is saying, is just to identify with the person, allow that person's intention to be mine, therefore to see what the person has said as if it was me who said it. To truly understand the person is to then remove this switched intention, which creates the illusion of understanding that allows for deception, and understand what the person has said, as a separate person, with distinct intentions.

    Not "what do you mean by___" It's: "what do we mean when we say___?"Antony Nickles

    You're right back to incoherent nonsense now. Each act of saying something is an individual act of an individual person saying something in a particular situation. Context plays an undeniable role in meaning. Your phrases "we say" and "we mean" are incoherent, as if a phrase could be properly interpreted outside its context. This is representative of your false premise, that we "have come together", that "we" exist as an entity united through a common Grammar.

    But this is to just divide acts/expressions into intended ones and unintended ones, so the intended ones still fall under the picture of an ever-present cause (for those "intended"). And this is different than my proposing the question of intention only comes up sometimes, not that it applies to all acts that are (pre?) "intended".Antony Nickles

    You are simply denying the reality of the situation. Human beings are intentional beings. They always have goals and therefore they cannot separate themselves from their goals, as if they could pass some time without having any goals. So an habitual, "unintended" human act, exists within the wider context of intention. When I walk to the store, my legs are moving in an unintended habitual way, but this is within the context of me intending to get to the store. When I talk to my brother, my lips are moving and I'm making sounds in an unintended habitual way, but this is within the wider context of intending to speak to him about some subject.

    Your proposal, "intention only comes up sometimes", needs to be rejected as a false proposition. Intention is always there, as part of the background, the context.

    This is not "we" as in "you and I". It is "we" as in all English speakers (Cavell will say "native" speakers, not to be racist or exclusionary (intentionally) but to record the fact that learning a language is to learn (be trained in, is more accurate given Witt's student) all the things that we do and say). And here I am not saying people don't then disagree or have hidden motives or speak past each other or mistake a claim for a statement, etc.Antony Nickles

    If it's difficult to justify the idea that "you and I" exist as one united entity called "we", how much more difficult is it to justify your claim that "all English speakers" exist as such a united entity?

    I guess I don't see where I implied that mistakes happen without circumstances--"product of" seems to need accounting for, as if a mistake was a result of, at least an outcome of, the circumstances. "I made a mistake." "What about the circumstances led to the mistake [as an outcome]?"Antony Nickles

    Have you never looked at your own question, to ask what is meant by "a mistake"? A mistake is something which occurs when a person has not properly accounted for the particulars of the situation. Therefore it is always an outcome of the circumstances. "What about the circumstances led to the mistake?" The fact that the person (oneself a part of the circumstances) did not properly account for the particulars. "Why did you shoot the cow instead of the donkey?" "Someone put the cow into the donkey's stall and I didn't confirm that it was the donkey I was shooting." This is the answer to "why" in every instance of a mistake, "I did not take into account all the particulars of the circumstances". A mistake is an intentional act which was made without adequate knowledge of the particulars of the situation, therefore it does not result as intended. It is because each situation consists of particulars which are unique to that situation, as "the circumstances", and the person fails to account for the particulars, that mistakes are made.

    "We are separate people, but not separated by anything...Antony Nickles

    The biggest problem of idealism is to account for the fact that we, as individual minds, are separated. There is a very real medium of separation between your mind and my mind, which we call the material world, and this very real separation forces the idealist toward principles to account for this reality, to avoid solipsism. If you deny the reality of this separation between us, you force us into a reality in which there is no material world, and we are all just one solipsistic mind.
  • The self
    Pain is NOT reducible. Such complexities are only analytical correspondences.Constance

    Analysis is reduction. What are you saying, pain ought not be analyzed? That's a value judgement which needs to be justified. How do you justify it, by insisting that pain is the absolute, metavalue? And you justify this by claiming that pain ought not be analyzed. That looks like a vicious circle to me.

    But you also encounter this in such a reductive attempt: When you make the move to higher ground analytically, looking to physical brain activities, in the act of data extraction in the observation of the brain, you are not working from an outside perspective looking at the brain. You are LOOKING. Literally a product of brain activity and precisely the kind of thing you are supposed to be analyzing. This is the most obvious form of question begging imaginable.Constance

    No, not quite. I'm not observing the brain, I'm observing the finger; the pain is in the finger. And it is the fact that the pain is in the finger, making the brain want to analyze it, which makes it appear to consist of parts.

    The pain is what is evidently there, unproblematic in what it is. The reduction is on your part: you take what is clear as a bell, the screaming pain, and claim this is not what it really is. Its explanatory grounding is elsewhere. Well, if you are doing a scientific analysis of the physical anatomy of pain, then fine. But that is not this here at all.Constance

    I'm not denying the pain, I'm saying that there's more to it than just the pain. I feel the pain, I look at the place where it hurts, and I see the wound. Oh, there's a reason why I'm feeling this pain. PAIN is not the end of the inquiry, it's the beginning.
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    But at the same time the system as a whole accommodates itself to the novelty of what it assimilated. What is key to understanding this approach is that the system is an integrated network, and the accommodation changes the network's structure as a whole. Learning something new isn't simply a matter of synthesizing and combining the new event with one's extant cognitive system, but of altering the meaning of that system as a whole while assimilating the new item. This means when you subject someone's terms to your standards of judgement, those standards must at the same time accommodate and alter themselves in order to assimilate the other's terms.Joshs

    I haven't denied altering one's own standards, I just said the person has to establish consistency between the new information and one's standards. Sometimes the existing standards might be judged as wrong. That's the point I was arguing with Mww, knowledge is not only a matter of building onto existing principles, it's also a matter of rejecting principles once believed to be true, if later discovered to be false.

    I haven't yet seen what "understanding" involves in OLP. I don't think Antony has gotten to that point yet.

    The three main contenders are theory theory, simulation theory and interaction theory. Theory theory seems to be similar to your thinking. It posits that we understand and relate to others by consulting our own internal templates or representations. That is, we create a theory of how they are thinking and apply it to them. Simulation theory says that we imitate the other and learn to understand them that way. Against both of these representationalist approaches, interaction theory claims that we do not consult an internal set of representations or rules in order to relate to the other, but perceive their intent directly in their expressions. Interaction theory rejects representationalism because it never makes contact with another. Instead it just regurgitates the contents of its own cognitive system, which is not true interaction. The system must be affected and changed as a whole in response to communication with others. You can see the resonances here with Witt. Contexts of interaction create meanings, rather than just acting as excuses for a cognitive system to recycle its own inner contents.Joshs

    I didn't posit any internal templates or representations. What I posited was the need for interpretation. I don't think that interpretation is carried out through templates or representations, in most cases. As I said earlier, it's commonly a matter of familiarity, habit, and this involves recognition. So you might class me as closest to interaction theory, out of those three.

    I know about the what; I’m talking about the how (did a 2 get into Nature seeing as how it isn’t there naturally). You’re talking about what it’s there for, to relate a use to a meaning. I wish to know how the representation occurs such that it can be used.Mww

    I can't say I understand what you're asking. I distinguish between artificial and natural. A 2 didn't "get into Nature" it was created, just like a house, a car, or a chair, they are artificial.

    Uhhhh... this is the opposite of understanding. You are never going to get Hegel unless you find a way to meet him on his ground through his terms as he uses them.Antony Nickles

    I'm afraid I will never understand you then, if you're not willing to compromise with your terms, and explain yourself in a way which appears to be intelligible to me.

    I suggest going back through all these comments and finding the places where I am imagining something someone might say (in quotes). Those are the instances of method (I think there are some on the Witt page too). Sometimes it is "Imagine what one would say..." as well.Antony Nickles

    I just don't see the method. You mostly ask questions like "what do we mean when we say...?". To me, a method would be a way to answer such questions. How are we to answer that question, what method would we apply to determine what is meant by...? If, simply asking the question, "what do you mean by...?" is the method, then you ought to be very proud of me because I'm practicing it very well. I've been asking you, what do you mean by "ordinary criteria", by "grammar", etc.. It appears I'm already proficient at your method. But now you insist that I shouldn't be asking you to explain yourself, you think I ought to just be able to know what you mean without asking. So which is it? Should we ask what does this or that mean, or should we assume to be able to know what it means without asking?

    Sometimes (in regular life) you'll want to know the intention, as I have said, because something is fishy. But the picture that everything said is tied to a "meaning" or "intention" is the misconception that Austin and Witt spend their entire books overcoming, so maybe I'm not going to get you to see that here.Antony Nickles

    I totally agree with this. That is what I tried to bring to your attention when I spoke of familiar, habitual activities, which most of language use is. These language acts are mostly just responses, reactions, to the particular circumstances which we find ourselves in; we might even call them reflexive. So these language acts cannot be directly tied to any meaning or intention. You didn't seem to want to listen to me at that point, insisting that there was some type of criteria at play here, ordinary criteria. But I insisted that applying criteria is an intentional act, negating the assumption that these familiar, habitual acts are carried out without intentional direction. Therefore we cannot assume that there are criteria involved here.

    "My description is completely different from yours" is different than "how our lives have come together" (I would say "when"). Our shared language (concepts) is "how our lives have come together". Now our description of the Grammar of those concepts is subject to disagreement, but thus also open to agreement. Seeing the Grammar is to look at what we say when as instances of "how [when] our lives have come together"Antony Nickles

    The problem is much deeper than this. As I explained toward the end of the post, we have not really "come together". We are still spatially, temporally, and psychologically separated. So the claim that we have "come together" is not justified. To say that "our lives have come together" is a false description. Our attempts at togetherness are a never ending, ongoing effort to increase closeness.

    Well, the description is a claim about the ways in which intention works (its grammar); you may disagree.Antony Nickles

    I explained why this claim is completely unintelligible to me, and you've done nothing to clarify it, only reasserted it. You've defined grammar as a description concerning how we have come together in our lives. Clearly my intentions are quite distinct from your intentions. So if you think that you have a description of how our intentions have come to work together, I'm ready to hear it. Otherwise I think it's a false premise, and the true premise would be that getting distinct people with distinct intentions to work together is an arduous task, not something which ought to be taken for granted.

    Well, you can theorize about the "cause" of mistakes, or we can ask when we might say it: "What was the cause of your mistakenly shooting the cow, and not the donkey?" Of course, this is probably a different sense of "mistake" (used as to actions) than I believe you are using. But how would we ask your question? "I made a mistake." "What was the cause?" Now there are a number of answers here, perhaps they show the grammar of explaining a mistake (as in confessing to it, asking for help in correcting it, or learning how it went wrong, etc.) Now do we want a theory to avoid the mistake?Antony Nickles

    I find that there's a problem with your example of "mistake". A mistake, no matter when or where it occurs, is a product of the particular circumstances. I think that is the only generalization we can make about mistakes, other than that something has gone wrong. You keep going on as if we can make some sort of general description of a mistake, the grammar of a mistake, but each mistake must be dealt with as a particular individual, just like each human being is. Your idea, that we can describe all the human beings together as a Grammar of our being, or all the mistakes together as a grammar of mistakes, is deeply flawed.

    I wouldn't say the control we have over our shared lives is through description (maybe politics, dissent, violence, etc.--Emerson will call this "aversion", Thoreau of course, civil disobedience). I do agree that we can disagree over our descriptions of our Grammar (though we are not doing sociology), but there is a logic and rationality to this (through OLP's method), though no certainty of agreement, or the kind of justification you might want.Antony Nickles

    The big question though: do you see that we have control over our own descriptions, the descriptions which we make, of whatever we describe? We can choose whatever words we want, even make up new ones. Furthermore, there is no need that we be truthful or accurate; we can leave things out, and engage in all manner of deception, depending on what one's intention is. The intention of the individual is not completely irrelevant. So, how can there be such a thing as "our Grammar"?

    And if we apply the OLP method and ask "what is meant by such and such?" how do we know the descriptive method which the describer who has control over one's own description, as well as individual intention, is employing? The togetherness which is implied by "our Grammar" has to itself be wanted, intended, or else OLP loses any footing it might have had. What good is a philosophy which is only useful so long as everyone is behaving honestly, and no one is practicing rhetoric, or sophistry, because it takes togetherness of intention as a premise?

    That doesn't follow, I can break the Grammar of an apology; that doesn't mean an apology is not an apology, but that I am a jerk.Antony Nickles

    If you break the Grammar of an apology, then you are not making an apology. If the thing is not consistent with the description, then it is not the named thing. Otherwise you could call anything an apology.

    Well I would simply call this cynicismAntony Nickles

    It's not cynical, it's just a description based in evidence. The evidence I cited is physical and obvious, spatial temporal relations. Further, the psychological evidence of human emotions, and moral attitudes, indicates that the small degree of togetherness which we do enjoy, is difficult to maintain, requiring effort, and dedication.

    I'm not going to try to talk you out of this, but this is the slope that leads to a picture of every expression being intended or meant or thought and understood or interpreted, and those are all up to you and me. As if we were responsible not to what we have expressed (held to it), but that we are responsible for everything--the whole process--thus the need to perfect language (rather than ourselves).Antony Nickles

    This takes us right back to where we first engaged. It does not lead to a picture of every expression being intended; I don't know where you derive that from. I spent considerable time explaining to you that the majority of our expressions are habitual, and not thought out. The problem, though, is that embedded within this habitual activity is where we find the majority of mistakes. This is why, in philosophy, we employ things like criteria in an attempt to avoid such mistakes. These are avoidable mistakes, and your attitude of 'oh well, we shouldn't worry about those mistakes' is disturbing. Philosophy only sees the need to perfect language to the extent required to better ourselves (avoid making mistakes). But improving language is a real need, because bettering ourselves requires working together, which couldn't happen if we continually misunderstood each other (made those mistakes).
  • The world of Causes
    I do think there can be a 3rd perspective in a sense that you are simply observing a situation.Thinking

    You need to respect the fact that one can only observe what is made possible from one's own capacity for observation. So the observation is fundamentally subjective, according to the limits of one's capacity.
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    If you are actually interested in Wittgenstein's notion of grammar, I recommend reading this article.Luke
    Thanks Luke.

    It's not so black-and-white. You have to allow for learning and intermediate stages of development and capability. Children can learn the rules of grammar just as they can learn the rules of a game. It takes practice.Luke

    Learning is a social interaction. That's the point: a child needs to be able to communicate in order to be able to learn. That's what Witt pointed out at the beginning of PI; it's as if a child needs to already know a language in order to learn a language. That's why we cannot characterize language as consisting of rules, because then we'd have an infinite regress of rules required to learn rules, and rules required to learn those rules, etc.

    The general conceptual structure stayed the same; the arrangement of the structure changed,Mww

    Well, since a structure is an arrangement of parts, I really don't see how the arrangement of a structure can change while the structure stays the same. To say that the arrangement changed is to say that the structure changed. If the objects stayed the same, that does not mean the structure stayed the same, unless the structure is the object; but the structure is what changed.

    Easy: it isn’t knowledge that’s wrong, it is the incompleteness of the conditions for it, or misunderstanding of the complete conditions, that are wrong. As I said before, knowledge is at the end of the chain, so it is theoretically inconsistent to claim an end is a fault in itself. Think about it: how is it that you and I know everything there is to know about shoes, but you know your shoe size and I do not. Can you claim, without being irrational about it, that my knowledge of shoes is wrong because I don’t know about two of them?Mww

    But to say that the sun goes around the earth every day is simply wrong. It's not a matter of incompleteness, it's a matter of making a faulty representation, a faulty model. It's a falsity. And unless your representation of knowledge can account for this wrongness, this falsity, within what is known as knowledge, your representation of knowledge is wrong. Saying that all faulty knowledge is a matter of incompleteness is simply wrong, because faulty knowledge is sometimes a false representation.

    How can it be, that there are no 2’s in Nature unless we put them there? Because of an active domain specific, if not exclusive, to human sentience over and above their domain of mere reactive experience.Mww

    A 2 is a symbol; they are put here by human beings. What a 2 represents in a particular instance of use is the symbol's meaning in that instance.

    At bottom, a premise is usually a subject/copula/predicate proposition. A principle is a synthesis of conceptions into a necessary truth. From that, a premise can be the propositional form of a principle, but a principle does not have a propositional form.Mww

    This I don't understand at all. What form does a principle have if not a propositional form? How would I differentiate between a principle and a proposition if I were presented with a bunch of each? And what makes a principle necessarily true? A proposition, for example, is judged as true or false, and that judgement might be wrong. What excludes "the principle" from such a judgement, making it necessarily true? When I see a principle in propositional form, beside a proposition, how would I know which one is necessarily true?

    "Walking in my shoes" as an idiom here would mean trying understand me on my terms rather than subject my terms to your standards of judgment.Antony Nickles

    Don't you see what I've been saying, that this is what "understanding" is: to subject another's terms to one's own standards? That's what I've been trying to tell you, a number of times now. To simply accept, and agree to, another's terms is not to understand the other person. That's why we are taught in school to put things in our own words, and not to plagiarize. If one does not establish a consistency between what the other person has said and one's own standards of judgement for interpretation, then that person cannot claim to have understood what the other said. Interpretation is an act of subjecting your terms to my standards of judgement. If I have not interpreted what you have said, but simply read the words and agreed to them, it is impossible that I have understood what you have said.

    Try to understand that it is a method not a theory; I have repeatedly given examples and samples of Witt's text.Antony Nickles

    But I don't see that you are showing me a method. You are saying things, talking about concepts, criteria, and grammar, without showing me the things you are describing. Is that your method, to make assertions about things which are hidden from me (because you are not showing them to me), and to ask me to accept these assertions as true, carte blanche, while withholding the means for me to confirm their truth or falsity, by hiding the things you are talking about from me?

    Here's what I can say about your method, from what you've provided for me. You claim to have a philosophical method which is distinct from others, because it uses description rather than theory. However, I am skeptical, because I see all description as theory-laden. So I see your claim of description rather than theory as just an attempt to avert the need for justification. You might say it's a description rather than a theory, therefore there is no need for justification, but I would say that I want the criteria (definitions) which justify your use of words in your description. Do you see what I mean? 'A cup is on the table' (a description) requires criteria for the use of the words to be understood, judged, or agreed to.

    The next thing I see about your method is that you claim to be able to say something about intention through the description of our shared lives. And you seem to believe that since it is descriptive, it is not speculative like other metaphysics. However, this is where I find a vicious circle which can only be escaped through speculation. You claim that we can conclude something about intention through describing what we mean by words like "mistake" and "accident", and describing the differences between what is meant by them. But I think we need to know the intention to know what was meant. So we have the vicious circle whereby we cannot say what was meant by the word without knowing the intention, but we are wanting to say something about the intention by knowing what was meant. So we are actually completely excluded from describing intention, and all we can do is speculate.

    I have also tried to say that grammar is just a description of the ways in which our lives have come together to create these distinctions and terms of judgment and identity and possibility for each concept.Antony Nickles

    So here is your definition of "grammar". But if I replace "grammar" in your usage, with this definition, it often makes little or no sense. Look, here's an example: "The whole point of Witt's PI in describing our shared grammar is to show that words don't always 'point' to a 'thing'."

    So you are talking about a "shared grammar" here. And "grammar" means a description of how our lives have come together. But my description is completely different from yours. That's the issue we're having in this thread. We come together from different backgrounds, we have experienced different things, therefore we necessarily have different descriptions. If "grammar" is a description of the ways we have come together, as you have defined it, then it makes no sense to speak of a "shared grammar" because we've each come from different directions with different descriptions, therefore different grammars.

    Nevertheless, I have repeatedly tried to explain how grammar is just a description of the ways our lives have embodied the things that grammar sees.Antony Nickles

    Now your use of grammar here makes even less sense. Grammar is a description. Yet grammar sees things? A description is of things, and it may be of things which are seen. Through my familiar interpretation of "grammar", I want to say that a person sees things and describes them through the use of grammar. You want me to interpret grammar as the description itself. So why do you say "grammar sees things", as if the person is seeing and describing through the interpretive tool of grammar?

    Obviously we can compare a concept's grammar to others--grammar is like context in that what we focus on is dictated by what we would like/need to investigate it for. So it is helpful to categorize groups of concepts together, as Austin does. But he also gets into the differences in types of excuses in order to show the ways our actions are considered moral or can be qualified to avoid our responsibility.Antony Nickles

    Under your definition of "grammar", I don't see how a concept could have a grammar. Grammar is a description of the possibility for a concept. How do we make the jump from describing the possibility for a concept (grammar), to the claim that an actual concept has a grammar? Or, are all concepts just "possible concepts", because that is how they are described by "grammar", such that a "concept's grammar" implies the possibility for a concept?

    I thought I have made clear that Grammar may not be present (conscious), but what it describes is inherent in the concept (the life in it).Antony Nickles

    This use of "Grammar" makes no sense to me. How is the thing described inherent in the concept? Don't you recognize a separation between the thing described, and the description?

    It is not just made up rules or some theory about words; it is a description of ways in which intention works, what matters to us, what counts for it, the reasoning it has, and the ways it falls apart.Antony Nickles

    This is very clearly incorrect. It is a theory about the way intention works; it is not a description of the way that intention works. Actions, which are what is described, as "the ways in which our lives have come together to create these distinctions and terms of judgment and identity and possibility for each concept", are the results of intention, the effects. When you proceed to speculate about the cause of those actions, intention, it is theorizing.

    Furthermore, you have not closed the gap between the possibility for concepts, and the actual existence of concepts. This is another indication that you have a speculative theory rather than a description. Grammar only goes as far as describing the possibility for concepts, and anything you might say about the actual existence of concepts is theoretical and speculative.

    Studying grammar shows us the way mistakes work--how they are identified, how corrected, the responsibility I have to what I say.Antony Nickles

    This is incorrect as well. Studying grammar is to study a description. This can show the effects of a mistake, but it cannot show the way that mistakes work. Nor can it show how a mistake might be averted or corrected. Principles other than descriptive ones must be applied for that: theoretical principles. To show the way a mistake works is to show the cause of a mistake. That is what I described in my last post, "the way mistakes work". But your study of grammar has no approach to this, because you have no way to apprehend the actual conception, which is where the mistake inheres. You only describe the possibility of conception.

    Now here we are way off into a picture of communication that Witt spends half of PI trying to unravel. Yes, grammar is public. It is both within the expression and in our lives because those are woven together. We do not "have" or control grammar or meaning (use it any way we like) anymore than we "have" or control the ways we share our lives. An apology is an apology despite what you want it to be. A concept has different senses (options, possibilities) in which it can be used, but "sense" is not some quality an expression has which is applied by intention or "meaning" (or "thought"). We do not "apply" grammar. Our expressions use concepts which are embed in the shared lives we already have.Antony Nickles

    This use of "grammar" is completely inconsistent with your definition. Grammar is a description. It makes no sense to say that we have no control over a description. A description is either your description, mine, or someone else's, and we are completely free to choose our words as we see fit. A description only becomes public if we understand, and agree on it, and this requires interpretation, explanation, justification, etc.. At each step we have control.

    When you say "Our expressions use concepts which are embed in the shared lives we already have", I think you forget that the "shared lives we already have" is not grammar. Grammar is a description of this shared life. We may not have control over the sharing of our lives, which we've already had, but we do have control over our descriptions of it, and consequently we get some control over the way we share our lives in the future.

    Grammar is forgotten (not hiding, or "in" an expression, readily viewable) because we just handle things in our lives--thus philosophy's images of turning (in caves), and reflecting, and looking back, remembering, etc. Thus we have to see it indirectly in the kinds of things we say when we talk of a concept. Again, we do not use grammar (directly) to clear up misunderstandings ("interpret words" plays into the picture I describe above). "Misunderstanding" has grammar as well, and so ordinary ways in which it is handled.Antony Nickles

    None of this makes any sense to me if I adhere to your definition of grammar.

    Well, again, the picture of "intention" (as casually or ever-present) is getting in the way, as well as the idea that grammar is somehow a justification, reason, or conscious necessity. That being said, this is a good thing to bring up. We do not "have" to follow the ways our lives come together. We can act randomly, or even act rationally (or emotionally) but revolutionarily (against our concepts or taking them into new contexts). We can act flippantly, playfully, experimentally, etc. All of those things are specifically possible because of the grammar for each concept being specific to it and flexible in those ways (even those concepts).Antony Nickles

    If grammar is just a description, then it is not "the ways our lives come together" but a description of that. We need not follow any such description; we might even reject a description, on a judgement of inaccuracy, after reference to criteria. A description is really nothing more than a theory about the thing being described.

    Furthermore, if you ascribe to human beings the capacity to act freely, randomly, etc., in a way which does not follow the description (grammar), then you are actually admitting that the description has inaccuracies. If philosophy is an activity which seeks truth and understanding, and this means that we are seeking the highest standards of knowledge possible, then why would we settle on a method which admittedly accepts inaccuracies? I can see that for practical purposes we accept lower standards, as Aristotle describes in his Nicomachean Ethics, but here we are looking at the theory which will give us understanding of intention. Is this the basis of your distinguishing OLP from other philosophies? Is it not seeking a method toward truth and understanding (as other philosophies are), but rather a practical method for activities in the world? If this is the case, then how does describing language activity, and things like what we mean by the use of particular words, provide a better starting point, as an approach to intention, than moral philosophy does? What I see is a vicious circle which locks us out from any true understanding of intention, while moral philosophy seeks to understand intention directly.

    I will just point out, as I did above with Joshs, that Witt and Austin and Cavell (and Emerson) see our relationship with our expressions as giving ourselves over to them, choosing (if that is the case) to express, and then that expression speaks for us, but also reveals us (in its having been expressed). We say it, then we are responsible for it (which we can shirk), so answerable to the other to make it intelligible, even why it was meaningful to say it, here, now; describe, in what matters for this concept, what matters to me, to make clear to you.Antony Nickles

    This is a fine example of speculation. But you present such speculations as descriptions, in an attempt to separate your philosophical method from others, as if it is somehow superior because you assume justification is not required for descriptions.

    If I were going to tell a story, it would start that we learned language and our human lives together.Antony Nickles

    This is a false starting point, a false premise, a faulty description. The fact is that human beings are spread out in space, all over the earth, and in time, through thousands of years. When we learn language we learn it from a very few people whom we are close to, we are "together" only with that tiny group of people. The degree to which "our human lives are together" is extremely minimal.

    Therefore, there is a fundamental separation between people which makes it impossible to speak about "the Grammar of language" in general, or "the language-game" in general. There are distinct grammars and distinct language-games, and the assumption that you can aggregate them in composition to make one artificial Grammar, or language-game, is a false assumption; the fundamental differences are too great. This practice of aggregation is just a misguided attempt to facilitate your theory.

    This false description gives you a very skewed perspective. Instead of recognizing the individual differences between the individual perspectives of individual people, differences which need to be worked out through establishing consistency in interpretative, explanatory, and justificatory practices, through the application of rules and criteria, you simply take all this for granted, as a starting point. However, this need for establishing consistency is ever present, and on-going, as is the separation between individuals ever present and on-going. Therefore we cannot take it for granted that this consistency has been already established, some time in the ancient past, and that some sort of togetherness maintains this consistency. That togetherness is a false premise, easily disproven by an accurate description.
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    1) you insist on your terms and your framework (and your criteria for judging what I say) instead of working to see my terms and how what I am saying requires you to see everything in a new way (walking in my shoes is exactly the method of OLP--trying to see what I see).Antony Nickles

    Actually I've been working very hard to demonstrate to you that I do not understand what you are saying, because your use of words is not what I am accustomed to, or familiar with. I am asking you to explain, or define, some terms which in your usage have created obstacles, roadblocks to my understanding. The reason for defining words and insisting on criteria is, as I've said, to ensure an adequate understanding. "Walking in my shoes" is exactly the type of thing which requires criteria, rules, and definitions. Agreeing with each other does not require criteria, rules, etc. However, if words are used in strange or ambiguous ways, we might agree with each other, then proceed on our respective ways, assuming to have understood each other, when we really misunderstand, and that would be a mistake.

    So I requested that you define "ordinary criteria" in a way which I could understand, and you couldn't, or didn't. Instead of defining it, or explaining what you could possibly mean by it, you eventually suggested exchanging it with "grammar". Now I have the same problem with your use of "grammar". I can't make sense of what you are trying to say with it. If you want me to "walk in your shoes", you need to provide me with what is necessary to understand your point of view. Clearly I do not have the same background as you, so you cannot simply use words in ways which are foreign to me and expect that this will allow me into your perspective.

    At this point I would say that we do not have a clear understanding between us, as to what "grammar" refers to. I will adhere to a familiar understanding, that grammar refers to some sort of rules which we follow, and I will attempt to demonstrate how it makes sense to interpret "grammar" in this way. If you can show me another way to interpret "grammar" which makes sense to you, then I will attempt to follow you.

    2) we are getting side-tracked on every little statement I make if it doesn't fit what you believe even if it isn't part of my trying to explain a different method of philosophy, instead of having to justify every little thing.Antony Nickles

    If you want to show me a method of philosophy, then show me a method of philosophy, but to use words in ways which are illogical, hypocritical, and even contradictory, from my philosophical perspective, does not appear as a method of philosophy, it's a method of sophistry, better known as deception.

    4) the points I have made above or to other participants are getting forgotten or lost and so I am having to repeat myself.Antony Nickles

    If I tell you that I don't understand how you could possibly be using "ordinary criteria", and request that you use different words to explain or describe to me what it is that you are referring to with these words, then repeating yourself is not the answer.

    Now, I think we've made some very real progress with your switch from "ordinary criteria" to "grammar", but I still don't see the thing which you are referring to with this word.

    This is the problem I am having. Your words are referring to some type of thing or things which you assume exists somewhere: "ordinary criteria", "grammar of a mistake". But you are not describing this thing or things, and when you point toward where the thing ought to be, I do not see it, nor do I see any logical possibility that the thing referred to, through my normal, familiar use of those words, could even be there. Therefore you need to provide me with a better description of what you are referring to, so that I might understand your use of those words.

    I do think you may be taking "grammar" too literally (as regularly defined), but I'm not sure this is all wrong. (Though Witt does differentiate Grammar from "rules" in many different ways (we don't "follow" Grammar), but that is a rabbit hole.) Grammar does show the boundaries of what would be considered a "correct" or apt apology (but this type of criteria does not work for, say, intending--though we may find the Grammar of what is or is not part of intending). And there can be different "uses" (senses) of a concept (like: I know, above), and Grammar does differentiate between these. But the phrase "rules of correct usage" makes it seem like we are looking for something to ensure "usage"; maybe, of meaning, or communication, etc. that would be "correct" as in justified or certain.Antony Nickles

    The reason why I was looking at grammar as "rules of correct usage" is that you replaced "criteria" with "grammar", and criteria are, very explicitly, principles for judgement. In language use we have two very distinct types of judgement: choosing one's words, and interpreting the words of others. So if grammar shows some boundaries as to what is correct in language use, and it doesn't refer to rules of correct usage, then can I conclude that it refers to rules of correct interpretation?

    But how could these two sets of rules be fundamentally different? If the boundaries for choosing words were different from the boundaries for interpreting words, wouldn't this lead to misunderstanding? Where else could you possibly be pointing with "grammar", and "criteria", other than to rules of usage? I just don't see it. That's how the words are normally used, now you want to say that you are pointing to something different than this, but what could that different thing possibly be?

    In any event, moving on, the focus is the "concept" of a mistake--we could call it the "practice" of a mistake (though that has confusing implications). And looking at what we imply when we say "I made a mistake" is to find differences that make it distinct (in our lives) from, say, an accident (this differentiation is "part of the Grammer" as Witt says). If I can say "what did you intend to do there?" we learn that part of the Grammar of intention is that it is not always present--you do not intend anything when you have an accident, or (usually) if you do something in the ordinary course. These are, in a sense, categorical claims, procedural claims, claims of distinctions, etc. So it is a different level of investigation than just how language is justified--these aren't rules about language or communication, they are what matters and counts in our lives--we are simply turning to look at them.Antony Nickles

    Sorry to have to inform you of this, Antony, but this does nothing for me. It appears so confused and full of mistakes.

    First, as you say, to 'practice a mistake' has very confusing implications. No one practices a mistake. Couldn't you have found a better way to say what you wanted here? I assume you are asking 'what does it mean to make a mistake?'.

    But why do we have to distinguish "mistake" from "accident" to do this? Why must we "find differences"? And if a mistake is a type of accident, then "accident" will be a descriptive term used, like "animal" is a descriptive term used for describing "human being". In describing a thing we do not assume to have to distinguish that thing from other things; we do the exact opposite: compare it to others, looking for similarities, to establish its type. The differences are what is obvious to us; we don't have to find them, as they normally jump out at us. To describe the thing we look for points of similarity, and make comparisons.

    But you really lose me with "Grammar of intention". What is the point of "Grammar" here? It appears to serve no purpose but to distract, as if you are talking about Grammar when you are really talking about intention. Clearly you are talking about intention rather than grammar, as you proceed with "you do not intend anything when you have an accident". However, this statement is itself mistaken. "Doing something" always involves intention, so even when there's a mistake or an accident there is still something intended. So a mistake, or an accident, is an unintended feature of an intentional act. Therefore the fact that there was an accident is insufficient for the claim that intention was not present.

    We might, however, use this fact, the occurrence of a mistake, as evidence that Grammar wasn't present. Let's do that instead, shall we? Now we have evidence of intention without grammar. And we appear to have no principle whereby grammar could be brought into intention. So "the Grammar of intention" is a misnomer, a mistaken use of words which we need to reject. As you ought to be able to see, grammar is not inherent to intention, but extrinsic to it.

    Yes, this got all twisted up. OLP is not "saying" something is a "mistake". It is making a claim to the conditions of/for a mistake--you can call that "judging" the example, but the point is to see the grammatical claim. Now, yes, another philosopher might hear the grammatical claim and say, "no, you haven't got that right." At which point they might say "The context would be different", or "the implication does not have that force." (This happens between Cavell and Ryle). But the point is you have the means and grist with which to have a discussion. I was trying to say this is not the normal conversation that people would have to figure out if it was a mistake or an accident--people in a sense "assume" (though this is misleading) the things that philosophers would call Grammar because mistakes are part of our lives. We are not trying to justify whether it was one or the other, we are discerning what makes it so by investigating what we mean (imply) when we talk about it.Antony Nickles

    Do you see the point I am making? Grammar is not any part of a mistake. Grammar is brought into existence intentionally, to serve a purpose, and that purpose is to avoid mistakes, to exclude the possibility of mistakes. The "conditions of/for a mistake" are the absence of appropriate grammar. If the appropriate grammar was there, there would not have been a mistake. So we can see that since "mistakes are part of our lives", so is the absence of grammar.

    Therefore, we can proceed toward an examination of our actions, and determine which intentional actions are lacking in grammar, and therefore prone to mistake. As I proposed earlier, these are the customary, familiar, habitual actions. It is when we proceed in the customary, habitual ways, without adequately assessing the risks of the particular circumstances and applying the appropriate rules (grammar in this case), that mistake is most probable.

    These are all different senses of when we say "I mean" or you ask "Did you mean?" Each will have its own grammar. We do not get someone's meaning by, as Witt will say, "guessing thoughts".Antony Nickles

    I think you are misusing "grammar" here, or at least using it in a way which doesn't make any sense to me. It is not the phrase itself which has a grammar; it is the people using the phrase who have grammar. It really doesn't make any sense to say that there is grammar within the spoken words. How would we locate this grammar in our attempts to interpret the words? As I explained above, we apply grammar. When the person speaking is applying a different grammar from the person interpreting, then we have here another type of mistake, again due to an inadequacy of grammar. But in this case there is an inconsistency in grammar, and this will lead to misunderstanding, which is also a type of mistake due to an absence, an absence of consistency.

    Just two things: calling a speech act grammatically correct (not of course correct in regular grammar) does nothing to ensure understanding. Second, one might choose their words very carefully (as is necessary in philosophy as opposed to regular life), and it might be the other is not doing their part in understanding, but rather just insisting on justification or explanation on their terms.Antony Nickles

    Right, calling a speech act "grammatically correct" is done from the point of view of a particular grammar. If my grammar is different from your grammar, then I will still misunderstand you despite your assertion of grammatical correctness. This is why, in order to avoid mistakes, we need the second condition: the first being grammar, the second being consistency in the grammar.

    If you follow me so far, I can tell you about a third condition, and this one is the most difficult to understand. The third condition is the willingness to follow, or adhere to, the grammar. As we are free-willing beings, there is some tendency for us to drift off into some sort of random actions, or trial-and-error situations. Here again we would have no grammar in our intentions.

    I would put it that there is Grammar for each "class" or "type" of action (I'm not sure I would say "unique" because they overlap, etc. (as if family resemblances); and one might get the idea we are talking about each individual act.) So each concept, e.g., --"meaning", "knowing", "understanding"--all have associated "grammar" (multiple, and extendable, as much as our lives). Now we are tripping up on "incident" again as well--some incidents are not (grammatically) distinct from each other; we will only come up against grammar when necessary, and, even then, the discussion may not be "about" grammar (just along its lines as it were). Maybe it helps to point out that we are not "following" grammar, that we are just meaning, knowing, understanding, having accidents, making mistakes.Antony Nickles

    I'm almost happy with this use of "grammar", except that I will insist that grammar must be something that we are following, like instructions, rules. It makes no sense to say that the grammar is within the words "meaning", "knowing", "understanding". Where could it possibly be hiding? Instead, we follow a grammar when using the words (speaking), and when interpreting the words. Otherwise we have no way to understand the nature of misunderstanding. If the grammar were in the spoken words, then either we'd perceive it (and understand), or not. To allow for the possibility of misunderstanding, we allow that the words are apprehended, but improperly interpreted. Then what does "improperly interpreted" mean other than not applying the correct grammar? So we must allow that "grammar" is the rules we follow in choosing words and interpreting words.

    Are you consciously aware of the grammatical rules as you speak or write every sentence? Could you name the grammatical rules for all uses of a given word (without looking it up, of course)? Are you aware of the grammatical rules and meanings/uses of all words in every English-speaking location?Luke

    I have been insistent with Antony, that we must allow for the reality that much speaking is done without grammatical rules. The reason for this insistence is to be able to account for the reality of mistaken understanding, misunderstanding. If we say that misunderstanding consists of instances when the speaker is following a different grammar from the interpreter, then we have to account for the possibility of this difference. This would mean that a person's grammar is developed individually from another person's, through one's social interactions for example. But this implies that a person goes into the social interactions, in the original condition (as a child), without grammar. And, the person must still be capable of communicating, in that original condition, in order to learn the grammar, without having any grammar. Therefore grammar is not a fundamental aspect of communication.
  • The self
    So once you complete that turn in the turning point, and you observe the sensation, it is just another event no different than any other event as an event.Constance

    It's not "an event" though, that's a misrepresentation, and you ought to be able to see this. There is a huge multitude of things going on all required for me to feel pain. You cannot reduce pain to "an event".
  • The world of Causes
    Logic is experienced in the first person. Every thought is an experience. So logic prevails, but yes, it is a conundrum isn't it? The third person perspective cannot include the first person experiential element, and is different in time and space, so it is logically dissonant, it seems. I'm not sure what to think really. It is a big issue, and one I currently cannot answer.

    I don't really want to debate the issue, I just wanted to compare notes. If you find an answer to this first person vs third person conundrum please let me know.
    Pop

    Well, if we want to get really technical, there is no such thing as the third person perspective. We're all somewhat independent, having our own personal perspectives. Therefore we all have a first person perspective and that's all. But we always seem to have a desire to empathize, to put ourselves in the shoes of the other, to determine why the other is different and why we have problems understanding each other. So we've developed logical rules which enable us to better "compare notes", because the rules are meant to be a standard not specific to any one of us. However, I am never quite able to completely understand you, and you won't totally understand me, so we will never properly identify with each other. I believe, therefore, that we create an imaginary "third person", which is neither you nor I, but some other, "they", who is supposed to be some sort of intermediary or independent third party.
  • The self
    Here is the rub: Once the facts have been suspended, and all that remains is the most "local" fact, the pain itself, right there at the, if you will, Cartesian center of experience, we do the final reduction and consider this event as a qualia, the unutterable presence of the pain.Constance

    Sorry Constance, I don't buy it. I don't see getting to "the pain itself" as the final reduction. It's just a turning point, of going from the external world of what you call "facts", to the internal world of feelings. So there is a whole new world waiting for our analysis in the world of feelings. And I don't buy the notion that this is a world we cannot speak about, because we commonly talk about our feelings. It just requires a completely different way of talking from the way that we talk about the external world of "facts".
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    The logical possibility that heliocentrism could have come to be without the antecedent geocentrism is irrelevant in the face of fact that the record shows Copernicus developed the former because he knew something about the later, sufficient to justify changing it.Mww

    I think it's relevant, because you need to show a causal relationship to support your claim. Your claim was that new knowledge builds on the old knowledge, so we need a causal account of how the old knowledge led to the new, not just circumstantial evidence. That Copernicus knew the geocentric system is clearly not the cause of him developing the heliocentric system, because millions of people already knew it as well.

    But clearly the old conceptual structure was rejected, lock, stock and barrel, and replaced by the new. It wasn't a case of "changing" geocentrism, as you seem to imply, it was a case of rejecting and replacing it. If knowledge truly advanced simply by building on older knowledge, the heliocentric system would not ever have come about, because this description doesn't allow for dismissing old knowledge as wrong. This is a problem epistemologists have: how can knowledge be wrong? If it's wrong, it can't be knowledge. But if we do not know it's wrong, we'll call it knowledge. So what is 'real' knowledge, the stuff we call knowledge, which might be wrong, or the stuff that we want knowledge to be, which can't be wrong?

    So we can logically say the existence of the one is entirely dependent on the other, given the historical facts.Mww

    No, we cannot make that conclusion. I think you are confusing "sufficient" with "necessary", and you haven't even demonstrated geocentrism to be sufficient. For one to be "dependent on" the other means that the other is necessary. In this case, you have merely asserted that geocentrism is sufficient, but you haven't shown it to be necessary. So even though the one is prior to the other in time, you haven't shown the posterior to be dependent on the prior.

    Here's an example to consider. Someone tells me my hair is too long. The next day I get a haircut. You might argue that the person telling me my hair is too long is sufficient to cause me to get my hair cut, so in this historical context it is the cause. But that would be faulty logic, because a multitude of other things might be the real reason; I might have already been planning the haircut. So you cannot conclude "one is entirely dependent on the other, given the historical facts", because we never know all the historical facts. History is open to interpretation.

    Minor point, but no: laws are built on principles, rules are built on laws, suppositions are built on rules, but principles are not built on each other. If they were, each principle would be contingent, hence any law built on a contingent principle, is not properly a law.Mww

    I can't follow your use of terms, but I will ask at the end of this post for an explanation of "principle".

    Agreed, almost. We can account for principles simply from the thought of them, but they are not thereby empirically proven. It follows that our empirical knowledge, when based on them, is not so much flawed, as always uncertain. And it really doesn’t change or help anything, to call uncertainty a flaw, even if in the strictest possible technical sense, it is.Mww

    When we're talking about knowledge, clearly uncertainty is a flaw.

    It must be absolutely true a priori principles are real, because we cannot deny having thought them,Mww

    If we thought up the so-called a priori principles, and we are sentient beings, then how could these principles be free from the influence of sense experience, to be truly a priori?

    It could, however, also be said the principles at the base of the structure, being around the longest, are the most powerful, because they have been used to evolve knowledge from the primitive.Mww

    Yes, definitely the principles at the base are the most powerful, being the most useful. The problem being that useful does not equate with true. We can see that with the geocentric system. The principles they used were powerful and useful (Thales apparently predicted a solar eclipse), but they were not true. We have a trend in modern science, which is disturbing to me, and that is the trend to replace the search for truth with the search for useful principles. So scientists focus on their capacity for making predictions rather than trying to find the true nature of things.

    If I were to analyze the idea to a finer point, I might say premises support what knowledge is about, while principles base the structure of knowledge itself. In this way, it is explained why some fundamental principles have lasted so long and some supporting premises fall by the epistemological wayside.Mww

    Can you explain to me how you would differentiate between a principle and a premise?

    It would help if you could give an example of a mistake which also is, as opposed to merely is causing, an accident.Janus

    So if I'm walking for example, and there's an object in my path which I step on. My stepping on it is an accident, as the unforeseen, unintentional event. Stepping on the object is also my mistake (wrong action).


    I have long thought Wittgenstein thinks of grammar as being equivalent to one sense of logic. So, the grammatical structure of statements reflects the logical structure of perception. Also, the logical structure of conception reflects the logical structure of perception. But that is probably more Tractatus than Investigations. I have tried to read PI but have never found it illuminating enough to sustain much interest in it.Janus

    The "logical structure of perception" is what I am arguing against. I think it's nonsense to say that perception uses logic. There is a logical structure to conception, because conception is done through the application of logic. But there are many notions, ideas, and beliefs which do not have a logical structure, some are even illogical, and therefore cannot be said to be conceptual. The structure of these ideas and beliefs is closer to the structure of perception than to that of conception, and cannot be said to be logical.
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    The problem I see here is a backward analysis. The processes of formal logic came into existence following the coming into existence of language. The application of rules, grammar, criteria, etc., was developed in an attempt to make language use logical, so that language could provide better understanding. Now, when we look back at natural language, basic, common, ordinary language, in analysis, we want to apply these principles, grammar, criteria, and rules, which were developed for logical language, but they do not fit in describing natural language. Instead of recognizing that these descriptive terms of logical languages do not fit natural languages, because they describe features exclusive to specialized languages which came into existence after natural language, some philosophers of language will go through all sorts of contortions in an attempt to make them fit. Instead, we ought to just recognize that rules, criteria, and grammar are not necessary features of natural language.
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    I read him as indicating instead that a "surveyable overview" of all grammar is difficult, if not practically impossible, since our grammar is "deficient in surveyability":Luke

    How could there be grammar which is not surveyable? The rules of grammar must be observable if they are to be followed. It makes no sense to say that someone is obeying grammatical rules which they have not found, located, or identified. This is the same problem I brought to Antony's attention concerning his use of "criteria". Antony seemed to claim that we use "ordinary criteria", but we don't know what those criteria are. How does it make any sense to say that we are using some sort of grammar, criteria, or rules in our proceeding, but that these rules or principles being applied are not present to the conscious mind which is proceeding with those actions? Mww suggested a similar thing, that we could proceed in logic with unconscious premises. How does that qualify as logic, to proceed with unstated premises? And how does this qualify as "grammar", if there are rules of grammar which we are supposed to be obeying, but we cannot even observe them?
  • The world of Causes
    Whilst the third person perspective is an invaluable conceptual tool, I tend to question to what extent is it real given nobody can ever experience that perspective? The experienced perspective is the first person perspective, and concerning time is quite a different beast.Pop

    The issue here is the question of how much of reality is not experienced. We know there is a huge portion of reality which is unexperienced by us. Quantum physics tells us about fundamental particles which are not experienced by us, and cosmology tells us about things like dark matter, dark energy, and spatial expansion. I believe that these are just the tip of the iceberg, and the world of the unexperienced is actually much more extensive than what is experienced.

    Assume that time is passing, and there is reality on both sides of the present. The human experience gives us a very narrow window on that reality. Our "present", being the things we can experience, is a very narrow range. Maybe we experience everything between one one-hundredth of a second and one tenth of a second; that's just a guess, but it's probably actually much narrower. Anything faster or slower than that, we cannot experience, but we understand from memory, logic, and reasoning.

    Who's point of view is valid? Is it valid to apply a third person perspective to a first person perspective of time? I don't think so. That would be saying they are in my time and space, which they are not - they have their own time and space. It would seem that only the first person perspective is a valid view in this case. Thus Einstein's conclusion - relativity. Or time and space are relative to the observer.Pop

    If this is your assumption about what is valid, then all instances of logic being applied toward understanding things which are not actually experienced would be invalid. Of course that's incorrect, because if the only valid knowledge were of things which are directly experienced, first person, then nothing obtained from the application of logic would be valid. So there would be no point in using logic, because it would all be invalid for going outside experience.

    In reality we apply logic in an attempt to get outside the first person perspective, to develop a more objective outlook which is not tainted by the constraints of the subjective human experience. However, the premises which logic proceeds from are derived from first person experience, so it is very important to find sound premises, the most widely applicable ones, in an attempt to ensure that they are as little tainted as possible. So when we look at time, we can see that there is a substantial difference between past and future, and this difference influences every aspect of our lives, and is therefore very widely applicable. The idea that there is a substantial difference between past and future makes a very sound premise to proceed logically from, towards understanding what is not experienced by us.

    From the first person perspective experience occurs in the present moment, where the future is a probabilistic abyss. There is no absolute certainty that it will occur. It has been our experience in the past that it will continue to occur, but there is a non zero probability that it wont ( particularly in covid times ). So that there is a future is an assumption, in my view.Pop

    At the fundamental level, there are only assumptions. This is because we cannot justify every principle, or else there would be an infinite regress, or a vicious circle. So the fundamental principles can only be validated by experience. And how we relate to, or explain, our experience takes on the character of assumptions. That we remember the past and anticipate the future are fundamental aspects of our experience. That the past and future are real is therefore an assumption. But since remembering the past and anticipating the future are the most basic aspects of our experience, to deny that we can conclude from them that there is a real past and future would be equivalent to asserting that our experience cannot tell us anything about reality.

    Your causation is one of determinism plus free will ( compatibilism ). I take my que from systems such as covid19 and see causation as determinism plus a slight element of randomness, such that there will be a main causal thrust and then some variation to it, such that when the multiplicity of causal elements are combined the picture becomes quite random indeed. This randomness acting upon the multiplicity of causal elements causes emergent properties to come into the future. This makes it probabilistic and uncertain.Pop

    The difference between my perspective and yours, then, is that free will is fundamentally intelligible in relation to causation, while randomness is not. So for those aspects of reality which are outside of our experience, and presently outside of our capacity to understand with logic, you assume that they are to be accounted for by randomness, which indicates that you believe they are fundamentally unintelligible in terms of causation. I believe that we are just not applying the proper premises in our logic. And if we accounted for the reality of free will in our premises concerning the nature of time, things which appear to you as causally random would start to look far more intelligible.
  • The self
    Here, the claim is that the flame on your finger carries a non empirical, non discursive or irreducible intuition of a metavalue, i.e., an ethical badness.Constance

    Your experiment is self-refuting. If I were to put a match to my finger, me doing this would demonstrate that I do not believe it to be an ethical badness, and so it is not a metavalue.

    I must admit though, I really don't know what you mean by "metavalue".
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    Btw Antony, this is what I perceived you were trying to do with "criteria", use the word in a way which was outside of the concept's grammar. If we allow, "there's nothing wrong with that", then we open a big can of worms. If we want to enforce the grammar of concepts, then the P in OLP stands for Police.
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    In #90 the statements we say about concepts show us their possibilities; these possibilities are part of its Grammar--this concept can do that and this, but if it tries to do this other, than it is no longer that concept. When does a game just become play? The concept of knowledge has different possibilities (senses, options) and each is distinguished by its Grammar.Antony Nickles

    So here's the dilemma for you Antony. Can the word "grammar" be successfully used in the way that Wittgenstein demonstrates, which is to go outside of the concept's grammar? If so, then it's not true that a concept's grammar is what determines its possibilities.
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    Apart from the fact that 'mistake' is also a verb which 'accident' is not, it is easy to see that there are a significantly different constellation of associated ideas in each case. There is also some overlap to be sure. The two terms are far from being synonymous.Janus

    Of course they're not synonymous; I don't think anyone suggested that. The issue was how to distinguish a mistake from an accident in order to ensure that the correct word is used to describe the situation. And, as I demonstrated, sometimes a mistake is also an accident, and in those instances the accident would also be a mistake. What makes one of those a better choice of words in these instances?

    I have to be honest here: call me obtuse, but I have to say I don't have any idea what Wittgenstein is getting at in those passages from PI. Can it be explained in plain language?Janus

    Wittgenstein's use of "grammar", I find, is very elusive. I think he wants the word to do what it cannot possibly do, and that of course is a problem.

    In #90 the statements we say about concepts show us their possibilities; these possibilities are part of its Grammar--this concept can do that and this, but if it tries to do this other, than it is no longer that concept.Antony Nickles

    Wittgenstein, taking hypocrisy to a whole new level, trying to make the word "grammar", which he says refers to the limitations of what a word can do, do what it cannot possibly do.
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    Except you aren’t at the fundamental level, obviously, because my assertion presupposes knowledge already acquired.Mww

    OK, sorry I misunderstood. But now that I think I understand, I don't see the relevance of what you said. Of course we cannot examine the coming into being of knowledge without knowledge having already come into being, but how is that point relevant to anything?

    Your rejoinder is even more absurd empirically, considering the reality that, e.g., heliocentrism could never have come to be known, if the standing knowledge represented by geocentrism wasn’t being first examined by Aristarchus. Just because Ptolemy turned out to be wrong doesn’t take away from his knowledge.Mww

    You have no logical association here. Let's say that geocentrism was examined and demonstrated to be incorrect. Then heliocentrism took its place. Heliocentrism is not based in geocentrism, nor does it require geocentrism to precede it. Heliocentrism is completely distinct, and not dependent on geocentrism at all, so it may have come into existence without geocentrism preceding it. Just because it didn't, and the one does follow the other in time, this does not prove a causal connection, so we cannot logically say that the existence of heliocentrism is dependent on the prior existence of geocentrism.

    Yet, that is exactly how science is done, and science is both the means and the ends of human empirical knowledge, so.....the asymptotic relation is glaringly obvious.Mww

    I agree very much that some knowledge depends on other knowledge, and that in many cases principles are built on existing principles. The problem is that characterizing knowledge in this way denies one the capacity for a complete understanding of knowledge. This is because the most fundamental principles are also a part of knowledge, and we cannot characterize them in this way, as built on other principles. If we characterize knowledge in this way, then all knowledge will be based on other knowledge, and therefore all knowledge will require some fundamental principles, not derived this way, at its base to support it. Since we cannot account for those fundamental principles, all of our knowledge of knowledge is fundamentally flawed.

    You can try to avoid this problem by positing a priori principles as the foundation, but I see this proposal as unacceptable. This is because the reality of a priori principles cannot be demonstrated, so they appear to me to be simply an assumption of convenience. If we cannot account for the fundamental principles, that's no problem, we just posit a priori principles and there you have it, problem solved.

    I’m not characterizing knowledge, but theorizing on its acquisition, which presupposes its character is already determined, as it must have been, in order to grant it is something possible to acquire by the means supposed for it.Mww

    Let me see if I can understand what you are saying here then. You are assuming that there is something called "knowledge" and since there is such a thing its character is already determined. Now you are theorizing as to how knowledge might have been acquired.

    It might just be that knowledge doesn’t even have a character, but it is a characterization of something else. Knowledge may be characterized as merely the condition of the intellect. But that still doesn’t indicate what knowledge is, but only what it does.Mww

    But now you are rejecting that assumption, saying that there might not even be such a thing as knowledge. I don't think you can have it both ways. That would just lead to ambiguous meaninglessness. If your premise is "there is knowledge", so you proceed to inquire into the acquisition of knowledge, and the conclusions you come to make you realize that the original premise "there is knowledge" is unsound, then shouldn't you reject that premise altogether, and start from something completely different?

    This is what I think is fundamental to knowledge. We start with premises which prove very useful, and since they are so useful they seem solid enough to support structures of knowledge. So we build huge structures on these fundamentals, which appear to be unshakably sound due to their usefulness, until we get really high, and far out on the branches, where the conclusions start to appear a little absurd. Why are the conclusions absurd? Well, it's not evident, and we can reexamine the logical process over and over again without finding the fault. Then we must face the only remaining possibility: the fundamental premises, the premises which we take for granted as absolutely unshakable, which support the entire structure, are not sound, and therefore the entire structure must come down and be rebuilt all over again from the bottom up.

    So I think that your example of heliocentrism and geocentrism is very relevant and can tell us a lot about this reality. Thousands of years ago, that the sun moves across the sky was a fundamental, unshakable premise, very useful for making clocks, calendars, and all sorts of representations. Back then, no one would even think otherwise. So huge structures of knowledge and prediction were built on this fundamental premise. However, it turned out that certain predictions weren't coming out quite right; there were some anomalies. When this happens, we can proceed by piling more and more principles onto the structure to deal with the anomalies, but this just makes the entire structure more and more unstable. Eventually, the whole structure had to come down, to start over again from scratch, to address the unsound premises which had at one time seemed so obviously true.

    An important thing to remember here, is that the principles at the base of the structure have been around for the longest. Although they are the ones taken for granted as the most obvious, and basic, they are actually the weakest ones, having been put into use the longest time ago when the state of knowledge was most primitive.
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    Now let's just clear up that the grammar of a mistake would not be used in making a decision as in beforehand (in most cases--except a deliberate appeal to them, like in a speech), but, as I believe you are saying, in a decision as to what happened, though usually indirectly. For example, "Did your finger slip? (Was it an accident?); or, "Why did you shoot the cow?" (Was this a mistake?)Antony Nickles

    Can you clear this up for me then? What is meant by "the grammar of a mistake"? If "grammar" concerns rules of correct usage, and a "mistake" is to do something incorrectly, then how could a mistake have grammar? Doesn't "grammar of a mistake" seem oxymoronic to you?

    Here, above, we learned that part of the grammar of an accident does not allow it to be considered beforehand (again, revealing something about intention), but that a mistake's grammar allows for mitigation, say, by concentration "Don't make a mistake".Antony Nickles

    See, replacing "criteria" with "grammar" does not resolve the issue. If it was a "mistake", then the grammar was not adhered to, and you cannot talk about "a mistake's grammar". If grammar was adhered to, you cannot call it a mistake. What is a mistake then, an act without grammar? But is an act without grammar necessarily a mistake? Wasn't there a time prior to grammar? Were those actions which brought grammar into existence mistaken actions?

    With OLP we are not "judging" (or justifying) the action, we are making a claim to our observation of the grammar (my claim, your concession to it), and the evidence is the example of what we say when we talk of accidents, or mistakes. So we are not doing the judging; people just make mistakes and accidents happen, and these are part of our lives, as is the deciding between them--which is what OLP looks at.Antony Nickles

    Can't you see, though, that this is a judgement in itself? To say that something is a "mistake", or that it is an "accident", implies that you have made that judgement. It's hypocritical to say to a person, "I'm not judging you", but then proceed to talk about what the person has done as a "mistake". So in reality, you really are judging, by referring to things as mistakes or accidents.

    But that's just the nature of language use. Choice of words implies judgement, and that's why we can categorize language use as an action. And we assume that this activity is carried out through some form of intention, like other human acts. The difficult aspect of language use is that it is an activity which is often carried on rapidly, in an habitual way, therefore with very little thought. So we're faced with the question of how intention plays a role in an activity carried out with very little thought, and with no immediate indications of intention even being present.

    I was overreacting here I think to the supposition I saw that every instance calls for the need to be "judged" ("must" be justified), which I took as tied to the assumption that everything is intended or decided, or needs to be, or even can be, judged (Witt here talks of the grammar of knowledge: that there can be none without the possibility of doubt). And especially, that, if we were to (could) always judge, it would be based on one picture of how we judge.Antony Nickles

    So the point here, is that every instance of saying something, as an instance of performing an action, implies that judgement has been made prior to the act, and it was an intentional act. And, we really know very little about how intention plays a role in this activity of saying things, because much of it is done in an habitual way which displays little if any of the features of intention. However, we relate to what has been said through "meaning" implying what was meant, or intended. Therefore there is a serious gap here, a hole in our knowledge. We assume to know what was meant or intended, by an act in which intention is barely evident. So we turn to something completely other than the speaker's intention to justify our interpretations.

    The hole, or gap is only closed by skepticism. Skepticism is to recognize that there is the possibility that I misunderstand what was meant due to a deficient method of interpretation. You say for instance, "grammar of a mistake", I recognize that I might very easily misunderstand what you mean by this, so I question you in a skeptical way. Now, we'll see what comes out of this, but the way I see it, is that very often on this forum, people cannot explain what they mean when questioned about a phrase they have used. This fact provides another piece of evidence. Not only do people appear to be talking away habitually, without thought or intention entering into what they are saying, but even when questioned about what they mean by what they have said, sometimes they cannot even determine what they themselves intended. The evidence therefore, is that there are speech acts with very little if any intention, thus very little meaning, yet they appear to be correct grammatically.

    What OLP is doing is looking at Grammar to: 1) show that philosophy's preoccupation with a picture where there is one explanation (for speech, say) is confused by our desire for certainty; and 2) to learn something about, e.g., intention by looking at the grammar of actions which delineate them from each other (here, see Austin, ad infinitum) Banno.Antony Nickles

    This I believe is a misrepresentation of philosophy. It is not preoccupied with this 'one picture', or 'one explanation'. Take Plato's dialectical method for example. Each dialogue takes a term, like love, courage, friendship, knowledge, or justice, and investigates the various different ways that the word is used. The implication is that if there were an ideal, the ideal would validate the correct definition, and therefore the correct use of the term. It is only in the more logic-based fields, mathematics and science for example, which assume a definition as a starting point, taking an ideal as a premise, that the 'one explanation' scenario is paramount. A philosopher might appear, in skepticism, preoccupied with the question of what validates that particular explanation (definition), the one employed by the mathematician as the ideal.

    So, I think we are onto something to say OLP is not in the business of justification--we would be seeing what counts (what matters to us)--the grammar--to show us about intention, evidence, judging, decisions, etc., starting with the basic goal of OLP initially, which was to say judging and evidence--justification--works in different ways depending on the concept and even the context; that not everything is about certainty, universality, etc., but we can still have rationality and logic and truth value in other ways, and in cases philosophy thought we could not, e.g., what it is to judge and what counts as evidence, in: the problem of the other, aesthetics, moral moments, types of knowledge, and other philosophical concerns.Antony Nickles

    Clearly this is folly, to claim that we can have "rationality and logic and truth value" without justification. You know, a person could go on and on, saying all sorts of things in all sorts of ways, without even knowing what oneself is saying, spouting off all sorts of inconsistencies and contradictions, but how is the meaning of what that person is saying going to be revealed without justification? Without justification how can anyone judge whether what the person is saying is rational, logical, or true? We could consider justification to be a type of explanation, or interpretation.

    This is why there is a very real difference between saying something and interpreting (explaining) what has been said. Philosophy seeks the interpretation, the explanation. Now, there is a very real problem in affirming that interpretation is carried out according to criteria, or grammar. This is because grammar and criteria consist of principles, rules, and these things themselves need to be interpreted. This is what Wittgenstein demonstrated very early in the PI, as a fundamental principle. We cannot assume an infinite regress of rules required for the interpretation of rules. Therefore we ought to conclude that interpretation, and explanation, the aspects of language use which philosophers are interested in, cannot be deferred to grammar or criteria.

    Now we can see that we are saying each "instance" is "unique" (and here is where Joshs is, I believe, hanging onto "context" as unique/different) instead of saying there is a "particular" grammar for each "action" (concept).Antony Nickles

    But it doesn't make sense to say that there is a particular grammar for each unique action. If we separate a descriptive grammar from a prescriptive grammar, we can see that the prescriptive grammar consists of general rules for application, so we can rule out the prescriptive grammar as insufficient for a particular grammar. And if we say that each particular action has a description unique to it, how could we describe unique, distinct, and different incidents as following "a grammar"?

    In other words, if every circumstance was "unique", we would not have our lives aligned in the ways they are.Antony Nickles

    But every circumstance is unique, time and space are that way, despite what you say about the way that we align our lives. Have you ever been in two circumstances exactly the same? Even deja vu is regarded as inconclusive.

    Maybe we could say, there is what a person says, and then the possibility this is a different concept based on the anticipated grammar and the context, so that there is what is actually "done" with the words in terms of the aptness of the expression and the anticipated implications, and the consequences which should follow.Antony Nickles

    Why do you feel the urge to think that there are always 'concepts' involved when people are speaking? Why not just start with the evidence, and basic facts, that people are doing something with words? If, when we proceed to analyze what they are doing with words, the need to assume concepts comes up, then we can deal with that. But until that point I see this assumption of "concepts" as misleading.

    I see your assumption of "concepts" as directly opposed to what you say that OLP is telling you: " What OLP is doing is looking at Grammar to: 1) show that philosophy's preoccupation with a picture where there is one explanation (for speech, say) is confused by our desire for certainty;". You have just replaced the 'picture which can give certainty' with 'concept'.

    This could have been worded better. I did not mean to say "Words/concepts are used (by people)". Just that OLP is looking at the uses (as in "senses") of a concept, describing the grammar of that use (as a concept may have different uses/senses--see "I know" above). Not that I control the meaning (how it is "used") of the expression, but only that expressions (concepts) have different ways in which they work (uses/senses)--a concept will have different grammar for each use, but we don't "use" that grammar, manipulate, control, intend, etc., or "use" a concept.Antony Nickles

    This is what I'll ask of you, as a proposition, so that we can proceed with the discussion in a manner acceptable to both of us. Can we start simply with the idea that in language and communication people are 'doing something with words'? We cannot assume "concepts", nor can we assume "grammar", or "criteria", or any such type of principles or rules as prerequisite for 'doing something with words'.

    We can start by inquiring as to what it is that people are doing with words, and perhaps make a few divisions as to the different types of things which people do with words, like Plato suggested in his analysis of "rhetoric". If the need comes up to consider concepts, or grammar, then we will consider the roles of these things as the need arises. But until then, I think that any preconception concerning the roles of these things is a hindrance to good philosophy.

    However, what OLP makes clear is that this is not the open hole that leads to the type of skepticism where we abstract from any context and install "certainty" in some other way. This would be to overlook or wipe out the grammar of the act, which includes the way it might fail, and how we rectify that, with qualifications, excuses, detail, etc. "Was that a threat...?" "No, I was trying to make an overture, and left off what I intended next." Now the Other is reassured, but are they now "certain"?Antony Nickles

    This is what I request, that we "wipe out the grammar of the act". That there is necessarily a grammar to an act is what I dispute as an unjustified, unnecessary, doubtful, and actually a very fishy claim in itself. It's fishy because it is unwarranted and therefore must conceal a hidden motivation, and this makes me uncertain about your intention. So let's start with the assumption that a human being is free to act as one pleases, and if the need to assume some sort of grammar appears to arise, we can discuss that need.

    Now if we are qualifying acts as "customary, habitual, familiar, ordinary", then we are assuming a sense of "certainty" in those types of acts, where with "other" acts we need certainty, in the sense of justification perhaps. Now we may just be thinking of aesthetics, morality, politics, etc., where some might say there are no justifications, or none that satisfy reason, or logic, or certainty. And even here, OLP will point to the grammar of the concepts in these areas as a sense of rationale, intelligibility, if not certainty, nor agreement. But there may be times when, even given the existence of our grammar, we are at a loss as to how to proceed. And then perhaps reflection on our grammar (philosophy) might help, or at least allow us to see the ground we are on in this case (the rationality of our options), so that we may go beyond our grammar, or against it, or extend it into a new world.Antony Nickles

    So, I further propose that this type of action, customary, habitual, familiar, and ordinary acts, are carried out with little, if any, reference to grammar in the performing of those acts. These acts are carried out with minimal thought, and the thought which is used is used to determine an efficacious method, for the particular circumstances (context). Therefore the thought is not directed toward, or by, grammar, it is directed by the intent to bring about the desired consequences in the particular context or circumstances. Grammar is not a principal feature of this type of act. If you disagree, then you could explain why, or how you disagree.

    And here is where we are caught by the same net. I admit (@Banno) that our language is the rope, as it were, but OLP's idea is not to "redesign language", use it in an "abnormal" way (I would say this is, backwards, putting certainty first and the words second), yet neither, as I have been saying, use it in a contrasting "normal" way, within the net as it were.Antony Nickles

    Here is the problem with this perspective, and it's a very simple and straightforward issue. Knowledge is a type of becoming. It is the type of thing which comes into being, progresses, and evolves. Knowledge advances. Language is the same type of thing, as it is closely related to knowledge as a facilitator of knowledge. Because knowledge progresses in this way, there is a corresponding need for language to progress and evolve, to keep pace with the evolution of knowledge. Therefore there is a need for philosophy to "redesign language", and to use language in ways initially perceived as "abnormal", or else we could not venture into the unknown with the intent to make it known.

    Just look up dictionary definitions of the two words and see if there is any consistent conceptual difference.Janus

    I have. We can say that in some cases an accident is the result, the consequence, of a mistake. But a mistake might also be the consequence of another mistake, or of some other unforeseen thing, making the mistake itself an accident. So in many instances the same thing could be correctly called either an accident or a mistake.
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    I don’t care. From a metaphysical point of view, that is, as opposed to mere anthropology or rational psychology, reason is presupposed as developed sufficiently to be the ground of learning, which has more to do with some arbitrarily sufficient measure of extant experience. In other words, in the philosophical examination of how knowledge is acquired, something must already be known.

    Besides, given that a young dog is the same kind of thing as an old dog, it is logically consistent that a young brain is the same kind of thing as an old brain. No matter how an old brain learns or knows things, it must be the case the young brain learns or knows things in the same way, or, at least can learn or know. Otherwise, it becomes possible, e.g., that a child is taught of a thing, yet learns of some other thing, which can never explain how that other thing came to be. Rather, it is always the case that a child simply does or does not learn the one thing, rather than learns some other thing instead, and it is here that, by whatever means any human learns anything, the explanation is given, because the knowledge system is common to all humans.
    Mww

    Sorry Mww, but I disagree with all of this, at a most fundamental level. First, we cannot philosophically examine the acquisition of knowledge with the presupposition that something must already be known for knowledge to be acquired, because this is contrary to the observed evidence of empirical science. What the evidence indicates is that knowledge is the type of thing which came into existence when there was no knowledge prior to its existence, just as there were no human beings prior to there being humans, and no life prior to there being life. So knowledge seems to be the type of thing which comes into existence from a state of no knowledge. And we ought not accept the premise that knowledge must come from prior knowledge, or else we get drawn into the problems Plato faced with the theory of recollection and the eternal Forms.

    Since we cannot characterize knowledge as relying on something already known, we cannot characterize it as the type of thing which continually builds upon an existing foundation. Furthermore, we cannot make generalizations as you do, that all learning is done in the same way, because there may be many different ways that knowledge comes into existence, as evidenced by the many different forms of life. And this is very evident from the aptitude tests given to high school students. Some people are good at some things and not so good at others, while other people are the opposite. So a child learning how to talk follows a completely different process from one learning arithmetic, which is different from learning deductive logic, etc. And there are many things which people come up with, which cannot be explained as to how they came up with them. Therefore a significant number of people stray off the beaten path of the straight and narrow, and learn some other things instead.

    Tell me you were mentally actively thinking....cognizing by means of concepts..... and not merely motivating your hand to follow the dots. And afterwards, henceforth forever, was it the hand motion you remember for the letter you want to write, or the rule that the shape identifies the name of the letter you want to write?Mww

    Are you kidding? I don't think I learned to think in concepts until advanced math in high school, and I had great difficulty; I did not catch on quickly, and I dropped out of math, so to this very day I might still not cognize by means of concepts very well, if at all. I take "cognizing by means of concepts" to mean that the symbols being used are meant to represent a particular unchanging idea. So even in basic arithmetic I thought that "7" represented a group of seven things, and I learned addition, subtraction, multiplication, etc. thinking this way.

    So when I learned to write letters, it was very clearly particular hand motions which I learned. The teacher performed these on the blackboard and made us copy. And I learned to memorize the alphabet along with the way that each letter looked, so that on the command of a spoken letter, I could recall the look of it, and perform the hand motion required to draw it. Then I'd look at the letter I'd drawn, and compare it with the teacher's to see how good I was getting at it, and what aspects needed more practice. After many repetitions it became habit. If I wanted a C I'd imagine its look briefly, do the required motion, and check to make sure it looked right. The only rules I remember concerned the size and positioning of the letters on the page and perhaps the positioning of my hand on the paper. There were no conceptual rules.

    Arithmetic brought on many more rules, but again they were procedural rules, not conceptual rules. And the role of memory was increased such that there were little rules or tricks of association used to facilitate memorizing times tables etc. I can't say that I think any of this involved cognizing by means of concepts. That perhaps comes about from reading. In reading I learned to imagine fictitious scenarios. This cultivates the notion that words represent something which you can create in your mind, something imaginary. I think that this is fundamental to cognizing by means of concepts, the idea that a word signifies something imaginary. That there may be very specific rules for how those imaginary things must be created (in mathematics for example) was not received well by me because I didn't have an aptitude for interpreting those rules. I learned very basic geometry, but that's about as far as I got in cognizing through the means of concepts. Things were too abstract, so I could not imagine very well what I was supposed to be doing.
  • The world of Causes

    Obviously I don't agree with you on that. This is because I think that time was passing before I came into the world, and I think that time will still be passing after I am gone from this world. And for time to be passing, it is necessary that there is future and past. Before I was here, my mind was not here, and after I am gone, my mind will not be here. Yet there will still be time passing, along with the prerequisite future and past. Therefore I do not believe that past and future exist only in my mind.
  • The self
    The argument here places the need for training in a matrix of concerns that are contingent, all such concerns ultimately beg the value question. It runs not unlike those irritating deconstruction questions run: Training? Why train? to be great at football? Why this? and on, and on. The non question begging answer appears only when contingencies are abandoned and inquiry finds its mark: I do it because it is fun, enjoyable, pleasurable, blissful. ALL are bound to contingencies in the living experience, but here, I am doing with value what Kant did with reason: reason is always, already entangled in the very language used to talk about "pure" reason. But one abstracts from the complexity to identify the form just to give analysis. Here, I identify the very mysterious metavalue in the pain, and it is not the form of ethical affairs, but the actuality, the substantive presence.Constance

    As I said, Plato demonstrated long ago that we do not base value in pain or pleasure. I gave an example as to why a person's attitude toward pain does not provide a good representation of one's attitude toward value, and therefore pain cannot be used as a metavalue. There are many more examples, but it seems like you are in a condition of denial, so I don't see the point in producing a list of examples.

    The emphasis is on the way the value dimension of an ethical case is unassailable to competition and objections: no matter what alternative one can imagine to bring against the choice of choosing the one child's welfare, the "badness" of the torture is undiminished.Constance

    Yes, your state of denying the example, and also the reality about value, demonstrates this unassailability very well. However, the fact that one's personal perspective on value appears to be unassailable does not demonstrate that it is absolute. It just indicates that it appears to the person who holds the unassailable perspective on value, that value is absolute.

    I am identifying something that is not relative, but "absolute", acknowledging that this term is rather self contradictory because language itself does not possess the possibility of absolutes, all propositions being contingently bound to others. The claim rests on the premise that there is something transcendental about ethics that lies at its essence that is nondiscursive and intuitive. One is being invited to simply observe the pain simpliciter, observe--- not weigh, compare, contextualize.Constance

    The problem though, as I explained, is that a person will subject oneself to pain, for the sake of something valued in some circumstances, yet at other times the same person will avoid pain because in this circumstance avoidance is seen as more valuable. Therefore pain does not suffice as evidence for any sort of absolute value.

    Yours is pedantic foolishness. When you go inside, are you absolutely inside or only relatively inside? When you pay your bus fare, do you discuss whether your coins are of relative or absolute value? and the answer is that these are foolish questions.tim wood

    Yes, why are you asking such foolish questions? This is you with the foolishness, not me.

    If you wish to argue the relativist position, that everything is relative, nothing absolute, be my guest, but I won't attend, for the arguments quickly become absurd, ridiculous, and a waste of time.tim wood

    No, I'm not arguing that everything is relative, I'm arguing that value is, because that's the nature of what value is. You just seem to be incapable of accepting the fact that you were wrong to deny this obvious fact about value, so now you want to claim that I was arguing everything is relative.

    We have already affirmed that the absolute as a practical matter is always already established within some framework.tim wood

    I sure have not affirmed this. As I said, that is contradictory nonsense. The absolute is always outside the framework, as the ideal which the framework is based in. Even the idea of "the absolute as a practical matter" is nonsensical, because any absolute is an ideal, a theoretical principle which is not obtained in practice. Go ahead and keep insisting on your foolish nonsense if you like, insist that when you go into your house you have obtained the absolute inside, or that the fact you can pay a bus fare with coins means that the coins have an absolute value, but I'll have none of it. I'll let you live in your absolute fantasy land.

    And don't forget to hit the relativity of relativity paradox above that you ignored - that at least and for sure you will want to smash.tim wood

    I have no idea how to interpret your so-called "relativity of relativity paradox". I see no paradox, of course the relativity of relativity is relative. How could it not be without contradiction? Where's the paradox? You might want to explain what you were trying to say, but it appears to be just more foolishness like the rest of the things you've been saying.
  • The world of Causes
    The only thing we will ever experience is the present moment so how do you rationalize that?Thinking

    It's very easy to rationalize that. I cannot change what has happened in the past, and what has happened in the past has a very real effect on what I am experiencing now. And in a similar but different way, I cannot deny that things will happen in the future, but I can have a very real effect on what will happen. So anticipation of the future is just as much a real part of my experience of the present, as is memory of the past. I remember the past as real, and anticipate the future as real.

    Since this is the way that I experience the present, as the past being very real, and also the future being very real, then you would have to provide me with a very good argument that they are not real in order to make me believe otherwise.
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    I thank you for your patience and persistence.Antony Nickles

    I'm naturally persistent, so there's no need to thank me for that. However, it is you who is being patient, to put up with my persistence. Patience is a virtue.

    Well I think you are still stuck on something about these words; maybe thinking there is "no such thing as the ordinary way", as if the ordinary way were opposed to the philosophical way (which can make those distinctions). I can't sort it though.Antony Nickles

    "Ordinary" in this instance implies normal, does it not? As if there is a customary, familiar, or habitual, normal, or "ordinary" way of making this decision as to whether it was an accident or a mistake..

    Each"? "Particular"? "Must be judged"? And here we are imagining that each case is the worst case (skepticism without a net). Do we "apply" the criteria? Well, we didn't know them before and now we do, but do we always need them? Imagine that there is all this life we have that makes it so we don't usually need them ("Don't make a mistake!" "Don't worry; it's only an accident."); does this help dispell the sense of doom with every "circumstance"?Antony Nickles

    I can't grasp what you're saying here. I don't see where the reference to "worst case" comes from, or "sense of doom with every 'circumstance'". We are talking about judging an action which has already occurred, as to whether it was an accident or mistake. The action has already occurred so there is no sense of impending doom if the wrong decision is made.

    What I said, is that in each particular instance of such an action occurring, if such a decision is to be made, the action must be judged in a way which is specific to that particular instance. That is because each particular instance is unique, and there is a very fine line of difference between the two possible judgements. There is no customary, familiar, or habitual way of deciding this, therefore no "ordinary way" of making such a judgement.

    My point was that if there were an ordinary way, the person would not have to appeal to any criteria in making that decision. There would be a customary, familiar, habitual way of saying that the action was accidental or that it was a mistake; the person would simply say that, and therefore no criteria would be involved in deciding which word to use. However, since there is no customary, familiar, habitual, or ordinary way of deciding which of these two words to use, one would have to appeal to criteria to make that decision. The point being that the two, "ordinary way" and "using criteria", are incompatible with each other. This makes "ordinary criteria" an oxymoron.

    I think here it is important again to say that Witt is focusing on a special idea of criteria, as I mentioned to Mww above. One difference is it is not the kind of criteria that we set, say, for identification of a show dog, or when someone has broken the law. Those criteria are in the wide open; one, standards of a perfect specimen; the other, the law. And yes, the law is not a science and takes judgment to clearly align the facts of this case with the law, or, when necessary, rest on a precedent circumscribing a tricky set of facts or the interpretation of the law in a new context, but, even here, not every case is distinct in the eyes of the law either.Antony Nickles

    "Criteria" is a very straight forward, unambiguous word, with a very direct meaning, a standard or principle for judgement. It makes no sense to say that Witt is not using "criteria" in the ordinary way, but in a special, private way. This is why we need to be aware of Wittgenstein's masterful hypocrisy. It's as if there's a soothing, calming voice, repeating over and over again, 'Deception is impossible, I am not deceiving you because deception is impossible...'. What is that person doing with that voice other than deceiving you? This is why there is a very clear need to distinguish, in principle, between what a person is saying, and what a person is doing with the words. If I judge what a person is 'saying' to me, according to my customary, familiar, habitual, ordinary way, but the person is actually 'doing' something different from what appears through my ordinary interpretation, then I will be deceived. Therefore, I need to apply criteria in my interpretation, to go beyond the ordinary interpretation which the deceiver intends for me to use to support the deception, in my effort to determine what the person is really doing with the words.

    And I believe I came to this same spot with Mww above. OLP does not have a solution (to skepticism). Ordinary criteria are not acceptable for certainty, universality, predetermination, etc. I can perhaps some time in the future (or in my other posts) show its usefulness in morality, aesthetics, politics, etc. where traditional philosophy has failed to satisfy. Or I stand ready to try again to show /explain and hope to do better.Antony Nickles

    But you said this: "Yes, but you're probably not going to be happy about it because it takes the concepts that philosophy wrings its hands over and reveals their mystery and seeming power as driven by our disappointment with misunderstandings and our desire to take ourselves out of the solution." If you can judge that philosophers are acting on a desire to take themselves out of the solution, then you must have some criteria by which you can say that what they are observed as trying to take themselves out of, is the solution. If you allow that OLP cannot dispel skepticism concerning "the solution", then you have no principle whereby you can argue that OLP is better than any other philosophy, so it's revealed for what it really is, and that is just another form of poorly supported metaphysical speculation.

    The standard for OLP of a claim to our ordinary criteria is if you see it and agree; if you see what I see--that you can show yourself.Antony Nickles

    But this has no logical rigour. Agreement does not require criteria. You propose something to me, I can agree or disagree, but neither requires criteria. So if I agree with you, this does not justify your claim to "ordinary criteria". The problem being, as I described above, that we only apply criteria in cases which go outside the ordinary, customary, familiar, or habitual. So as long as your proposal appears to me to be ordinary, I will agree without question, or applying any sort of criteria. And your claim that "ordinary criteria" is justified by me agreeing, is unsupported.

    Well, let's pull out "refer" just in case anyone gets confused that this is word=idea. For example, intending is a concept; to say it is (only) a word is to make it seem isolated, connected only to a "meaning', which is a picture Witt is trying to unravel. Meaning being more like, say, what is meaningful to us about a concept, along the ordinary criteria for it. And let's put it that: we are seeing how a concept is used. Witt says "sense" for the fact that a concept (knowing) can be used in various ways; here, again, I know as: I can give you evidence; I know as: I can show you how; I know as: I acknowledge you, your claim. He is imploring us to Look at the Use! (#340) to see that our concepts are various and meaningful in different ways.Antony Nickles

    Here you go, with the hypocrisy. You say let's get rid of the notion "word=idea", it's a faulty "picture". Then you say "let's put it that: we are seeing how a concept is used." But what we are seeing is words being used. We cannot proceed to "seeing how a concept is used" without that faulty picture, 'word=concept'. And when you allow for this separation between word and concept, and see that people are doing things with words, and that meaning is a feature of what the person is doing with the words, not a property of the words themselves, you'll understand that using the same word in different contexts gives it a different meaning. So if your desire is to associate "a concept" with the word, you must allow that it is a different concept in each different context, according to the difference in meaning, otherwise you are susceptible to deception by equivocation.

    Well, I would say the vast majority of situations are mundane and uneventful and non-specific, such that our criteria (of this type) never really come up (Thoreau says we lead quiet lives of desperation).Antony Nickles

    This is the problem. If, in the vast majority of situations, criteria never come up, why assume that criteria are employed in those cases? A "criterion" is very specifically a principle or standard which is followed. This means that it is something present to the conscious mind, as a principle for logical reasoning. This is the same issue I had with Mww, who proposed logic proceeding from premises which are not present to the conscious mind. How can logic proceed in that way? Clearly it's not logic if it employs unstated premises. Likewise, a person cannot be said to be employing criteria when proceeding without any criteria present to that person.

    If we move to say that this is "criteria" in another sense of the word, then we need to define that sense, otherwise people will just think, as I did, that it's "criteria" in the ordinary sense. What I've been arguing is that these customary, familiar, habitual, or ordinary acts are not even remotely similar to acts which employ criteria. In fact, they are more like polar opposites. The customary, habitual, familiar, ordinary acts proceed from an attitude of certainty, while we only apply criteria when we are uncertain. So if we wish to obtain a true understanding of these types of acts, we need to maintain that separation between acts carried out with an attitude of certainty and acts carried out with uncertainty. We ought not use "criteria" when referring to the motivating factor in customary, habitual, familiar, ordinary acts, which are carried out with an attitude of certainty; we only apply criteria when we are uncertain.

    I'm not sure this was a great example (surprise). But you could say that identifying the smirk was an interpretation (of their facial movement), but what I was trying to point out is that everyone could agree that to correctly apologize, you can not scoff at the whole procedure--that it is not open to interpretation, that it is a categorical necessity.Antony Nickles

    This opens another can of worms. What constitutes certainty? We say "everyone could agree that...". And this means that everyone would interpret the situation in the same way. This would constitute a customary, habitual, familiar, or ordinary way of interpretation. We can see that "certainty" is tied together with the ordinary way. Which one causes the other is not evident, but they are reciprocating and mutually supportive. The problem, though, is that the term "everyone" is extremely inclusive, in an absolute sense, and therefore too inclusive. All it takes is one person who is abnormal, and doesn't share that ordinary way, to be skeptical, uncertain. This person might start applying criteria, and develop the belief that the judgement which everyone else is certain of, as they proceed in the ordinary way, with certainty and without criteria, is actually wrong. The criteria might even show very clearly that the abnormal person is correct, and that everyone else proceeding in the ordinary way, with certainty but without criteria, is incorrect. This is why we can never exclude skepticism.

    So the idea is that the fear of doubt and the black hole of skepticism/relativism cause the philosopher to skip over our regular criteria to fix meaning and word together, to have certain knowledge, normative rules, universal criteria, predetermined, etc.Antony Nickles

    Let's consider this statement. When we "fix meaning and word together" we are inclined to assume "a concept" which the word signifies. This is because we want the word to represent something directly, just as a proper noun represents an object directly, or as when we use a noun to refer to a particular object directly. This facilitates our capacity to talk about meaning, when there are things, concepts, which we can talk about. Now we have these objects of meaning (Platonic Forms perhaps), concepts, which we can talk about just as we can talk about physical objects. It's a matter of what's customary, familiar, and habitual in our ordinary use of language. We ordinarily use language to talk about the physical world, objects and such, so language is designed and has evolved to work in that way. So when we go to talk about meaning, we are inclined to use language in the customary way; thus we invent this notion of "concepts", and that facilitates the discussion of meaning, because natural language is purposed to talk about things, not meaning.

    But what Wittgenstein demonstrated is that this is really a mistaken way to proceed in talking about meaning. It actually encourages skepticism, because we talk about objects of meaning, "concepts", which just are not there. So now we need to reassess this desire to talk about meaning. Either we must simply accept as a fact that language was not designed to talk about meaning, and we cannot go there with language, it being a realm of what cannot be spoken about, or we need to redesign language such that it can be used to speak properly about meaning. I think that the latter is the appropriate way forward, and the way in which philosophers generally proceed, giving the impression that philosophy uses language in an abnormal way. Well yes, but that's because we cannot do philosophy using language in the ordinary way, because ordinary language was not purposed for doing philosophy. OLP ought to simply acknowledge this difference.
  • The self

    What I was trying to say, is that value, by its very definition, is something which is relative. It is something assigned relative to a scale or some sort of hierarchy. The value therefore is always relative to the scale, and not absolute. To try and make value into something absolute would render it something other than value. A value without a scale?

    So I can't even comprehend what you might mean when you suggest that life has absolute value. And when you say that your money is legal tender for those specific things, this means that it is legal tender for those specific things; therefore its value is not absolute by any stretch of the imagination.
  • Ordinary Language Philosophy - Now: More Examples! Better Explanations! Worse Misconceptions!
    It is our ordinary ways of telling an accident from a mistake--the criteria of their identity and employment (grammar), and all I can say at this point is it is a term to hold a space opposite of how philosophy sets up the traditional criteria (certainty, universality, etc.) it wants for the concepts of meaning, knowledge, understanding, etc.Antony Nickles

    I'll repeat, then, what I've said from the beginning: there is no such thing as the ordinary way of distinguishing an accident from a mistake. Each particular incident, in each set of circumstances, must be judged according to the available evidence, and there is no such thing as "ordinary criteria" to be applied in a particular situation. That's why a judge in a court of law has a difficult task, and one only gets to the point of being a judge through an extended period of experience. The experience does not teach the judge criteria; it gives the judge many examples to compare with. These are known as precedents. We say that the judge upholds the law in many unique circumstances, but this is not really done through reference to criteria; it's done through the experience of many precedents.

    Yes, but you're probably not going to be happy about it because it takes the concepts that philosophy wrings its hands over and reveals their mystery and seeming power as driven by our disappointment with misunderstandings and our desire to take ourselves out of the solution. OLP is investigating our concepts to show that desire in our philosophy by showing that our concepts have ordinary (various, individual) ways in which they work and ways in which they fail, and, at some point, they involve our involvement, accepting, denying, asking, walking away, etc. and in ways that reflect on us, or require us to change ourselves, our world, or extend these concepts into new contexts, a new culture, perhaps to make a word include a change in our lives, perhaps to re-awaken it to old contexts.Antony Nickles

    If this notion of "ordinary criteria" is your proposed solution, then it's quite clear to me that you do not have a solution at all. And if philosophy appears to be trying to take itself out of "the solution", you might take this as a hint, that the supposed solution is not acceptable to philosophers.

    I was speaking of epistemology as the investigation of knowledge. OLP gives us a knowledge of our concepts that we did not have, of their ordinary criteria. Now justification is a trickier subject as we can say our criteria align with the ways in which our lives are, but that is not to say our forms of life are the bedrock of our criteria or that we "agree" on our criteria. And also not to say that radical skepticism is the outcome either. The truth of skepticism is that knowledge only takes us so far and then we are left with ourselves, you and me to work out the failings and clarifications that our criteria/lives lack the necessity, conclusiveness, completeness, etc. to ensure. Our concepts are breakable, indefensible but also open-ended (justice) and extendable into new contexts (freedom of speech).Antony Nickles

    So it appears to me that OLP is a lot of idle talk with no justification for what is said. See, you are claiming that OLP gives us knowledge of our concepts which we didn't have, through reference to their "ordinary criteria". But you cannot even justify the assumption that there is such a thing as ordinary criteria. If you cannot even point to any instances of ordinary criteria, how are we ever going to get a better understanding of our concepts through examining their ordinary criteria?

    Maybe it is better to say concepts have different criteria for the different ways (and different contexts in which) they are used (the sense in which they are used). So they have more possibilities than under the fixed standards (one picture) that philosophy wants. So in a sense they ARE different "games we play" with a concept, but a concept is not just about "words" or even expressions, because concepts are not "conceptual" or "ideas" as opposed to the world as philosophy's picture of certainty creates.Antony Nickles

    All you are doing here is attempting to validate equivocation. If you allow that the same concept has different criteria according to different contexts, you are saying that the word refers to the same concept despite having a different meaning. Using the word with different meanings, and insisting that the different meanings constitute the same concept is equivocation.

    Criteria are not like rules, they are not always fixed, or unbreachable, or determinative.Antony Nickles

    A criterion is a principle or standard used for judgement. There is no ambiguity there. Either a person is following the criteria or not. It makes no sense to say that the person is at the same time adhering to the standard, yet also not adhering to the standard. The thing which you don't seem to be acknowledging is that in the vast majority of "ordinary" situations, the circumstances are unique and peculiar, such that a judgement cannot be made on the basis of criteria. There might be some criteria which would serve as some sort of guideline, but the real judgement is made by some process other than referencing the criteria.

    Take the judge in the court of law, in my example above. Let's say that the law is the principle or standard which the judge uses, so the written law is the criterion in this example. There must be a judgement as to whether the person is within or outside the criteria (the law). The work of the judge is interpretation: interpret the person's actions, and interpret the law. Interpretation cannot be done solely through reference to criteria, because the criteria themselves have to be interpreted. So we find the true nature of judgement in interpretation, not in criteria, and interpretation cannot be dependent on criteria.

    One thought on application is that, even unconsiously, we know the criteria of an action to ask "You know you smirked when you apologized." not because we explicitly are thinking of the criteria, but that we were raised in a world with others, and pain, and a need for forgiveness, etc.Antony Nickles

    Reflect on this action, your example here: "You know you smirked when you apologized." I think you'll agree with me that what is referred to is a matter of interpretation. What is at issue though, is what does interpretation consist of? If it is a matter of "we were raised in a world with others, and pain, and a need for forgiveness, etc.", then it is a matter of an emotional reaction rather than a matter of criteria. So this is why I am arguing that we must make sure that we get our principles straight here. "Criteria" implies that we are employing principles of reason, but if these basic kernels of meaning are really emotional reactions, then the use of "criteria" is really misleading to us.

    Well two small tweaks. I take epistemology not as the search for grounds for knowledge, but as the search for knowledge, and that looking at what we say to see our criteria, as in to make them explicit--known from the unknown--is a way of knowing ourselves since our lives (what is important to us, what should count as a thing, judging, making distinctions) are our criteria. And that sometimes, we are responsible for our claims to aversion, to our extension of a concept asserting a new context, (politically, culturally) creating a new context.Antony Nickles

    I'm sure that wherever we have criteria, they are important to us; that would be the reason for having them. But if we assume that there are criteria where there are none, then if something goes wrong, don't we confuse an accident with a mistake? We'd accuse a person of not adhering to the principle, of not correctly following the criteria, when the person was not following any criteria in the first place.

    I'll leave "applying criteria" alone for now (still not sure what to do with it), only to say that criteria could be described as "unexamined" (not unconscious exactly) which means we are maybe missing the fact that criteria are just all the ordinary ways we might judge someone as doing or saying this well, how we show in this case how it matters to us, what counts as an instance of it, etc. These things are not mental constructs, or created standards (though there are those too), these are our lives of doing these things like apologizing, thinking, knowing, threatening, identifying a dog, etc.Antony Nickles

    Do you see how it may be the case that "criteria" is not the right word here? But if we go to replace it, then what would we replace it with? We are entering into the realm where words will most often fail us. But this does not mean that we ought to use words like "criteria" which might give the wrong impression. Nor do I think it means that we ought not try to describe what we find here. It just means that we must choose our words very carefully. And I think, this is why it often appears like philosophers do not use ordinary language, it's because they choose their words carefully.

    Types have identities, just as tokens do. So the type <dog> has an identity as a kind, just as an individual dog has an identity as an individual.Janus

    You may say that a type has an identity, but a rule is not a type, even if it defines a type.

Metaphysician Undercover
