Comments

  • Existential Ontological Critique of Law

    Even if rights could be absolute (an assumption which in itself requires justification), how would that be relevant to the human capacity to realize intended future states?
  • The Indictment
    Do you really think Trump walked into the white house and took documents?NOS4A2

    Isn't that exactly what he did? If not, how would you describe what he did?
  • Existential Ontological Critique of Law


    Well, if you cannot grasp the fact that the human capacity to realize an intended future state is not "absolute", then go ahead and keep your daydreaming mind occupied by your fantasy world. But don't try to do anything strenuous, please; that might shatter your illusion.
  • The beginning and ending of self
    My apologies Meta. For all I know, you understand Un's message better than I.creativesoul

    Apology accepted, but I'm still trying to understand, maybe not even as well as you do. Unenlightened seems to talk about a losing of the self, I prefer to think of the same thing as a finding of the true self. So the self which gets ditched in what they call enlightenment was not the true self in the first place, and this allows the true self to emerge. And while Unenlightened and I seem to agree pretty much, we still manage to use words in opposing ways. What Un calls "completion", I think of as a beginning.
  • Existential Ontological Critique of Law
    According to Sartre human freedom is an absolute capacity to intend a particular future, although, circumstances can and will obviate the realization of the intended state.quintillus

    Do you accept that the human capacity to realize an intended future state is not "absolute", as you claim Sartre affirms? If so, then you will see that "obviate" is not an appropriate word to use here in the description of how circumstances relate to the realization of the intended state.

    In reality, the circumstances indicate to us the real restrictions which exist in relation to our "capacity to intend a particular future", making this capacity far less than absolute. In general, we recognize the circumstances as limiting the possibilities. Possibility, often known as "potential", is how we understand the future. So the conventional understanding is that the future is not "nothing", it is what may or may not be (as potential), and this violates the law of excluded middle. So the future has some sort of real existence, but it is a type of existence which violates this fundamental "law" of logic.

    Human consciousness nihilates, i.e., makes a nothing which is a particular intended future state of affairs, which is unrealized; absent; not-yet; hence non-being/nothing.quintillus

    For the reason explained above, this is not an accurate representation. The "particular intended state of affairs" is understood by the human consciousness as a possible, or potential state. Potential is not nothing, it is categorically distinct from being and non-being (existent and non-existent) as that which cannot be described by those terms. Further, the human consciousness apprehends circumstances, (describable in terms of what is and is not), as having a bearing on, or being somehow related to future possibilities. This relationship is understood in terms of "necessity", what is "necessary". That is why, despite what Sartre says, human freedom is not commonly understood as "an absolute capacity", and this is a misrepresentation of how human consciousness actually apprehends its own freedom.
  • Existential Ontological Critique of Law

    I think what I was trying to show is that Sartre and the "current worldwide existential ontological thought" might be a little off the mark in how the future is represented. If the future were really nothing, then our freedom would be absolute. But our freedom is limited, as I explained above. The future exerts a real force on us, which would kill us if we did not act to prevent it from doing that. Therefore the future is not nothing, and Sartre's existentialism seems to have a mistaken premise.

    However, I would appreciate it if you could provide for me a coherent interpretation of Sartre's "double nihilism" so that I might better understand the concept.
  • The beginning and ending of self
    in fact I think that this identification with the past is the necessary first step to a projection into the future.unenlightened

    Yes, I think that's the point, without that first step, which is to relate the past to the future, all those past stories are pointless. They are only meaningful with respect to the future.

    There can only be any idea at all of the future as a projection from the past.unenlightened

    This is where we disagree. I would take the opposite position, claiming that we get an idea of the future prior to getting any memory of the past. Wants and desires are the product of a being looking forward in time, toward the future. These are a manifestation of an intrinsic respect for the future. And, it is only when it becomes evident that past experiences may assist in getting what is wanted, that memory is produced, and goes to work. So a child for example is born with wants, but no memory. This is the nature of being in time, when a being comes into existence it has a future but no past. So in reality we are born with an inherent view toward the future, existing as wants and desires, and then the story or narrative of the past is derived from this view toward the future, as a tool to help us deal with the future which we already have respect for. That is why intention has such a big influence over the shaping of a narrative.

    It seems to me that you're completely missing the point.

    It's the stories we tell ourselves about ourselves and the world around us that constitute the self.
    creativesoul

    I think you are missing the essence of the self. The self is nestled within intention, which is a view toward the future, getting what is wanted. This means that the defining feature of one's self is one's decision making capacity, the way that a person selects. Story telling is nothing but an amusement, though a person might use it to help get what is wanted sometimes.
  • Existential Ontological Critique of Law

    It is not an "absolutism" concerning the future, but proof that the future is real and therefore not as you claim, "nothing".
  • Existential Ontological Critique of Law
    You are positing an absolutism of the future, while, all the while, consciousness is always free and unbound, to imagine its next future...You have become so totally deterministic in your world view, via living in a totally jurisprudentially deterministic world, that you absolutely insist there must always be something Other out there which, cinesiologically, is in motion forcefully moving you...quintillus

    This is not me at all, and it demonstrates that you did not understand what I said.
  • Atheist Dogma.
    I was thinking that if there is some truth in the madcap interpretation, it isn't madcapLudwig V

    I don't think we should use "truth" here. I tried to distinguish subjective and objective features, but since the subjective was described (by me) as primary, I don't think there is a good place for that word.

    No. Tools do have a general or standard use.Ludwig V

    This is a very problematic position to take. Any claim of such a "general or standard use" will miss out on a whole bunch of non-standard usage which is just as real as that contained by the general description. Making such a claim is just a generalization intended to facilitate some argument. "The standard use of a hammer is to pound nails". That statement, although one might agree that it is "the standard" use, does not validate any rigorous, general sense of "the use of a hammer".

    In other words, we have invalid inductive logic at play here. Generalizations are produced through inductive logic, and exceptions are evidence that the induction is invalid. So every example of an exception to the rule of "general or standard use" is proof that the generalization is composed of invalid logic. In the case of word usage, the proof is overwhelming. Therefore the problematic position you propose is not at all philosophically useful because the invalidity of the inductive reasoning is very strong.

    That exactly my bother about the "intent" criterion and why I can't accept the definition of a speech act in terms of intention. Plus there's the objection that "meanings just ain't in the head" - who was it who coined that?.Ludwig V

    That the intent is sometimes simply not there is no reason why we ought to look somewhere else to find "the true meaning". The lack of intent only reinforces the claim that the meaning is subjective. That we ought to look somewhere else for the true meaning is completely unwarranted. Such a procedure, to seek objective meaning when the meaning is subjective, can only produce false or fictitious meaning.

    You ought not think of meaning as in the head. It's far easier to understand meaning as being in the writing itself, but put there by the author. So for example, when the writing is judged as unclear, vague, ambiguous, incoherent, or inconsistent, this is a judgement against the artistic capabilities of the author. However, some of these features may be placed intentionally into the work, by the author, and if the interpreter does not apprehend this it is actually the capabilities of the interpreter which are at fault. The writing itself is the object, and meaning is in the object, as a representation of the author's objectives. That the meaning is subjective implies that it is "of the subject" as in from the subject, not in the subject.
  • The beginning and ending of self

    Aren't you just distinguishing two different types of narrative here? One is intended toward telling the truth, the other, the counterfactual, is fictional. Whether you think that you ought to have done the counterfactual is irrelevant, because you have no choice at this time, it is in the past. The true application of "ought" is when we look toward the future, where we have real choice as to what ought to be done.

    So I think that this state of conflict you describe is artificial, contrived, because there is no need to consider alternatives for the past narrative, like you suggest, because we have no choice at this time. However, my statement needs to be qualified, because when the past situation is relatable to a future situation, then there is benefit to considering these options, so that you do not make the same mistake twice. But that opens up a whole different problem for personal "identity".

    If we look at the past, as a narrative, a true narrative, of what cannot be changed, and we look toward the future as possibilities where we need to apply "ought", then how does one relate the two to each other? If we assume a combination of past narrative and future possibilities, as constitutive of the person's identity at the present, you can see that there is a huge hole here, as this is completely insufficient to make up what we assume as personal identity. In fact, the essence of the identity is missing here because what we see as a person's identity is "the way" that a person relates the past to the future. Each of us has a particular way of relating the past to the future, applying the "ought", and I think that this "way" proves to be just as unique to the individual as one's physical appearance.

    This is a sort of philosophical dilemma. "Ought" is one of the most general principles, there ought to be an "ought" for every possible situation that a person finds oneself in, a correct course of action for that person. And moral philosophers often want to say that no matter who the person is, and no matter what the situation is, the applicable "ought" (the specific correct action) ought to be the same for everyone. But this can easily be seen to be a completely wrong-headed way of looking at things. To create this counterfactual scenario which places a different person in the exact same situation as another, would be to deny that the two people are actually different, thereby negating personal identity, and creating a fictitious inapplicable scenario.

    So in reality, each person is unique, each situation that a person is in is unique, and each correct action, or "ought" which is applicable at that time, is also unique. This fact of the true narrative is what turns moral philosophy on its head. We ought not look at "ought" as a general principle, but we must look at it as unique to each and every different person, who all have a unique "way" of relating the past to the future. This is the only way to apply "ought" to the true narrative. That each person's spatial-temporal location is unique is proof, through application of the special theory of relativity, that one's relation of past to future (one's present) is also unique. Therefore one's "ought" is unique and particular to that individual. The idea that there is a general ought is a false ideal created from the fictional narrative which looks at distinct individuals as the same.
  • Existential Ontological Critique of Law
    The absent future does not, cannot, force free consciousness to do anything, for it is free consciousness which prefigures, imagines, makes the not-yet that is its future existence. Time originates via this nihilative capacity to conceive the absent future, whereby, the present is transcended and made past...
    Nothing, nothingness, as consciousness, is real.
    quintillus

    But imagine if a person does nothing: the person would die of starvation or something like that. This dying, which would occur, would be the person being forced into the past, by the future, as the future becomes the past in the passing of time. So the person must do things of necessity, or else be forced into the past by the future. Therefore the force of the future makes it necessary for the person to do things. Our inclination to do things is a reflection of the reality of the future.

    Contrary to what you say, the future is not absent at all. It is here with us, all the time, continuously making it necessary (forcing us) to act. If there was no future imposing itself on us, we would have no anticipation, no anxiety, no desire to eat, or desire for anything whatsoever for that matter, therefore no inclination to act.
  • The beginning and ending of self
    But one always stands outside the story as narrator to tell the story. One is absent from the story one tells, because the story is related, and even the closest relation is not oneself, in the same way that god is outside his creation.unenlightened

    This is where you lost me. I don't understand why the narrator must be outside the story. Isn't there such a thing as a first person narrative, in which the narrator is part of the story?

    It appears maybe you are distinguishing between a person's real life experience, and the story one tells of it, the narrative being a story and the real life which the person is in, being something other than a story. Is that what you are saying here?
  • Existential Ontological Critique of Law
    The actual, authentic, true mode of origination of human action is consciousness; which proceeds via the double nihilation, wherein consciousness, on the one hand, makes the nothing that is an imagined future state which it wants to be; and, on the other hand, makes the present state nothing by transcending that state toward the not yet existing future which it wants.quintillus

    Surely the future cannot be nothing in any absolute sense, because the future is what forces the human being to act. If a human being did not act, it would be crushed by the force of passing time, (the future becoming the past). Accordingly that human being would be forced into the past, by the future, annihilated. So the future must be something very real, therefore not nothing.
  • Atheist Dogma.
    Perhaps we should lump all madcap interpretations into the same trash-heap.Ludwig V

    The point though is that I do not want to throw all madcap interpretations in the same trash-heap. As I said, the madman still expresses glimpses of insightful intelligence. And different madmen express different forms of insight. So their interpretations cannot all be classed together.

    I don't quite understand your last sentence. If it means that all interpretations must be mutually reconcilable, that undermines the point of different interpretations - unless the reconciliation is simply the original text, which all interpretations have in common.Ludwig V

    That's right, they are all reconcilable through the original text, as "the object". But this implies that I affirm that there is nothing absolutely random which is added by the subject. If the subject added something which was absolutely random, it would be unintelligible through reference to the text, as completely unrelatable to it. So as much as we have free will and freedom to interpret however one pleases, I deny the possibility of an absolutely random act of interpretation. You can see how this makes sense, because such an act could not be related to "the object" and therefore could not be an interpretation.

    My dream that I can jump/fly over tall buildings makes sense, but isn't plausible.Ludwig V

    Sorry, but without some more information, such as the apparatus you would use to propel yourself, this idea of you flying over tall buildings makes no sense to me at all. How does it make sense to you?

    Well, as usual, you have a coherent position. Revealing the incoherence of a text on its own terms is a perfectly coherent project. But would you say that Locke anticipated modern physics, or that Berkeley anticipated modern relativity theory?Ludwig V

    No, I would not say that at all, I do not use "anticipate" like that. But some people seem to use the word in a way which implies that this would make sense to them. I do not understand such a use of "anticipate". One can "anticipate" a defined future event, in the sense of prediction, but this requires that the event be defined. Also, one can have "anticipation" in a most general sense, without any definition of the future event which is causing the anticipation. This is better known as a general anxiety, and it can be very debilitating in some situations, because it is an anxiety which cannot be dealt with, having a source beyond the usual "deadline" sort of stress.

    But to mix these two senses of "anticipate" into some equivocated mess is just a category mistake. That is to name some particular event which was in the future at the time, "modern physics", or "modern relativity theory", and say that the person anticipated the particular, in the general sense of "anticipate". That, to me, is an equivocated mess of a category mistake. It is incoherent and makes no sense, even though some people like to say things like this.

    But can we always divine the intent of the author?Ludwig V

    No, we can never "divine the intent of the author". That's why all interpretations are fundamentally subjective rather than fundamentally objective. We strive toward the objective interpretation, if truth is our goal, but we cannot deny the reality of the context of the interpreter, which is primary to the interpretation. The context of the author is primary to the object (written material), but the context of the subject is primary to the interpreter. Primary context is reducible, and simplified by representing it as intent. So the context which is primary to the author is the author's intent, and the context which is primary to the interpreter is the interpreter's intent. Since the interpreter's intent is primary in the act of interpretation, it is impossible for the interpreter to actually put oneself in the author's shoes, and "divine the intent of the author". This can never be done.

    But I accept that the intent of the author, so far as we can divine it, is always important in interpreting a text. The same applies to the context in which they are written. But if that's the only correct way to read them, I'm left puzzled by the fact that some texts remain relevant long after times have changed, and we continue to read and discuss them. Your approach seems to consign all historical texts to a museum.Ludwig V

    I'll say that the author's intent is the "ideal". It is what we seek in "meaning", as meaning is defined as what is "meant" by the author, and this is defined as the author's intention. The problem is that there is no such thing as "the author's intent". "Intent" is just a descriptive word which refers to some unknown, vague, generality, rather than a particular "object". We can formulate simple examples of an "object", as a goal, like Wittgenstein does with "slab" and "block", etc. If my intent, object, or goal is for you to bring me a slab, I will say "slab", and this expression represents a very specific, even particular goal (object), if it is a particular slab that I want. But these are very simplistic examples, which lend themselves well to simple fiction writing where the goal of the author is to create an imaginary scenario in the reader's mind. That's a very simple goal or object, which is easily determined as the objective of the fiction writer.

    But when we get to philosophy, the intent of the author is not exposed in this way. This is because the intent of the author of philosophy, the author's goal, or objective, is often actually unknown to the author. We can express it in general terms like the desire for truth, or knowledge, or an approach to the unknown. But notice that since it is just a general "unknown" which the author is describing, or directing us toward, there can be no particular object which is being described by that author, so the intent remains veiled. This is the subjectivity of the author.

    Notice the two forms of subjectivity, author and interpreter, and how they establish a relationship between "the object" in one sense as the goal or intent, and "the object" in the other sense as the physical piece of writing. Subjectivity of the interpreter is the veiled, unknown intentions of the interpreter, which influence the interpretation regardless of efforts to remove them; the interpreter cannot proceed without personal intention, and this will always influence the interpretation as subjectivity. Subjectivity of the author is the veiled, unknown intentions of the author, which influence the author's writings regardless of efforts made by the author to know, understand, and be true to one's own intentions; there are unknown aspects of one's own intentions (motivating forces) which cannot be apprehended despite all efforts of introspection.

    Fair enough. But the catch is "how to apply that same intent today". That means interpretation in a context the author(s) didn't know about. There's a narrow line there between divining the intent of the author and speculating.Ludwig V

    The issue, I believe, is that it is all speculation. There is no science of "divining the intent of the author". So the art of interpreting can go in two very distinct directions. Remember what I said about the madcap interpretation, that parts are intelligible and insightful. We can consider the work of the author in the same general way, as parts. We can focus on distinct parts which seem to have very clear and distinct intention (meaning), and bring those forward in the interpretation, and have as the goal of interpretation a very "objective" interpretation. But this would ignore all the author's subjectivity. Or, we can focus on the aspects where the intent of the author is not clear at all, because the author was not truly aware of one's own intent. This allows the intent of the interpreter to represent the intent of the author in various different ways, and the goal here is a subjective interpretation. Then we have many options in between these two extremes.

    There's a notion of objective meaning at work there which philosophy would find troublesome, but nonetheless, lawyers seem to be able to work with it, and if meaning is use, that validates the principle, at least in the context of the law.Ludwig V

    I don't see how "meaning is use" validates that principle. The word "use" implies a user, and the user of the words is the author. If meaning is use, then we must look for the intent of the author to see how the author was intending the words to be used. Words are tools, and tools have no general "use", as use is a feature of the particular instance where the tool is put toward a specific purpose.
  • Atheist Dogma.
    That's true. I'm happy to accept that a madcap interpretation is an interpretation, but only in the sense that a broken watch is a watch.Ludwig V

    I wouldn't even accept this analogy. A broken watch does not do what it is supposed to do, keep time; a madcap interpretation does what it is supposed to do, provide an understanding of meaning. The madcap interpretation is just different, in the sense of being outside the norm, so to make the analogy good, the watch would not be broken, but giving you the wrong time. In theory there would be a way to "translate" the interpretation, like relativity translates different ways of keeping time, because as a translation it must be ordered in some way and not completely random.

    You are quite right, of course. But fiction is a particular context. Even so, Aristotle says that a story must be plausible. I think that's too restrictive, yet there's something in it.Ludwig V

    That depends on what you mean by "plausible". If it makes sense, it's plausible isn't it? But writing goes far beyond that, as lyricists in music and poetry for example string together disassociated ideas, to make a strange story. When interpreting a piece of writing we tend to look for consistency, and adhere to consistency as a principle, while overlooking the fact that the author could very easily stray from consistency even intentionally. So in philosophy for example, if we read something, and we cannot find a way to make it plausible, there is just too much inconsistency or nonsense, then we simply reject the material as unacceptable.

    But even in these cases of rejecting the whole because it is incoherent as a whole, certain parts of the writing may be very insightful and illuminating. So the writing is rejected as a whole, but certain parts are very intelligible. And this can be reflected in the "madcap" interpretation. The interpretation itself is an expression, a piece of writing, and it is incoherent as a whole, but certain parts may be very intelligible. This is because the madcap interpreter releases the need for coherency, and this is actually very important because coherency is context dependent. We learn in school to think in certain ways. So when a modern person interprets an ancient writing, the person's ideas of coherency must be dismissed prior to proceeding, because the ancient people lived in a different environment of coherency. So the ancient person could very well be writing in a way which would appear incoherent to us today. Then the interpreter who tried to put things in coherent terms would be doing a faulty interpretation.

    Another example (legal in this case) based on ancient memories of "The West Wing". Suppose a country has a constitution written more than 200 years ago. There is a provision that each geographical division of the country should send to the legislative body an number of representatives proportionate to its population. It is taken for granted that women do not count. It is further provided that slaves shall count as a fraction of a person (say 2/5th). Fast forward to the present. It is clear, isn't it, that something must be done. No-one is a slave any more, so perhaps that provision can be simply ignored. The provision about women was so obvious that it is not even mentioned, so perhaps one could simply include women. But it would be safer to delete the slave clause and add a definition of "person". You might not count that as re-interpretation, but it surely demonstrates that it is sometimes necessary to take account of the contemporary context as well as the historical context.Ludwig V

    I don't think this is a good example. This is not a matter of re-interpretation, it is a matter of rewriting the rule to better reflect modern values. What you seem to be saying, is that the rule as written is not applicable today, because of societal changes, so it needs to be rewritten.

    A better example probably is the ongoing discussion around the second amendment in the US constitution, the right to bear arms. A common subject for debate is the intent of that amendment, and how that intent ought to apply in the modern day. It might seem sort of irrelevant to focus on the ancient intent, because we could simply change the wording if needed, as you suggest. But this is exactly where the problem lies: we look to these ancient laws as "authority", and so we make sure that it's not easy to change them. Therefore instead of looking to change them it just becomes a question of the intent behind them, and how to apply that same intent today. Once the intent is established it can be applied to the modern society. But to allow the condition of the modern society to influence how one interprets the intent of the authors would be a mistaken interpretation (a subjective one, because one's personal position would influence the interpretation). The objective interpretation would be to look solely for the authors' intent, and not allow one's own intent to influence the interpretation.
  • Atheist Dogma.
    I was trying to say that not every madcap idea counts as an interpretation. There are limits. The text is flexible, but only up to a point.Ludwig V

    Why must there be such limits? A madcap interpretation is still an interpretation. On what basis can you argue that, just because the interpretation is so radically different from your interpretation, and from the norm, it is therefore not an interpretation? Suppose for example that a person hallucinates and sees a tree as a monster. That is the person's "interpretation". The thing we perceive as a tree is perceived as a monster. We can argue that the interpretation is wrong because it's not consistent with the norm, but we have no basis for the argument that it's not an interpretation.

    The readers' environment is another one, and of course that may break down into a number of sub-contexts; it may overlap, to a greater or lesser extent with the author's environment.Ludwig V

    I do not agree that the reader's environment ought to be allowed to enter as a factor in the interpretation. One must attempt to completely place oneself into the author's position, the context of the writer, to properly interpret, and this means negating one's own place. Of course this is impossible, in actuality, hence subjectivity enters the interpretation, but it ought to be held in principle because if it is not, then subjectivity is allowed into the interpretation, as a valid (your meaning of valid here) aspect. So, the reader's position, or environment, is not a valid consideration in interpretation. For example, when interpreting your post, I would not assume that you must be using "valid" in the way that I would want you to, and insist that my interpretation is correct when I impose my understanding of "valid" on your writing, in my interpretation of your writing. For these reasons I would say that when interpreting the true meaning of an author's work, one's own environment must not be allowed to be a contributing factor. Incidentally, this is very evident in fiction: one must allow the author to describe the environment, and the reader must allow oneself to be transported to that environment, leaving one's own. In school we start by learning fiction, and it's good practice.

    This is relevant because when the text is read in a different context different questions, issues, priorities may come up and lead to a need for interpretations that go way beyond anything the author could have meant or thought. But still, it is not the case that anything goes.Ludwig V

    This is the matter of subjectivity. It cannot be avoided. And this is simply the nature of language, interpretation is subjective. Further, there are two subjects, the writer and the reader, so subjectivity enters from both sides. Just like the reader must put oneself in the author's context to properly understand, the author must put oneself into the reader's context to be properly understood. Now, writing is not a one on one form of communication, but the author intends to be read by many, so the author's task is much more difficult.
  • Atheist Dogma.
    Yes. But I don't think that anything goes. "Valid" is the word I think of as correct.Ludwig V

    I think you have this backward. Validity is logic based, and relies on interpretation. Definition is essential to validity, as the fallacy of equivocation demonstrates. So interpretation is prior to logical proceeding, as prerequisite and necessary for it. Therefore interpretation cannot be judged in terms of valid and not valid, which are standards of logic, it must be judged by some other standards.

    Validity depends on context. By asking different questions, one sets a context.Ludwig V

    Based on what I said above, I think this is incorrect. Logic is designed to be context independent, that's the beauty of it. Definitions and such release it from the confines of context, and this is what gives it such a wide ranging applicability. Context serves to ground any premises which are not clearly defined. And of course, since we cannot have an infinite regress of words defining words, there will always be an appeal to context, ultimately, for complete understanding. But this has to do with the soundness of the logic, not the validity. So soundness may depend on context, but validity does not.

    I would propose a distinction between two forms of context, primary and secondary context. "Context" in the primary sense refers to the mind of the author, what the author was thinking about. "Context" in the secondary sense refers to the author's surroundings, one's environment. We must be careful not to conflate or confuse these two because this would leave us susceptible to deception. In general, we have access to the author's environmental conditions, to a large extent, through our sensory capacities, and we assume, to some extent, that the author's mind reflects that environment. But this is really a mistaken assumption, because the author's writing is an expression derived from one's intention, which is not necessarily a reflection of that person's environment. Therefore it is necessary to establish the author's mind, with its intention, as primary context, and allow that there is no necessary relation between this and the secondary context, the author's environment.
  • Existential Ontological Critique of Law

    It is as you say, a matter of human nature. We, as beings, are inclined to act, so it is in our nature to act. But the rule of government has two possible directions to take, either to encourage us to act, or to discourage us from acting. The latter is to go against human nature. But by enacting all sorts of boundaries, and threatening punishment to anyone who strays outside the boundaries, the government takes that direction of discouraging action. Instead, it ought to focus on defining what constitutes good in humane acts, and doing whatever it can to encourage such acts.
  • Existential Ontological Critique of Law
    I am suggesting that we begin replacement by first uplifting the honor, honesty, and dignity of our legislators, judges, prosecutors and police, via assisting them to become reflectively free, and, thus, to lead them upward unto understanding the true structure of the origination of human action; which act-origination has nothing to do with law.
    I have not fully envisioned a future. I expect that other intelligences, upon becoming reflectively free, may have some dynamite thoughts regarding future sociospheric possibilities.
    quintillus

    This is why "rule" works better by giving people guidance as to how to behave well, rather than telling them what not to do, and punishing them when they still do it. And so, The New Testament's "Love thy neighbour" (as positive direction) marks a vast improvement in the understanding of human nature, and human action in comparison to The Old Testament's "Ten Commandments" (as negative direction).
  • Atheist Dogma.

    Hmm, are these two synonymous, in the sense of exchangeable with each other in usage, "faith" and "confidence"? Or, does one have a broader range of usage than the other? I would say that "confidence" is often directed towards oneself, internally, as an attitude toward one's own actions, while "faith" is most often directed outward, as an attitude towards what is external to oneself.

    If that is the case, then how is it that the health of "the economy", which is an attribute of the community as a whole, can be dependent on an attitude which the individual has toward oneself? There is something missing here, a hidden premise or something like that, which links the attitude which the individual has toward oneself (confidence or lack of confidence) to the wealth of the community as a whole. "Confidence" is just as easily directed in competitive directions as it is directed in cooperative directions, so it could be destructive to the community. So it cannot be confidence alone which supports the economy, there is a missing ingredient. Therefore it's not only a loss of confidence which could make the economy go to hell; confidence maintained while that other ingredient is missing will also make the economy go to hell.
  • Atheist Dogma.
    What was at stake was whether a text could have meanings that were not intended, despite the writer having different, even incompatible, intentions - or rather, whether it is legitimate to attribute to the text meanings that the author did not intend.Ludwig V

    Intentional ambiguity is a common tool. In this case, what is intended is ambiguity, meaning that the author intends that multiple readers will produce a multitude of distinct interpretations, each interpretation suited to one's own purpose. It is useful because it allows the author to appeal to a wider audience. The various interpretations from the work may very well be incompatible with each other, but this does not mean that they are incompatible with what the author intended. The author intends that no particular meaning is the correct meaning, so it is only the attitude that my interpretation is the correct interpretation, or more precisely the belief that there is a correct interpretation, which is the incorrect interpretation.
  • Atheist Dogma.
    But are you really telling me you didn't know what you intended to write, that you just had some kind of vagae association, when you were writing it?Vera Mont

    The vague generality of intention, along with the uncertainties associated with the media, combined, produce the great mysteries of art.

    There is an experimental procedure which the artist can do, which demonstrates very clearly that what is produced is not necessarily what was intended. One can approach the canvas with no intent of painting anything in specific, and just start applying colours to it. There is of course, some degree of intent involved, but that is minimized to the point of allowing the nature of the medium (paint and canvas in this case, but it could be another form of art like music or rhyming) to dictate the outcome. This experiment demonstrates very clearly that it is possible for an author to not know what one intends to write, when it is written.

    The unintentional results of an intentional act are known as accidents. In the artistic world accidents are very important, and have great significance because they teach us about the unknown aspects, the mysteries, of the medium. So in the specific artform you are discussing, the medium is a form of communication, writing. There are many unknown aspects, and much mystery inhering within this medium, and this allows great possibility for accidents. And since writing is a form of communication and communication gets granted a high degree of significance, in general, this allows the accidents to have great importance.
  • The Naive Theory of Consciousness
    Your premise is wrong.apokrisis

    Which premise would that be? Do you disagree that organisms are generated, that they come into being, and they have a beginning?
  • The Naive Theory of Consciousness

    Did you read the rest of my post, and get to the "specific problem" with your "desire" theory, or did you just get stuck on the irrelevant, if not arbitrary, distinction between evolution and development?
  • The Naive Theory of Consciousness
    Evolution is one thing. Development is the other. Salthe covers this nicely.apokrisis

    That you think a distinction between evolution and development would solve the problem indicates that you haven't recognized the problem. To begin with, to evolve is what life does, it is essential to our nature. So if your theory of "desire" as a directing force within the microphysical aspects of living organisms, explains the reality of development, but cannot account for the reality of evolution, then it falls short of being an hypothesis which is consistent with the evidence.

    But the specific problem I was trying to bring to your attention is the issue of generation, the coming-to-be of living organisms. Consider the nature of reproduction if you will. When the seed, or embryo, is being developed, it is a part of the parent, so according to your hypothesis, it is being directed by the desire of the whole, which is the parent. We could say that this is the desire to produce another similar organism, and this desire drives the mechanisms which produce the seed.

    After the seed is separated from the parent and begins to grow on its own, as a separate individual, it is a distinct whole, yet it is still directed by the very same desire, the desire to produce a similar organism (similar to the parent). Now this desire, the desire to produce a similar organism, which directs the parts in their various activities clearly pre-exists the existence of the individual itself, this distinct whole, which is the growing seed.

    This is the nature of all forms of reproduction. The desire which directs the parts (if we are going to explain their activity in this way) always pre-exists the individual whole which is composed of those parts. The "desire" comes from the prior organism and is imparted to the new organism in the act of reproduction. That it is the same "desire" is evident from the fact that the very same type of organism as the parent is produced, and that same "desire" is responsible for the organism coming into existence as the specific type of organism which it is. Therefore we can conclude that this "desire" which you talk about must always pre-exist the organism itself (the organism being the whole), because it is the reason why the organism exists as the whole which it is, an organism. Do you comprehend the logic, and agree with this principle, that the "desire" you refer to must pre-exist the whole, as the cause of the whole being the type of thing which it is?
  • The Naive Theory of Consciousness
    Nope. Only reductionists think that way.apokrisis

    It's scientific knowledge, often referred to as "fact", commonly known as evolution. Complex organized structures have evolved from less complex microscopic structures. Therefore it is well known that the complex organized structure is posterior in time to the microscopic organized structure, and so cannot be the cause of the organization which exists within the microscopic organized structure. You can call science "reductionist thinking", in a derogatory way if you like, but that in no way proves the scientific knowledge (knowledge derived from empirical evidence) to be wrong; it's just a type of ad hom.
  • Atheist Dogma.
    From the context, I'm guessing that you think that's problematic. Depending what you mean by "justified", that's true. For example, one could argue that our practices, which define "rational" as well as "fact", themselves are not exempt from the challenge of justification, hopefully of a kind different from the justification that they define. The only alternative is some kind of foundationalism.Ludwig V

    The objectivity of fact only requires justification if one intends to maintain the separation between fact and value. A practice can be held up as evidence in an attempt to justify a fact as objective, but such a practice is only successful in relation to an end, or a variety of ends, and so the extent of the justification is limited to the extent that the end or variety of ends is justified.

    But if the objectivity of facts is in question, it follows, doesn't it, that the subjectivity of values is also in question. But the means to a given end is already subject to rational justification, so it is presumably "factual", if a conditional can be factual. So it all turns on the status of ends.Ludwig V

    The status of ends I covered in the next post after the one which you quoted, here: https://thephilosophyforum.com/discussion/comment/812893
    The means cannot be truly "factual" if this is supposed to mean objective, because the means are justified by the end, and the end is justified as being the means to a further end. So we get either an infinite regress or a subjective "ultimate end". This is explained in Aristotle's "Nicomachean Ethics" where he proposes "happiness" as the ultimate end.

    ... So it all turns on the status of ends.

    As a preliminary, I observe that individuals are what they are within a society, which develops the rational capacities they are born with and, in many ways, defines the world in which they will live and do their thinking and make their choices. I'm happy to agree there is no reason to assume that what we are taught is a consistent or complete system, either for facts or for values.

    There are four possibilities that I am aware of:-
    Ludwig V

    I can't quite apprehend the premises you use to come up with only four possibilities. If ends are truly subjective, merely personal preferences, then the possibilities appear to be endless. So the only way to reduce the multitude of possibilities into something more reasonable would be to somehow make ends/values objective. This is why I proposed that we start with the objective fact, the truth, that ends are subjective. This is a sort of objectivity by proxy, because it does not get to the objectivity of any particular end, to say that such an end is objective, but it produces the general objective premise, or true proposition (as true as a proposition can get, I would say) that all ends are subjective. If this general statement was not true, then an objective end could be produced which would disprove it, and we'd have our objective end. Until then we must accept the truth of the general proposition that all ends are subjective, as a working principle for our purpose.

    From this perspective we can construct a proper hierarchy of values. The fact/value separation is denied because supposed "fact" is always supported by, or justified by, pragmatic principles, which in the end become subjective. Now all proposed facts are reduced to values, ends, and we can consider their individual merits, and position them as related to other ends. As general philosophers, we might just want to understand how all the various ends relate to each other, but as moral philosophers we might question the general proposition, that all ends are subjective, and try to understand what could bring some form of objectivity into any end. This would involve a defining of "objective". Either way, we must understand that moral philosophy is the highest philosophy when all knowledge is related in a hierarchy of values, because moral philosophy is directed toward that task of understanding values.
  • The Naive Theory of Consciousness
    Holism and its downward causation should resolve your confusion. The whole shapes its parts in accord with its global desires. The parts reconstruct that whole by expressing that desire at the microphysical level of falling together rather than falling apart.apokrisis

    The problem is that the microphysical is known to be prior in time to the larger and more complex physical "whole", as simple life forms are prior in time to complex life forms. So it is impossible that downward causation from the complex whole can construct the simple parts which exist prior to the complex whole. Therefore the "desire" which shapes the simple parts must be prior to the physical parts, as well as prior to the physical whole.
  • Atheist Dogma.
    What I meant was the social situation in which it is the means that are susceptible to rationality, rather than the ends.Jamal

    I think there is a very good reason for this. Ends are only rational as means. This is the problem Aristotle addressed in his ethics. If we take any specific end, and ask why it is wanted, then to answer this question we go to a further end, because we ask for the sake of what. In the process of being comprehended as rational, the end simply becomes the means to a further end. This is why he sought something which would be in a sense self-sufficient, wanted only for the sake of itself, and not for a further end. So he proposed happiness as the end which puts an end to the chain of ends.

    Notice though, that this ultimate end is not susceptible to rationality, because it cannot be transformed by rationalization into the means for a further end, and this is what is required to make it rational. But what this means is that no means are really properly susceptible to rationality, because they are only grounded rationally by the end, which only gets grounded as the means to a further end, until we propose an ultimate end, which itself cannot be rationally grounded. So this social situation in which means are rational is a sort of illusion, because they are only rationalized relative to an end, and ends are never really rational except as the means to a further end.

    At the personal level, ends may remain paramount, but these tend to be seen as subjective, a matter of taste or whatever.Jamal

    So ends always end up being subjective, and objectivity here is just an illusion. Even if we could come up with something, like Aristotle's "happiness", which we think everyone ought to agree to, someone is bound to disagree and propose something other than happiness, something like flourishing, which is a concept of growth, and insist that growth is better than simple happiness which is more like basic subsistence. And the religious community might insist that there are objective ends, supported by God, but this runs into the Euthyphro problem. Then it becomes rather pointless to define the ends or goods in relation to God, when we need to understand what is good in relation to human existence, as we are human. Therefore the idea of objective ends, or objective goods really does not provide us any useful ethical principles, or even a starting point for moral philosophy.

    At the social level, political parties campaign on how best to run the economy, not on what kind of economy there should be—and there too, ends may remain paramount (winning elections for the party, profits for owners of capital) but the rationality of basing a society on the profit motive is not questioned, thus the ends here are unexamined.Jamal

    I think that at the social level the rationality which the society is based in is generally taken for granted. So for example we take it for granted that democracy is the best form of government. And if asked why you believe this, one would answer "because...". But the "..." tends to just get filled with whatever one likes about democracy, so it's really more of a personal preference than a rational justification.

    The problem of course is that, as explained above, ends can never really get rationally justified, so we kind of create an illusion for ourselves, delude ourselves into taking for granted that they are already justified. This is the illusion of objective ends. It's not literally self-deception, but we just educate the children to stay away from these sorts of questions, by pretending that we firmly know the answer so there's no need to question. I know democracy is the best because I learned that from the elders who knew it to be the best. The religious way is to pretend that God justifies the ends, and to train the children not to question this, so when they become adults it's taken for granted. So it's not even a real pretense, just a matter of taking for granted (as known) what is unjustified. The illusion is that since it is the convention it must be already justified. But justification is not necessary for a convention to be accepted.
  • Atheist Dogma.
    I don’t think they’re competing explanations. I’d say that the power/money ideologies build upon the fact/value separation, because the reduction of values to subjective preferences—this being the corollary of the triumphant objectivity of science and the profit-driven progress of technology—entails, through its removal of meaning from the social and natural whole, a norm of rational behaviour where the means are paramount, and the ends are the unexamined personal preferences conditioned by a socially stratified society, i.e., status, power, wealth.Jamal

    Let me see if I can unravel the mysteries of this brief but extremely complex piece of writing. What I see here is that you portray the fact/value separation as releasing value from the realm of fact, making values subjective rather than objective. So for example, religion would hold moral values as objectified by God (despite the Euthyphro problem), but the stated separation (apprehended as required by the Euthyphro problem) grounds values in the individual, therefore making them subjective. If we maintain objectivity as the defining feature of "fact", then we drive a wedge between fact and value.

    This places the ends (which in Platonic terms would be the goods, as what is desired) firmly within the individuals as inherent within, and intrinsic to the individuals. You characterize them as "unexamined personal preferences", but allow me to qualify this by saying that the ends have varying degrees of having been examined. We might find that people with a lot of ambition, will and determination, practise some degree of self-examination to form and maintain those types of goals you speak of, "status, power, wealth". For these people, with strong will and determination, the ends may remain paramount.

    On the other side, "the triumphant objectivity of science", "progress of technology", and the "removal of meaning from the social and natural whole" is accomplished by the very fact that "the means are paramount". By providing (i.e. providing the means) for the fulfillment of natural needs, wants, and desires of the people, the flock is satisfied, satiated, and very rulable. Only the relatively few who develop those higher goals through some degree of self-examination slip through the cracks of those provisions, because these personalized goals require strategy and specialized means.

    I believe, this puts "the norm of rational behaviour" in limbo. The reason why I say this is that "rational behaviour", meaning the behaviour of the rational mind in the act of thinking, is an activity of the individual subject. And, rational thinking in its natural state is intentionally directed, directed toward ends. However, the described situation, where "the means are paramount", as the norm, directs the thinking toward the means rather than toward the ends. The result is that "the norm" for rational behaviour is to direct the thinking toward the means rather than the ends. So the type of self-examination, described above, which seeks the true ends (we could say subjective ends are true ends, therefore objective), is outside the norm of rational behaviour, though it is really the natural state of rational behaviour. This leaves a discrepancy in "rational behaviour".

    Of course the ensuing issue is the matter of the objectivity of what is called "fact" in the first place. Maintaining that "fact" is objective while value is demonstrated as necessarily subjective, is what allows the wedge to be driven between fact and value in the first place. So to support this division, the objectivity of "fact" must be justified.
  • The Naive Theory of Consciousness
    Are these things that hard to understand?apokrisis

    No, not hard for me to understand at all; that all seems very evident to me. I think it's difficult to understand the wording, though, when we use words of human intention like "desire" to refer to such fundamental biological activities. "Desire" seems to be attributable to the whole in general use, but here you use it as if a tiny part of the organism possesses desire. Or, more precisely, you use it as if the parts are directed by desire.

    When we look at "desire" as an attribute of the whole, as what directs the tiny "ratchets" or switches, then what can we attribute this desire to in the coming into being of organic matter? Suppose that each tiny part of the living organism, when it comes into being, is directed in this sort of way, by a desire toward some end; where does this desire toward an end come from? We do not see it in inanimate objects; they possess no tiny ratchets directed by desire. So when the living organism came into existence, and its parts were directed by desire, where did this desire come from?
  • The Naive Theory of Consciousness
    Or even more meaningful as a mechanical device is the ratchet. A ratchet is a switch that embeds a direction. It channels the physics of the world in some desired fashion.apokrisis

    Wait a minute, how does "desire" enter this scenario? "Desired fashion" implies that the channel, or direction, is chosen. What do you think acts as the agent which does the choosing?
  • Atheist Dogma.
    This seemed to be further supported by the existence of another of the world’s most brutal and totalitarian regimes, one which was atheist and which engaged in the persecution of religion, namely Stalin's government of the Soviet Union.Jamal

    In relation to the op then, can you put your finger on the "dogma", or even the ideology, involved here which could motivate this sort of atheist politicism? Surely the issue is more complex than the "fact/value" distinction of the op. It appears to me that the proper subject matter would be better described as the power/money relation. The relation of fact over value does not seem to have the same motivating force as the relation of power over money. "Value" and "money" are comparable, which would mean that the dogma which motivates such an atheist movement is power-based rather than fact-based.

    It might be useful to consider Plato's description of the evolution/devolution of the state in "The Republic". He describes a specific order of descent, which corresponds with a distinct attitude of the individual. Each of the successive forms of government, in what he calls the corruption of government, is described in terms of the attitude of an individual, and some form of explanation is provided as to how one gives way to the next. The three principal levels of distinction are the divine (moral reason), the honourable (power), and the monetary (material goods, all sorts of chattel and property).
  • Atheist Dogma.
    Here's an example of atheist dogma. Einstein's relativity theory, by denying the possibility of an absolute present, also denies the possibility of God. The "I am" of God requires an objective present, or else what is now could also not be now, by the ambiguity of "now". Therefore relativity is atheist. And Einstein's relativity is the dogma of physics, hence "atheist dogma".
  • Atheist Dogma.
    I don't think it's possible to reasonably construe these statements otherwise, so I don't believe this is the result of a literal, fundamentalist interpretation which can be considered a reaction to "atheist dogma." It isn't necessary to be an atheist to maintain that such statements are the foundation for the intolerance which has characterized Christianity during the 20 centuries of its existence (which is also characteristic of other religions which make claim to being the one true faith).Ciceronianus

    Maybe you have this mixed up though. Jesus was anti-religion; he rebelled against the Jews. You must recognize that there was no Christianity at that time, so he was not promoting a religion called Christianity; he was simply rebelling against religion. So if and when he said "I am the truth", it was in an anti-religious context.

    I don't think it's possible to reasonably construe these statements otherwise, so I don't believe this is the result of a literal, fundamentalist interpretation which can be considered a reaction to "atheist dogma." It isn't necessary to be an atheist to maintain that such statements are the foundation for the intolerance which has characterized Christianity during the 20 centuries of its existence (which is also characteristic of other religions which make claim to being the one true faith).Ciceronianus

    Interpretation is everything in this context. Within the religion, it really doesn't matter at all what Jesus himself said; it only matters what those who came after him and constructed the religion said he said. But since Jesus himself spoke in an anti-religious context, it is important to understand what Jesus himself said rather than what the religious people said he said, because they are not speaking from an anti-religious context. So it's really not the statements made by Jesus which are the foundation of intolerance (unless we're talking about intolerance of religion, which is an equal form of intolerance); it is the statements of others which are. The most difficult thing about understanding the New Testament is discerning what Jesus actually said and did, when all that is provided is hearsay.
  • Atheist Dogma.
    1. Make a strong fact/value distinction, as per Hume.
    2. Establish the scientific method with truth as the only and unquestionable value.
    unenlightened

    The meaning of some words may change over time, and it could be that we have a shift in the principal significance of "truth" here. This is indicated by Ciceronianus' quotation:

    . "I am the way, the truth and the life. No one comes to the Father except through me."Ciceronianus

    Notice here that "truth" is represented as a way of life, a way of being, instead of as fact. This is the distinction we find today between the two basic definitions of "true". The primary definition today is 'fact, corresponding with reality', while the secondary and sub-definitions are 'genuine, honest, faithful'.

    So what is at issue is your primary premise, the "strong fact/value distinction". This distinction drives a wedge between the two definitions of "true" by associating "true" with "fact" and assuming that facts are independent of values.

    We can see the very same issue with the separation between moral "values" and quantitative, or mathematical, "values". It is often assumed, or simply taken for granted, that mathematical values are completely distinct from and unrelated to moral values, instead of being seen as two different members (types) of the same set (category), "values". Taking for granted that mathematical values are completely distinct from other values, like moral values, and are somehow objective while other values are subjective, hence categorically distinct, contributes to this delusional fact/value distinction.
  • A Case for Analytic Idealism

    I am very familiar with your sense of better and worse, so a statement of that sort was expected and taken as a compliment.
  • Why Monism?
    Back on the topic of monism - I'm convinced that the original monist systems were derived from 'the unitive vision' in, for example, Plotinus.Wayfarer

    Yes, I think there was a form of Neo-Platonism which denied the reality of matter, making it monist idealism, though I don't think Plotinus would quite fit that bill. But I think monism was prevalent in philosophy before this, Parmenides being a monist idealist (all is being), and Heraclitus a monist materialist (all is flux).

Metaphysician Undercover
