Comments

  • How could Jesus be abandoned?


    This was taken as referring to the fact that God (and God alone) is subsistent being (everything else being contingent and relying on God as its ground, even things like numbers, shapes, etc., which are not entirely intelligible in themselves, but only as part of the entire Logos). Psalm 139 is often interpreted in this way as well. It is God alone who most properly is, ipsum esse subsistens. Likewise, it is God, as universal ground and source of being, in whom "we live and move and have our being" (Acts 17:28).

    While such an interpretation is now sometimes presented as a "medieval scholastic innovation," generally by those with an anti-philosophical bent, or as particularly "Roman Catholic," one can find it in the earliest Christian commentaries on Exodus (e.g. Origen, the Cappadocians, etc.) and in earlier Jewish commentaries (e.g. Philo), and it is suggested more explicitly in some of the later Biblical literature included in the Septuagint.





    "Three hypostases, one (unknowable) essence." God's essence is not known, only the divine energies.
  • What does Quine mean by Inscrutability of Reference


    So insects existed prior to humans, and what makes them insects is whatever answers the question "how should we use the word insect?" Yet there is also nothing essential to insects? Is there anything else we can point to outside of contemporary language use, in virtue of which insects are actually insects and not just called so?


    Because if what makes something an insect is exhausted by "how the word insect is used," it seems that this explanation will either be vacuously circular, or unable to explain how insects were insects before language existed.

    Melville famously spends a good part of Moby-Dick arguing that whales are fish. And "fish" was used to describe whales for a long time. But clearly, while whales were whales before man, whales were not both fish and not-fish during this period. So word use seems inadequate to explain being.
  • What does Quine mean by Inscrutability of Reference


    So you do think insects existed prior to anyone deciding what counts as an insect? You seemed to be just objecting to that.

    If insects existed prior to humans deciding what counts as an insect, in virtue of what were they insects?
  • What does Quine mean by Inscrutability of Reference


    If your philosophy of language forces you to hem and haw and deflect away from questions like "did cockroaches not exist until humans decided to 'count' them as such?" then yes, that seems like a rather major defect. For one thing, when it comes to the discovery of new species or phenomena, causation seems to move in demonstrably the opposite direction. First the phenomenon is observed, then it is named. Presumably, it must exist before it can be observed as well.

    Essentialism isn't a problem; it's what prevents having to affirm things like: "North America didn't have a coastline until it was mapped," or: "insects didn't exist until they were named," or else having to deflect away from what should be fairly easy questions, or having to settle for "pretending" in questions of physical and biological science.

    What counts as a whole depends on what you are doing.

    Sure. When it comes to a consideration of the origins and evolution of language, and of animal communications more generally, it will not do to suppose/pretend that it is "equally true" that animals both did and didn't exist until people decided what would count as an animal.

    Either animals of different sorts existed prior to any human language community or they didn't, or the proposition that they did is somehow (bizarrely) not truth apt. That, or things can be both true and not true, depending on what is useful. Take your pick.

    I personally think "what is useful determines what is true," is a fairly disastrous way to do science and philosophy.
  • What does Quine mean by Inscrutability of Reference


    We can juxtapose two views: either the dog is a whole regardless of language, or it is a whole in virtue of language. Then we can pretend that the one must be true, at the expense of the other.

    Must we pretend? Do dogs not exist outside human linguistic frameworks?

    We can appeal to use, but this won't get us very far from the initial question. What is a key use of language? To refer to and describe things and processes!

    Plus, if we find some language more useful than other sorts, presumably this is not for "no reason at all," an uncaused brute fact, or else unanalyzable. There are reasons that different ways of "divvying up the world" are useful. There are causes that shape how language evolved and continues to evolve. One wouldn't explain the evolution of mammalian hearts simply by pointing out that hearts must be useful and then leaving it at that; why should language be any different?

    What is the most obvious reason that it might be useful to think of sheep as a thing, as an organic whole? Because they are such! If you're a farmer and you cut your sheep in half to have dinner, the rest of the sheep isn't going to fare too well. A sheep isn't like a rock where you break it in half and have two rocks, a sheep broken in half is a corpse.

    Presumably, it would also be useful to understand how language works. If language works, at least in part, by referring to entities whose existence is not dependent on language, then the two views you offered up aren't equal. One is wrong. And as with anything else in science, starting from premises that are false is unlikely to lead to useful theories.
  • St. Anselm's Proof: A Problem of Reference, Intentional Identity and Mutual Understanding (G. Klima)


    So let's stop for a moment so you can correct any errors of mine. The - my - argument is that given definitions 1 & 2, and Anselm's claims, then the God that in the understanding is that than which & etc. cannot exist in reality. Have at it!

    I don't think the substitution will do. We can have ideas about ens reale. For instance, we have the ideas "man," "fox," etc. Yet presumably these also exist outside the mind.

    So:

    An immediate consequence for Anselm's is that what is in his understanding is an idea, and thereby cannot exist in reality - is not any kind of thing at all.

    Wouldn't this also imply that if I have an idea about a sandwich I am going to make for lunch later it cannot later exist outside my head? Or if I have an idea of Alabama, it cannot exist outside the mind?
  • St. Anselm's Proof: A Problem of Reference, Intentional Identity and Mutual Understanding (G. Klima)


    So it is an error to claim that Kripke thinks a description is needed in order to fix speaker's reference.

    Ok, so why do you think:

    For Saul Kripke this indicates that speaker’s reference may diverge from semantic reference. In the Kripkean framework, however, it is also assumed that the speaker’s reference is to that which the speaker at least believes satisfies his description

    ...implies anything to the contrary?

    Klima assumes that 'conditions for being the semantic referent' must involve a descriptive element

    Where?

    Is this important? Perhaps not, perhaps it was just a slip on Klima's part. Or perhaps it indicates some reservations he might have towards Kripke's semantics.

    Yeah, probably not important, but unless I've missed it and you meant to quote a different part of the article, I think you are misreading "the conditions for being a semantic referent must include a descriptive element" into that sentence. It doesn't say anything about that; it says that when a speaker does use a description, the "speaker's reference" is that to which they think it applies.
  • What does Quine mean by Inscrutability of Reference


    I think the main thing I'd like to be able to distinguish is between when a person is talking about an object we are perceiving and when a person is talking about all the things we perceive when perceiving an object -- I suspect that we do not need the notion of "the sign" to do this, but if we are speaking in terms of signs then I want to distinguish between the two references because I'm contending that language is not an object in the world like the other objects.

    That's a very interesting question. Are the two different? Clearly, on a common sense view, our vision of a tree is not identical with it, but in some philosophies there is just sense data and we are just naming similar bundles of sense data as suits our needs.

    Or even on a realist view: "are things anything other than the sum total of their (fundamental) observable properties?" i.e. "bundle theories." Most critics of the bundle view argue "yes," there is a substratum or haecceity to things to which properties "attach." If this seems odd (I think it does), it's because a great many difficult problems result if one denies any substratum and makes things nothing but bundles of universals or tropes.

    I feel like this is an area where metaphysics and the philosophy of perception have to come prior to the philosophy of language. How can you solve "how do we refer to things?" if we are not sure whether there are things to refer to, or whether we can perceive them?

    Tripartite semiotics were developed on the assumption that the senses are the means by which we know things. On a representationalist view, the senses are instead what we know, i.e. "mental representations." I don't think tripartite semiotics has to imply a rejection of representationalism, but they often go together. The sign relation is said to be irreducibly triadic; the sign is what joins the object and the interpretant (in a "nuptial union" of sorts). The senses (imperfectly to be sure) communicate the actuality of things. The type of causation specific to signs is their capacity to make us think one thing instead of any other, so there is a sort of "in-forming" caused by the transfer of information in perception.

    Hence, on this view, we would _normally_ be referring to _the things_ we perceive. I think it's fair to say this is normally how linguistic convention is popularly understood. "Dog" is taken to mean dog, not "our perceptions associated with 'dog.'" However, it is certainly possible to refer to our perceptions instead of the object of perception. It just requires extra specification because it isn't the way people normally communicate. So, the clause: "my perception of dogs," would work fine for signifying this difference (for most interpretants, viz. competent English speakers).

    I am aware of philosophies that would deny this though. They would say "dog" really just means "our perceptions associated with dog," and that popular understandings to the contrary are just confusion. Likewise, there are those who would argue that any dog is just a heap of properties (or perceptions), and we could further subdivide this into those who think the properties are universals (realists) and those who think they are tropes (trope nominalists), or those who think they are just "useful fictions," etc. etc.



    Tim apparently thinks that there is at most one correct way in which the world can be divvied up

    I would put it this way: "not all ways are equally correct." For example, I claim that dogs and trout exist as discrete things, organic wholes, in the world. Their existence is not a merely linguistic fact; it is not dependent on linguistic conventions. By contrast, non-continuous trout halves and fox halves combined into "fouts" can certainly be named as "objects," but they do not have the same ontological status as proper wholes, such as trout and foxes. Do you disagree with that?

    One can allow that there might be many ways to be right without having to agree that all ways must be equally correct. If all ways to "divvy up the world" are equally correct, then "all ways of divvying up language," something in the world, must also be equally correct. But then there would be no point in doing philosophy of language, for one could never be wrong.

    Likewise, I don't think propositions about the physical world can be both true and false, at least not without equivocation or qualification. However, I don't think rejecting contradictions is a sort of fundamentalism. For one thing, people can certainly be fundamentalists in the opposite direction.
  • What does Quine mean by Inscrutability of Reference


    Take a look at Apustimelogist's latest post about evolution. That's basically the old paradox: if you remove a grain of sand from a heap and keep going, when does the heap stop being a heap? In other words, when do you need a new word? Chesterton, in your quote, doesn't seem to like considering grains of sand in a heap, if that makes sense.

    Chesterton is a gifted rhetorician, but he is a far better critic than positive theorist. I think his point is this:

    If the sorites paradox, the problem of the many, problems of ordinary objects, etc. lead us to think that there are neither grains of sand nor heaps, but just one heap—a heap composed of nothing in particular—then our reasoning has gone off the rails. Indeed, arguably it has become self-refuting. If there are no such things as cows or pines, then it also seems there can be no such things as words or meaning; but then claims along the lines of "there are no such things as cows" are completely vacuous. So, one has to settle somewhere above the horizon of an all-encompassing eliminativism. Otherwise, we are rewinding the tape back to Parmenides or Heraclitus.

    But, per my post above, I think we can do a bit better than that. I think, for instance, that one can resolve the difficulty of feeling one needs to specify "the exact moment the first man existed" in order to distinguish men as a species by realizing that change does not occur in "exact moments" but across a temporal series. And the same is true of physical being. Beings are always changing, so one does not differentiate them through a universal that acts as a sort of static database filter, a metaphysical SQL query or something of that nature.
  • What does Quine mean by Inscrutability of Reference


    If you consider every single mammalian individual that ever existed, you will not be able to identify discrete boundaries between the concept of dog and not-dog. You may not even be able to agree on the criteria. Again, I am not considering potential, possible, counterfactual examples. I am considering all individuals that have ever existed on earth. I can't refer to most of these specific individuals, but I know for a fact that they existed. Sure, they don't exist now... and that is like a frame of reference on which the statistical structure of what is being talked about is different if we change the reference frame, change the scale, change the inclusion of individuals, genetic structures that have actually existed.

    Well, the underlined part is pretty important there. It depends on what you mean by "discrete boundaries." If this amounts to a demand for "metaphysical superglue" between word (or concept) and thing, discussed pages earlier, then this demand cannot be met. Yet it doesn't need to be met to imply that a dog is a distinct type of thing, that dogs are not men, or cats, or trees, or that the distinction between dogs and men is not just a cultural contrivance, or even ambiguous.

    I am also not sure about the factual claim you're making. Do numerous different species so similar that we could not distinguish them exist for every existent species? That is not what the fossil record suggests for man, to take just one example. There have not been "very many species indistinguishable from man" existing throughout the Earth's history. There has been, on contemporary accounts, just the one. And this certainly wouldn't be true for domestic animals either. Cows and chickens didn't exist prior to man, "back in the mists of history," or if they did, no evidence of them remains.

    Unless you are merely speaking of the transition from wolf to dog, in which case what of it? Yes, domestication is not a binary. Yet the aurochs is extinct, the cow is not. More to the point, a stegosaurus is not a dog, an oak is not a dog, a rock is not a dog. These are quite discrete distinctions between dog and not-dog. They just aren't based on the "metaphysical superglue" discussed earlier.

    All organisms on earth share a common ancestor; it is surely the case that if you trace the changes of all of your ancestors, generation by generation, the changes in genetics will be tiny every time in the context of all of the genetic variation that has ever existed. If, from your earliest ancestor to you now, your lineage has gone through all of the different stereotypical biological kinds - we at least know apes, mammal, reptile, fish, I believe - there are absolutely going to be no discrete boundaries along the way. It's more-or-less a continuous path of infinitesimal change.

    Yes, and this is indeed a problem for what I like to call "Excel spreadsheet metaphysics." On this view, what makes something what it is has to be defined in terms of something akin to a string an XLOOKUP function could match. Maybe this is some "bundle of properties," or perhaps it is some sequence of genes. And so, for biological species to exist at all, there must be some sort of code (properties, genes, etc.) that uniquely specifies each one, such that we fill in "dog" in one cell and, through a series of lookups, bring back an array with all the dogs (past, present, and future) identified. Each dog is no doubt different, so the abstraction then has to function as some sort of search term.

    As I think we both agree, such a "lookup" variable cannot exist. But if the argument is something like:

    P1: For any discrete class/category of things to exist (e.g. dogs), a unique identifying "code" must exist to uniquely specify it.
    P2: Such a code cannot exist.
    C: Therefore, there are no different types (dog, oak, etc.) of things.

    Then the absurd conclusion should lead us to question our premises. You could make the same sort of argument re life, both vis-à-vis "when life began on Earth" and "when any individual organism dies." Where is the exact, discrete moment at which any individual dog dies? If we cannot find it, shall we conclude that either no dogs ever die, or that none have ever lived? Or perhaps that "life" and "being a dog" are mere cultural or mental constructs, ens rationis and not ens reale?

    My take is that these problems are the result of inappropriate analysis. Change cannot occur "in no time at all." Change occurs over some interval, as does all experience of change (or of anything). And we can indeed distinguish intervals over which any individual dog clearly and unambiguously changes from living to dead. Likewise, there are intervals over which some biological species clearly and unambiguously emerge from others.
  • I Refute it Thus!


    I don't know enough about Berkeley to know his influences (I read him pretty much blind), but this actually makes a lot of sense if one looks at his philosophy as essentially recapitulating the "classical metaphysical tradition,"* just through a sort of bizarro-world, fun-house-mirror setting of modernity.

    IMHO though, it ends up looking terribly deflated.

    *The term "classical tradition" often gets employed to name the wide, but surprisingly unified blend of metaphysics that dominated from late-antiquity to the late medieval period in Pagan, Jewish, Islamic, and Christian thought (broadly a synthesis of Plato, Aristotle, Neo-Platonism, and Stoicism, with different elements emphasized).
  • What does Quine mean by Inscrutability of Reference


    I would also argue the possibility though that "concrete characteristics" are contingent on how the world happens to be, but if you look at how the world could be otherwise, then it doesn't seem so clear.

    Indeed, my cat surely could have been eaten by a dog, and maybe by now it would compose parts of a dog, dog feces, my lawn, etc. But if it had been, it would certainly no longer be a cat.

    And it seems to me that the way we extract structure from the world depends on a kind of reference frame to which that structure is optimal, but may not be so in another (similar to how different descriptions become inappropriate when we move to different scales of observation). I think it's very difficult to do anything with the carbon example without kind of going into silly speculative metaphysics and notions of unconceived alternatives, which may be meaninglessly intangible. But with regard to things like lions and oaks, when you just expand the temporal horizons of the world we consider, the concrete characterization may no longer exist, as you have to consider the gradual changes in populations due to evolution over a long period. And here, the biological ambiguities of defining things like species may become more relevant. I think animals are a very good example since they clearly show that our ability to recognize different animals in an easy fashion is contingent on the fact that a lot of the diversity, variety, continuity between different animals is not observable to us, even though it clearly did exist if we consider our entire evolutionary history. Someone more radical might then want to argue that this kind of example should be seen as a general thing that applies to all things that exist when you consider the great diversity, variety, continuity in possible worlds. Things always could have been otherwise, so that the boundaries or transition structures we tend to use to identify, distinguish or label things no longer seem to be as optimal or informative.

    :up: You raise an excellent point. There is a tremendous multiplicity and diversity, and I'd add that a lot of it is quite observable. Every dog is different, and every person—each snowflake as well as each fingerprint. My copy of the Metaphysics has different dog-ears than my professor's, different coffee stains, different places where the ink didn't quite come off the press correctly. And the same person or dog is also different from moment to moment, year to year, sometimes dramatically so.

    However, words generally try to focus on the actual, not the potential. The act of being a dog is what stays the same in all dogs. We could well imagine some sort of dog, bee, elephant fusion (horrific) and ask: "when does it stop being a dog and become a monster?" Yet no such animal actually exists; it is ens rationis, a being of thought. Language evolves through our interactions with actual beings, so we should only expect that our words will tend to indicate the beings we actually find around us. Language evolution isn't arbitrary after all.

    As G.K. Chesterton puts it:


    Evolution is a good example of that modern intelligence which, if it destroys anything, destroys itself. Evolution is either an innocent scientific description of how certain earthly things came about; or, if it is anything more than this, it is an attack upon thought itself. If evolution destroys anything, it does not destroy religion but rationalism. If evolution simply means that a positive thing called an ape turned very slowly into a positive thing called a man, then it is stingless for the most orthodox; for a personal God might just as well do things slowly as quickly, especially if, like the Christian God, he were outside time. But if it means anything more, it means that there is no such thing as an ape to change, and no such thing as a man for him to change into. It means that there is no such thing as a thing. At best, there is only one thing, and that is a flux of everything and anything. This is an attack not upon the faith, but upon the mind; you cannot think if there are no things to think about. You cannot think if you are not separate from the subject of thought. Descartes said, "I think; therefore I am." The philosophic evolutionist reverses and negatives the epigram. He says, "I am not; therefore I cannot think."

    Then there is the opposite attack on thought: that urged by Mr. H.G.Wells when he insists that every separate thing is "unique," and there are no categories at all. This also is merely destructive. Thinking means connecting things, and stops if they cannot be connected. It need hardly be said that this scepticism forbidding thought necessarily forbids speech; a man cannot open his mouth without contradicting it. Thus when Mr. Wells says (as he did somewhere), "All chairs are quite different," he utters not merely a misstatement, but a contradiction in terms. If all chairs were quite different, you could not call them "all chairs."

    But if the point is that nominalism and the expulsion of quiddities (of any consideration of phenomenology or the phenomenological presence and whatness of things) seem to lead towards an incoherent (and ultimately arbitrary) account of language and the world, I won't object. As Joshua Hochschild puts it:

    In short, the appeal to forms or natures does not just help account for the possibility of trustworthy access to facts, it makes possible a notion of wisdom, traditionally conceived as an ordering grasp of reality. Preoccupied with overcoming Cartesian skepticism, it often seems as if philosophy’s highest aspiration is merely to secure some veridical cognitive events. Rarely sought is a more robust goal: an authoritative and life-altering wisdom. Notice: even if contemporary philosophers came to a consensus about how to overcome Cartesian doubt and secure certainty, it is not clear that this would do anything to repair the fragmentation and democratization of the disciplines, or to make it more plausible that there could be an ordered hierarchy of sciences, with a highest science, acknowledged as queen of the rest—whether we call it first philosophy, or metaphysics, or wisdom...

    So Richard Weaver was wrong. Or rather, Richard Weaver was right, but for the wrong reasons. He correctly saw that Ockham’s logical innovation was “a crucial event in the history of Western culture… issue[ing] now in modern decadence.” But Ockham’s innovation was not so straightforward a move as denying that universals exist. Rather, it was a subtle, seemingly discrete, but ultimately much more insidious decision to revise an account of mind and language by refusing to include intelligible natures and formal causality, the conceptual lynchpin of the entire classical and medieval heritage. The fact that this loss remains so hard for us to see and to accurately explain is itself evidence of how momentous it is, and how much work of recovery we have yet to do.

    What’s Wrong with Ockham? Reassessing the Role of Nominalism in the Dissolution of the West

    https://www.academia.edu/36162636/What_s_Wrong_with_Ockham_Reassessing_the_Role_of_Nominalism_in_the_Dissolution_of_the_West
  • What does Quine mean by Inscrutability of Reference


    Well, "cultural" would tend to imply a diffuse, collective project, right? But surely a man stranded on a desert island can come to recognize new species of flora and fauna there, and abstract their properties from concrete particulars, or even come to name them, all in isolation.

    If you're living in a society, its culture (or subculture) will influence how you abstract.

    To some degree, yes. Yet disparate cultures only vary so much, which makes sense because the neurological and environmental underpinnings of our power of abstraction do not vary much by cultural context. For example, there are potentially infinite "objects" to identify in the world, but even cultures quite isolated from one another sync up pretty well in which ones they choose to identify, particularly when it comes to concrete entities (e.g. animal and plant species).

    (It will also influence how those organisms will act, which is another, more indirect, source of influence on how you abstract.)

    The direction of influence will be bi-directional, yes, but not all facts are institutional facts. So, if cats are worshipped, and not driven off when they try to get into buildings, then the cats will learn to act differently. Domestication is an extreme example. Yet this seems like it will tend to be a fairly distal influence. The way a cockroach or a tulip behaves is only going to be influenced by the surrounding human culture so much, and at any rate, the cockroach and the tulip existed prior to any human culture and their preexisting properties have shaped how any culture comes to interact with them in the first place.

    Domestication, for instance, has much to do with the pre-existing properties of domesticated animals' ancestors, and those animals' behaviors upon encountering humanity. So, bidirectional yes, but you can have bidirectional influence where one direction is primary. A dog will never learn to talk or drive a car, no matter how we treat him. Is this fact merely institutional? People don't mate pigs to goats, regardless of culture, because it doesn't work; whereas they will mate horses to donkeys to get mules. But the fact that a pig and a goat cannot produce offspring is not an institutional fact, it is not a product of collective recognition. Rather, there is collective recognition of this fact because it is true that one cannot breed pigs to goats.

    For another example, take hypotheses for why gold became a valuable medium of exchange in disparate cultures. It is scarce enough to be such a medium, while its properties are also very hard to counterfeit, giving it the cryptological features necessary for any good medium of exchange. Pine needles, at least in much of the world, would make a terrible medium of exchange because they are everywhere, whereas something too rare also will not do, since there won't be enough around to trade in. Institutional facts are parasitic on facts that do not obtain due to collective recognition.
  • St. Anselm's Proof: A Problem of Reference, Intentional Identity and Mutual Understanding (G. Klima)


    Here is Klima's "offending" passage.

    For Saul Kripke this indicates that speaker’s reference may diverge from semantic reference. In the Kripkean framework, however, it is also assumed that the speaker’s reference is to that which the speaker at least believes satisfies his description


    Here is the very article you are citing:

    So, we may tentatively define the speaker’s referent of a designator to be that object which the speaker wishes to talk about, on a given occasion, and believes fulfills the conditions for being the semantic referent of the designator. He uses the designator with the intention of making an assertion about the object in question (which may not really be the semantic referent, if the speaker’s belief that it fulfills the appropriate semantic conditions is in error). The speaker’s referent is the thing the speaker referred to by the designator, though it may not be the referent of the designator, in his idiolect. In the example above, Jones, the man named by the name, is the semantic referent. Smith is the speaker’s referent, the correct answer to the question, “To whom were you referring?”22


    The semantic referent of a designator is the referent determined in virtue of the conventions of the language, and the speaker's referent of a designator is the object which the speaker wishes to speak of using the designator (see page 263-264).

    Here, you seem to be confusing speaker's reference and semantic reference through the added complexity of the fact that people obviously can be mistaken in their beliefs when they make references. I can only make sense of the addendum here as somehow being counter to what Klima has said if the two are being confused again.

    Who is the sentence "He did not write 'Naming and Necessity'" about?

    From the perspective of convention, approached in the abstract as strictly "semantic meaning," it would refer to any male who didn't write something called "Naming and Necessity." But this can be modified or superseded in any particular context.



    Yes, and Boethius is generally seen as later offering the standard argument for why God must exist at "no time." It was already seen that to exist in just one place was to be limited. Further, God's existing in any one place would run counter to the idea that God is the continuing ground for all being, that "in which we live and move and have our being" (St. Luke, Acts 17)—God as "within everything, but contained in nothing" (St. Augustine, Confessions I). And this generally goes along with the idea that God alone is subsistent being, the ground and first principle for all creation (Exodus 3 and elsewhere).

    St. Augustine points out that we can remember what we did in the past. Yet this does not somehow limit our freedom at the time of our choosing, even though we cannot change what we chose. Likewise, God recalling all of creation history from outside time does not affect the freedom of creatures in time. Boethius's decisive innovation was to make it clear that being located at one moment in time is as limiting as being located in one place. To be at just one moment of time is to be separated from oneself, and not to fully possess all of oneself. God was already thought to be most truly One, so God's existence in time also runs into the problem of dividing God from Himself.

    Dante has my favorite "spatial" illustration of this:

    The point, as I have said, is that that home (the Empyrean) is nowhere at all. It does not exist in space or time; thus neither does the spatiotemporal world it “contains.” The Empyrean is the subject of all experience, it is what does the experiencing. As pure awareness or conscious being, its relation to creation, that is, to everything that can be described or talked about, may be metaphorically conceived in one of two ways: It may be imagined as an infinite reality containing the entire universe of every possible object of experience (this cosmological picture is the framework of the Paradiso) or it may be conceived as a point with no extension in either space or time, which projects the world of space and time around itself, as a light paints a halo onto mist. In the Primo Mobile, the ninth sphere, which is the nexus between the Empyrean and the world of multiplicity, between the subject of experience and every possible object of experience, Dante takes both these tacks.

    Christian Moevs - The Metaphysics of Dante's Comedy - Introduction: Non-Duality and Self-Knowledge - pg. 6

    The physical world is Satanocentric, having Lucifer at its absolute center, with all corporeal beings tending towards that center of gravity to the extent that they are material. However, matter can also be seen as the darkness that appears at the furthest fringes from the Empyrean's light, beyond which is nothingness. The entire narrative takes place in a spiral (first down to the center of the universe, then outward), with a sort of fractal recurrence (the same themes show up taken from different angles in the same mathematical order), but at the very end the entire picture is inverted into a spiral inwards: "The Kingdom of God is within you" (Luke 17:21); "You were more inward to me than my most inward part and higher than my highest" (Confessions 3.6.11).



    Klima, Plantinga, etc. provide us with clear arguments and do discuss the distinctions between ens reale and ens rationis or possible and necessary respectively. However, these are also distinctions made throughout philosophy, and all the time in everyday language. An objection that one does not understand these terms says more about the objector than the argument. At any rate, if one was truly mystified by them, it's unclear how one could even understand what the argument was claiming to demonstrate.

    I have already noted some grounds on which I think the opening premise might be challenged. However, not all challenges are equal. One could object to Cantor's distinction between different sizes of infinity. People do indeed raise cogent objections. However, not all objections are cogent. For example, simply stating: "Infinite means nothing greater, hence what is infinite cannot involve varying 'sizes,' that's just what infinite means, and it is incoherent and illogical to suggest otherwise," is a weak objection.

    So, perhaps what Klima points out is that even the process of objecting to the argument draws an objector towards the thought object Anselm has in mind, just as a mathematician might somehow disagree with Cantor's diagonal argument in some respect, but they won't be able to do so properly without first coming to understand it as Cantor does.
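    As an aside, the diagonal construction itself is small enough to sketch in code. This is only an illustrative toy (the real argument concerns an infinite enumeration of infinite sequences): given any finite list of binary sequences, flipping the i-th digit of the i-th sequence yields a sequence that the list cannot contain.

    ```python
    def diagonal_complement(rows):
        """Flip the i-th digit of the i-th sequence; the result differs
        from every listed sequence in at least one position."""
        return [1 - rows[i][i] for i in range(len(rows))]

    # A toy "enumeration" of binary sequences.
    listed = [
        [0, 0, 0],
        [1, 1, 1],
        [0, 1, 0],
    ]

    d = diagonal_complement(listed)
    # d differs from row i at index i, so it appears nowhere in the list.
    assert all(d != row for row in listed)
    ```

    One has to work through the construction to see why no enumeration can escape it, which is the point of the analogy: objecting competently requires first grasping the object of the argument.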

    This creates a difficulty for the argument though, on two fronts. The first is, as I noted previously, that it seems that the conclusion will be known better than the premises (at least for the faithful), and that even the faithful should hope that they know the conclusion better than the premises. St. Thomas has an entire chapter in the Summa Contra Gentiles titled something like "Why Man's Ultimate Happiness Is Not to Be Found in the Knowledge of God Had Through Demonstration," for instance.

    Second, that if it takes a trip through millennia of thought on the unlimited and absolute to decide the issue, the premise is, while perhaps prima facie plausible for many, clearly not without its difficulties. However, I think Anselm's intent was to have the conclusion be fairly obvious.

    So, for instance, if one needs to go into the Doctrine of Transcendentals to explain the relationship between "greatness" on the one hand, and existence, unity, and goodness on the other, one can hardly claim the objector is a "fool," because such issues have always been considered (in Anselm's time as well) extremely difficult and beyond the aptitudes of many (a point at least as old as Plato).
  • What does Quine mean by Inscrutability of Reference


    Count Timothy von Icarus I'm not seeing how this addresses my post. I do not see where your diagrams take into consideration the fact of language as social phenomena, as the interaction of multiple people, doing things with words

    Language is not the only case of signification in the world. Nor is it entirely sui generis. For example, most mammals come fairly well equipped to make threat displays and signal aggression to one another. This is a communication of intentions accomplished without social conventions. Likewise, smoke signifies fire without any intentions involved.

    So the basic signification relation shouldn't include reference to social conventions. It would be inadequate if it did (or we would have to suppose that human language does not signify in the way that all other animal communications do, which doesn't seem plausible for many reasons). Nor is conventional or stipulated signification sufficient for language. If it was, then we would have to allow that dogs and chimps are language users, since they can clearly learn to respond to stipulated signs and conventions.

    The advantage of the semiotic and information theoretic frameworks is that they can explain disparate forms of signification, both natural and conventional.



    An excellent post. I would ask though:

    then I haven't changed the syntax at all, but I've certainly introduced a new word. Since I just mention the sentence as an example, and I don't actually say anything about animals and mats, I'm not referring to real life set of affairs. I am, though, referring to certain common cultural abstractions: "cat", "dog".

    are cats and dogs best thought of as "cultural abstractions?" Or are they just abstractions of a certain type of organism?

    I ask because some of the things we can reference certainly seem to be "nothing but" cultural creations. Something like "communism," "neoplatonism," or "French" might be a good example here. And with these sorts of things, it is common to run into "no true Scotsman" type disagreements, e.g. "how should true liberalism be defined?" But you don't tend to get the same sort of disagreements re lions, oaks, or carbon. You know: "no true carbon binds to just one oxygen ion!"

    I think the distinction is important in considering how language might have emerged (and also acquisition in children). It seems that understanding abstractions of real things should come first (it certainly does in children, and quite early), and then the tools used/developed for this task are eventually employed to comprehend and communicate about other, more complex sorts of purely conventional/mental entities.

    So, for instance, from a very early age, kids have no problem identifying different species of animal from pictures. But identifying labels like "Islamic," "French" or "impressionist" will be difficult to impossible, even in cases where any competent adult will have absolutely no problem making the identification. Likewise, identifying firefighters, or doctors, will probably come pretty easily, because, while a cultural role, it can be represented with clear, concrete characteristics.
  • Matter is not what we experience . . .


    Well, according to materialism, everything is made of matter. Thus, to see rabbits, trees, ants, flowers, etc. is to see matter. "What else could you be observing?" the materialist might respond.

    To say "something is not true because I can imagine some radically skeptical scenario where it is false" is not a particularly compelling counter example. Likewise, to presuppose that matter is merely an explanatory framework seems to essentially beg the question vis-a-vis a realist interpretation of materialism. I imagine they would prefer to say that knowledge of matter is rather abstracted from material things, and that matter is useful in explanations precisely because it exists, and that our knowledge of it comes through the senses.

    At any rate, the claim that we do not experience things because we only experience sensations seems to me a bit like claiming that man cannot write because he can only move his fingers around, or that man cannot drive a car but can only push pedals and turn steering wheels. The idea that we "do not experience the world because we only experience experiences, sensations, concepts, etc.," seems to suggest something of the Cartesian theater. But the materialist rejects the Cartesian homunculus on the grounds that it presupposes dualism.
  • What does Quine mean by Inscrutability of Reference


    One problem with the pictures is that there is only one signification/meaning/interpretant/dicible. Perhaps they are addressing a different issue to Davidson and Quine?

    It's the basic framework from which the same issues could be considered. One key thing to note here is that the interpretant is not always an "interpreter," a whole person. It can be a thought. So sometimes we equivocate, and sometimes we do so intentionally. A joke might hang on signifying two different objects by the same sign vehicle, perhaps using a homonym. The most common way to approach this would be to suppose two different signification relations, with two different objects and two different interpretants, utilizing the same sign vehicle.

    So, from the perspective of convention, "cats are fish" has a quite determinate meaning (barring some unusual context), and "cat" is a sign vehicle signifying cats and "fish" a sign vehicle signifying fish. But supposing the Joker has told his vile henchmen that "cats are fish" is his codeword for taking hostages at Bruce Wayne's party (a poorly chosen venue), obviously there is a parallel act of signification achieved. Nor are the henchmen incapable of simultaneously understanding both meanings, hence more than one interpretant.

    Side note: now if "cats" or "fish" might refer to areas directly adjacent to cats or fish, such that "cat" and "fish" are always present when cats and fish are, then "cats are fish" is at least sometimes true (it is true whenever a cat and a fish are immediately adjacent, since the spaces adjacent to each are the same spaces). But by convention, cats are not fish, and they cannot become so based on their spatial proximity to one another.




    I can accept the first picture over the latter. "The signified", at least in my understanding of Saussure, was always ambiguous in the sense that sometimes it referred to the idea people had and sometimes it referred to the physical object.

    Yes, I think the tripartite structure helps to clear this up. You can, of course, signify an idea, or even a complex collection of them (e.g. "the theory of special relativity") as the "object." You can likewise signify incorporeal "objects," such as an economic recession, or hypothetical ones. However, what is signified is different from the thought that interprets it, the interpretant.

    Thinking and "talking to oneself" involves signs, but clearly what is signified and the interpretant are not thereby collapsed. So that's a common difficulty: an interpretant need not be conscious, nor need it be a whole person (an interpreter).
  • St. Anselm's Proof: A Problem of Reference, Intentional Identity and Mutual Understanding (G. Klima)


    I should like to see the demonstration of this. That or at least a somewhat rigorous definition of what it means to exist

    There is the ens reale versus ens rationis distinction at the beginning. Roark takes it as analytic that something that really exists is "greater," "better," or "more perfect" than something that doesn't exist (the Latin word gets us "major" as well as "mayor," the position of top authority).

    The difference between a real thing and a mere object of thought can be grasped by thinking of having an ice cream sundae, or a sail boat, or a Porsche, or anything else you might consider pretty great, and contrasting its mere mental existence with what it would mean to really have it. For me, it's the Star Trek holodeck; all the perks of time travel with none of the paradoxes.

    Another way to think about it is in terms of truth. If one understands what it means for it to be true that anything is really the case, then one must have some understanding of "is really the case."

    I am not sure if it works to respond to Anselm's charge of intransigence by refusing to accept that one knows what is meant by "is" or "true." Presumably, the critic wants to prove that it is "really the case" that Anselm's argument is defective, which presupposes a distinction between whatever they think is the case and what is really the case. If there was no distinction, thinking Anselm is wrong would be enough, no need to argue.



    The speaker's reference, given by pointing to Kaplan, is Kaplan. The intended reference, given by the name "Kripke", is Kripke. Hence it is not always the case that the speaker's reference is the one that satisfies the speaker's intent. Which is to make the obvious point that what someone is talking about does not always align with what they think they are talking about.

    This is a generic problem with accounts of reference in terms of speaker's intent. Reference is a communal activity, and so not reliant simply on the intent of the speaker.

    I don't see how this is at odds with what Klima has said.

    At any rate, isn't the "intended reference" also the "speaker's reference?" The pointing and naming are equally intentional. Hence the distinction between "intended" and "semantic" reference, although one could also frame it in terms of "intentions versus conventions."

    In some cases, someone uses the wrong name and their intended reference is still communicated clear as day. That's how these examples usually work, by setting up scenarios where both the intended reference and what is referenced according to convention (and the difference between the two) are readily apparent to any competent speaker of the language. In which case, if both intentions and conventional meaning are clearly communicated, why try to claim only one is signified? Why not both? Language is redundant, and people do things like point because it's a clear sign of intentions that will overcome errors in convention. It's a false dichotomy to suppose that words either signify a speaker's intent or signify according to convention, but never both; "simply" is the key word in your last sentence. But no one outside of a joke character in a children's book has ever proposed that words "simply" mean what is intended by them.





    In fact the question I posed to you about how one is to untangle God's existence from an acknowledgment of God's existence gets straight into the follow-up exchange between Klima and Roark, which makes sense since it was Roark who gave you the idea to phrase it that way.


    I had the thought before finding the paper; however I don't think it's a terribly original insight on my part, because IIRC this is how some people have read St. Anselm himself from early on, and I might just have been recalling that. I get the impression that Roark is not terribly familiar with Anselm and that he is working his way back to the same insight in a rigorous way, but it also sort of "pops out" in a natural language analysis.

    To the quote from Roark, I do wonder if "parasitic reference" is the right solution here. It seems possible to also frame it as a sort of mental bracketing. So, one can consider the idea of God and affirm that it implies its own affirmation, but then, outside the bracketing, deny that any concept should be able to imply its own affirmation.
  • What does Quine mean by Inscrutability of Reference


    I'd agree that Saussure's semiotics have not had a particularly helpful influence (in part because they led to Derrida :rofl: ). I was thinking more of the tripartite semiotics that tends to get employed vis-a-vis the natural sciences, e.g. John Deely, C.S. Peirce, and back on to the Latins. Having the interpretant in the process seems essential to me. Things don't signify to "nothing in particular." A dark cloud signifies rain to a goat or a bear as well as a man, but it doesn't signify anything to a rock.

    Edit: I should note that in the broader application, signification is happening everywhere, not just in language. For instance, in an analysis of the sensory system we might speak of light interacting with photoreceptors in the eye as the object, the pattern of action potentials traveling down the optic nerve as the sign vehicle, and then some particular resultant activity in the occipital lobe as the interpretant, or we might apply it to DNA and ribosomes, etc.

    q2t4gs6i5ci8ymke.jpg

    Versus

    saussurian-model.png
  • What does Quine mean by Inscrutability of Reference


    Moliere The emphasis on "sign" is problematic, in that it supposes that the main purpose, or fundamental element, in language is the noun

    Does it? It seems neutral to me. Consider a stop sign, traffic lights, etc. Many of the most obvious conventional signs are about processes or behavior. Musical notation is about things (notes) but is also an instruction on a process, etc.



    Language is more about constructing, rather than exchanging, information. This choice of words may mark a pretty fundamental difference between those who agree with Quine and those who do not.

    Perhaps. Depending on how one frames information, I am not sure these are mutually exclusive. What is helpful about the information theoretic perspective, aside from its tremendous success in communications technology, cognitive science, and linguistics, is that it highlights the very many sources of information in verbal or written communications that are not limited to the words themselves, and that information content depends on assumptions that are prior to the receipt of a message.
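    To make that last point concrete, here is a minimal sketch using standard Shannon surprisal (nothing specific to Quine or to any particular theory of meaning is assumed): the information a message carries is a function of the probability the receiver assigned to it beforehand, so the identical signal carries different information under different prior assumptions.

    ```python
    import math

    def surprisal_bits(prior_probability):
        """Shannon surprisal: bits of information conveyed by an event
        to which the receiver assigned the given prior probability."""
        return -math.log2(prior_probability)

    # The same utterance is more informative the less it was expected.
    expected_reply = surprisal_bits(0.5)      # an even-odds event: 1 bit
    surprising_reply = surprisal_bits(1 / 8)  # a surprising event: 3 bits
    ```

    The message itself is unchanged in the two cases; only the receiver's prior assumptions differ, and with them the information content.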



    It even references a great medieval thinker, so I was thinking it might be more appreciated by our interlocutors.

    Yes, but unfortunately not in a particularly helpful way. St. Augustine has a very nuanced view of language and his own formulation of meaning as use, but he mostly shows up in PI to present a very naive picture of language.
  • What does Quine mean by Inscrutability of Reference


    Would it surprise you that I disagree with Aristotle on this? :D

    It might be more that I have not communicated the claim properly. Pace Plato, Aristotle allows that weakness of will can occur, so he wouldn't necessarily be at odds with Sartre here. The point is more about predication. So, for instance, if you go outside and see a car, and it's blue, you cannot also judge that it is not-blue, in the same way, without qualification (so a car that is blue and another color isn't a counter example here).

    So, once on this forum someone brought up the old duck/rabbit optical illusion as a counter example. But that wouldn't be one. That would be an example where we qualify our judgement.




    The answer given previously was the Humpty Dumpty account

    By who? Humpty Dumpty in Through the Looking Glass is a joke, like Molière's Imaginary Invalid. "Language is used for communicating intentions" does not entail "words mean whatever a speaker wants them to mean."

    That someone can point to a picture and say "that's a picture of the greatest general in history," and not realize that their Napoleon portrait has been replaced with a picture of David Bowie is not any more mysterious than the fact that, while smoke is a sign of fire, there can be smoke without fire, or that people can misspeak, or that listeners can mishear. The same is true for sarcasm. Sarcasm works because the information used in understanding language isn't limited to words, but includes tone, surrounding context, memory, etc.

    Language involves stipulated/conventional signs that signify things, giving us a relation akin to smoke → fire or dark clouds → rain. Signs aren't univocal. A person's words are signs of things, by convention, context, etc., but they are also signs of the speaker's intentions. Likewise, when someone conveys knowledge, their words are a sign of truth in their intellect. When they speak truthfully (as in, not lying) their words are a sign of their beliefs. Lying involves words that are not signs of a person's beliefs, and yet they still can be signs of their intentions (e.g. when someone lies about being wealthy, being able to bench press 300lbs, etc., they reveal that they think these are desirable).

    There is a lot of information exchanged in speech, and a lot of parallel signification. One cannot reduce this to the words or sentences themselves. I would argue that it is better to start with simpler questions, e.g., "how does smoke signify fire?" or "how does an angry badger succeed at signifying its internal state to other mammals," before jumping into human language. Nor can one reduce all such communications straightforwardly to use, since use follows from intentions, and a key use of language is to communicate intentions (one might consider here all the, IMO quite good, arguments that chimps and other primates do not learn to use human language as language, whereas a simple view of use would end up concluding that not only chimps, but dogs can use human language).

    Not all statements are first-person declaratives. We speak in the passive voice, we try to deny assertoric force, etc. either supposing an abstract speaker or abstracting away the speaker. And this might be useful in some cases, but it is perhaps where confusion arises as philosophers try to explain how conventional signs signify outside the contexts in which they are actually used.




    Basically meaning isn't tied to words, but the interplay of terms within the whole structure of the sentence.

    This is indeed an important point. However, it is not unique to Quine, nor does it entail Quine's particular approach to reference. See the rest of the post above. From an information theoretic or semiotic perspective, there is a ton of information relevant to communication that is related to context (linguistic and otherwise), tone, body language, the identity of the speaker, the identity of the intended recipient, past conversation/stipulation, etc., in addition to convention. There is also a lot of signification going on in conversations.

    However, signs clearly do signify according to convention, else language (and any communications convention) would not be useful for communications. Such signification, when analyzed from the perspective of convention in the abstract, can be more or less ambiguous or determinate. For instance, it is possible to specify signification such that any competent speaker of a language will know exactly what object you're referring to in some cases.

    That signification is not uniquely specified by a simple correspondence analysis does not entail that it is wholly undetermined. The communication of intentions clearly does occur (to deny this is to deny meaningful communications). An analysis of conventional signs need not exclude any reference to context, tone, etc. either. It is, at least sometimes, by convention that sentences will have different meanings in different contexts. Tone is involved in signaling sarcasm, questions, commands, etc. in ways specified by convention, but has nothing to do with the exact words used.

    The mistake is to do something like "look for the meanings of words in isolation." But then it also seems mistaken to assert that "fish" does not signify fish by convention. Like so:

    The-semiotic-triangle-according-to-Ogden-and-Richards-37-The-word-Symbol-the-thing.png
  • The logic of a universal origin and meaning


    We're close. The point I'm making is the philosophical ontological argument is now complete. The only logical conclusion is that the entirety of existence has no prior reason for its existence, and therefore could have been anything. No philosophical proposal is necessary ontologically, therefore there is no more debate or consideration.

    The scientific ontological argument is still on. Is it the big bang? A God that made a big bang? Etc. The different is it requires evidence, reason, testing, and confirmation. That is outside the realm of philosophy. Try it. Try to show that any particular origin is philosophically necessary if the OP is true and see if it works.

    Sure, you are correct. Provided that we accept that "it just is, for no reason at all," is as good an explanation of things as any other, this would indeed render any other explanation "unnecessary," and imply that there "[should] be no more debate or consideration." After all, such an explanation can be proffered for literally anything we might inquire about. "Why is grass green?" It just is. "Why does the caged bird sing?" It just does. "Why did Hitler declare war on the USA?" He just did. "How do words refer?" They just do. "Why do some people prefer their cucumbers pickled?" That is just the way of the world.

    Whether it would be wise to affirm such misology is another question.
  • The logic of a universal origin and meaning


    Where am I wrong?

    Arguments from brute facts in cosmology are almost always extremely ad hoc. Yours is no exception. "It just is, for no reason at all," could be applied to any phenomena at all. So, for instance, given the principle of indifference, the extremely low entropy of the early universe is almost unfathomably unlikely. It's the sort of thing that cries out for explanation.

    The brute fact explanation is only thrown out when a better explanation isn't available. It's ad hoc. If it is acceptable at all, it should be equally acceptable for "why does water dissolve salt," and yet no one would accept a brute fact explanation here because better explanations exist.

    No doubt, if Penrose's hypothesis for why the entropy of the early universe was so low, or any of the others, was borne out by more evidence and became the consensus opinion of cosmologists, it would not make sense to rebut the new theory by stating: "no, it just is. No explanation is possible."

    Likewise, presumably you are going off the assumption that the universe does have a beginning because that is the majority opinion in cosmology. But this wasn't always so; most cosmologists used to think the universe was without beginning or end. The Big Bang, and then a period of Cosmic Inflation prior to the Big Bang, were both posited to help explain phenomena that seemed vanishingly unlikely to have occurred by chance. However, from the perspective of the brute fact view, there was absolutely no reason to go looking for these explanations, and no reason to prefer them. If "it just is" is actually as good an explanation as any other, all sorts of now dominant theories in cosmology are unnecessary. We didn't need to explain the curvature of space: "it just is."

    And yet no one actually does appeal to "it just is" when solid explanations exist. Hence, it is just an ad hoc appeal for those who cannot countenance mystery.

    Consider: if new stars appeared across the sky tonight that clearly spelled out "Allah is the greatest," would that be evidence of a creator? Well, on the brute fact view the emergence of the new stars, and the timing of their light reaching Earth, is all just the result of brute fact laws and initial conditions. If the advocates of such a view are consistent, they will declare: "We cannot assume that this happening is any more or less probable than anything else, since the laws and initial conditions just are, for no reason at all."

    But this seems implausible. If stars spelling out Koran verses appeared across the night sky tonight, we would have epistemic warrant for thinking something special was going on.

    Of course. That's what it is to exist. The Universe is everything. It doesn't mean that there can't be other dimensions, or that it exists in a way that is currently foreign to us. But you can't exist and be outside of existence. Perhaps there are other 'universes' or things that exist separately from the total causality of our pocket of reality. But if the two ever met, then they would intertwine in causality. A God, if it ever interacts with this universe, is part of this universe...

    People say a lot of things. If they have logic and reason on their side, great. But a lot of things that are said and believed do not have logic and reason on their side. A history or large number of people who hold such beliefs do not lend any more weight to their truth.


    Do you not see how "well other people might not have logic and reason on their side, because people sometimes have irrational beliefs," is not a good response to: "We reject the premise of the univocity of being."

    Like I said, a good argument anticipates objections. "I don't need to anticipate objections because they could be irrational" is not a compelling resolution. This is particularly true of brute fact appeals because, if they are ever compelling, they are only compelling because they have shown that no other explanation is possible. Yet all you have shown is that, provided we accept the brute fact explanation, we will have accepted a brute fact explanation.

    Second, it's worth pointing out that:

    A. Cosmologists are in no way unanimous that the universe even has a beginning. Cyclical theories are still posited.

    B. The philosophers of the Abrahamic tradition and many Pagan philosophers agree that nothing is temporally prior to creation. Space and time do not exist prior to creation. God is not in space or time. God is not a being. As St. Augustine puts it, God is "within everything but contained in nothing." God is cause and first principle of the universe as ground, not as temporally prior efficient cause. So arguing that "it doesn't make sense to speak of a time prior to time or changes prior to change," isn't really addressing the most popular theologies at all.
  • St. Anselm's Proof: A Problem of Reference, Intentional Identity and Mutual Understanding (G. Klima)


    The argument professes to prove that; but it doesn't succeed, for the reasons given.

    Sure, I am pointing out that the conclusion can be given a different interpretation, and according to published responses to the article it may need to be given this interpretation to avoid problems.

    That said, the argument looks ok at first glance. I agree with this response:

    The formally regimented argument is pretty clearly valid, and none of the premises of which it is alleged to be a formalization is obviously objectionable. The first premise simply articulates Anselm’s conception of God as the thought object than which no thought object can be thought greater.5 The second premise is just the reductio assumption that God is a mere thought object. As for the third premise—the claim that thought objects that can be thought to exist in reality can be thought to be greater than mere thought objects—one might allege (as Klima himself does) that it is analytic by virtue of the meaning of ‘greater than’. Finally, one might convince himself of the truth of the fourth premise—the claim that God can be thought to exist in reality—by introspection. So Klima’s claim that these premises ‘have to be accepted as true’ is initially plausible.


    Tony Roark - Conceptual Closure in Anselm’s Proof

    I think this gets to "it is unclear that the reductio premise is the one that needs to go."

    The most common way to attack this sort of argument has been to deny that God actually can be conceived of (or for modal formulations, that God is possible). Plantinga's version has been vetted for 50 years now, has been overwhelmingly seen as at least valid, and this is widely agreed to be the weak part.

    But I think it is at least prima facie plausible that God can be conceived of in this manner because, as I said, disparate philosophies across the world have conceived of this same thought object and because atheist and agnostic scholars of relevant areas seem to have no trouble conceiving of such a being.

    Perhaps other problems remain. There are arguments that omnipotence is contradictory because it implies the ability to become not omnipotent. Plantinga has done a lot of work on this, e.g. showing that "God cannot create a stone that He cannot lift" is equivalent to "God can lift all stones." However, I am not convinced that "absolute power" or "absolute freedom" doesn't result in what are essentially paradoxes of self-reference, at least on some naive conceptions.

    Potentially more problematic are claims that perfect justice is in contradiction with perfect mercy, etc. Yet these don't seem relevant to the argument. For, the "being greater than any other that can be thought" need not have all the properties allocated to the God of revelation. Perfect mercy and perfect justice seem like they could be bracketed out so long as omnipotence, omnibenevolence, etc. are included.

    Of course, responses to Plantinga focus on the question "is such a being possible?" due to the modal formulation of his argument. St. Anselm's formulations deal with thinkability. I am not sure what important differences this could yield in attacks on the opening premise (i.e., the thinkability or possibility of God).

    Roark has his own critique. I would have to look at it more, but on first glance his main counter seems too strong. He argues that the atheist should be happy to allow that they are only engaged in parasitic reference because the theist's definition requires a framing that at least allows for the possibility of liar's type paradoxes. However, showing the mere possibility of paradox is far weaker than demonstrating a paradox.

    At any rate this counter would seem to lead to the conclusion that all sorts of people, not just atheists, should refuse to engage in anything but parasitic reference in a huge variety of cases. In particular, they should refuse to engage in anything but parasitic reference when it comes to liar's paradoxes. Yet, if someone took this line as a counter to logical pluralism or arguments against LEM, as opposed to God, I doubt it would pass the smell test. And it seems possible to push it farther, into a prohibition on accepting arguments in natural language, which would be absurd.



    If the argument is to hold, then it must not be possible for it to be in error. Accordingly it is not incumbent on the fool to show that one of the premisses must be false; but only that it might be false. So indeed, there is a clear way in which one can suppose “something which cannot be thought not to exist", and understand that such a thing entails a contradiction. “something which cannot be thought not to exist" may well occupy much the same space as "a number greater than any other" or "A triangle with four sides" or even "The present king of France"; there may be no such thing.

    Can one ever totally eliminate the possibility of error? Is "error is possible," without pointing out any clear error a good counter to other demonstrations?

    This counter seems too strong, because it would seem to apply, in some sense, to skepticism vis-a-vis all demonstrations.



    Well this is related to what ↪Count Timothy von Icarus said about the notion of unlimited (although it is more precisely about power than general unlimitedness). Do we think that a being which is omnipotent is greater than a being that is not? Because maybe someone would say, "If it is an evil being then the omnipotence would make it lesser, not greater." And of course no one thinks it is greater to be evil than to be good, so presumably it would not be an evil being, but the idea brings out your difference between moral (?) goodness and a form of greatness which prescinds from the moral.

    Right, and this perhaps touches on the theological concerns that came to the fore during the Reformation, that only doing what is best would somehow be a limit on divine sovereignty and power. I personally think this sort of concern doesn't hold water. Defining freedom in terms of potency leads to contradiction (e.g. the demonstrations at the opening of Hegel's Philosophy of Right) and so the notions that lead to a renewed salience for Euthyphro dilemmas in the early modern period seem to simply be flawed. This is relevant inasmuch as people claim that God is "unthinkable" due to these supposed "paradoxes."

    Right, I am following what you are saying here. But the difficulty is that affirmation of existence separates from existence, or something like that. Right? If the argument proves that we should affirm the existence of God without proving that God exists, then how does that work? Or do we want to take a half-step back and say that it proves that the atheist cannot deny God without proving that we should affirm the existence of God? (But that seems to fall away from Anselm.) So how would we address these difficulties?

    I'll have to think about it more. It is indeed strange. I do think intuitionist mathematics maybe offers some guidance here. To use an analogy that might be a bit weak, it seems obvious that something like a constructive proof of God cannot exist.

    We could also consider abductive arguments. There, we might have strong reasons to affirm the existence of something. It would be unreasonable to deny it. And yet this is also not a demonstration that it exists.

    At any rate, this outcome seems theologically preferable in some ways. For, if the demonstration was of existence, then it would be a demonstration that God exists in the manner of all other ens reale—of trees, stars, man, etc. But this is often explicitly denied. Dionysius the Areopagite writes:

    "It is wrong to say that God exists. It is wrong to say that God does not exist. But it is more wrong to say that God does not exist...

    [God] is not a facet of being, but being a facet of Him. He is not contained in being but being is contained in Him. He does not possess being, but being possesses Him. He is the eternity of being, the measure and source of being. He preceded essence, being, and eternity. He is the creative source, middle, and end of all things."

    Indeed, we might say that a demonstration that shows that God exists in the same manner as both our conceptions of God's existence and the real existence of all other things would be guilty of equivocation. Ens reale and ens rationis can sit together on a Porphyrian tree, but God cannot.
  • St. Anselm's Proof: A Problem of Reference, Intentional Identity and Mutual Understanding (G. Klima)


    Arguably, the argument simply proves that the atheist cannot deny God (i.e. the being greater than which no being can be thought) without affirming a contradiction. So, it shows that we should affirm the existence of God, on pain of being fools or misologes.

    However, this itself does not prove "that God exists." We could consider here Brouwer's and others' objections to the use of proof by contradiction in existence proofs in mathematics. So, there is a possible distinction here. And perhaps, having taken the conclusion in this way, we could dismiss some of the criticisms re "proofs cannot demonstrate existence" (what about existence theorems?) or "existence simpliciter must somehow be assumed somewhere in the premises" (I think it's fairly obvious that it isn't in Anselm's formulations). I suppose the difficulty is that this only eliminates those fairly weak objections, though.
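    The Brouwerian contrast can be put schematically (a sketch of the standard intuitionistic point, not anything in Anselm or Klima): classical logic lets an existence claim follow from the absurdity of its denial, while intuitionistic logic demands an exhibited witness.

    ```latex
    % Classically valid; intuitionistically invalid without a constructed witness:
    \neg\neg\,\exists x\, P(x) \;\vdash_{\text{classical}}\; \exists x\, P(x)
    \qquad
    \neg\neg\,\exists x\, P(x) \;\nvdash_{\text{intuitionistic}}\; \exists x\, P(x)
    ```

    On this reading, the reductio would show that denying God's existence leads to contradiction without thereby constructing the existence itself.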
  • St. Anselm's Proof: A Problem of Reference, Intentional Identity and Mutual Understanding (G. Klima)


    Edit: This is also a reply to ↪Count Timothy von Icarus. The non-theist need not maintain that the various notions of "unlimited being" are unintelligible, but can agree that it may be intelligible to some degree while maintaining that it has not been demonstrated that this "unlimited being" is the same as, say, the Christian God, or indeed any god.

    Sure, but the rejection of particularly Christian revelation doesn't affect the ontological argument at all.
  • St. Anselm's Proof: A Problem of Reference, Intentional Identity and Mutual Understanding (G. Klima)


    On the argument, there seem to be a few issues. The first is "greater than." A critic can equivocate on this and argue that it properly applies only to number (or, more properly, magnitude) and that there is no "perfect" or maximal magnitude. This is in part why Norman Malcolm switches to an "unlimited entity" (which he shows must exist in all worlds if it exists in any possible one).

    This brings us to the second problem, people are frequently confused by what is meant by "unlimited," "infinite," "transcendent," etc. This is why Hegel, in the Greater Logic, feels he must distinguish between the "good infinity" (viz. true transcendence, being without limit) and the "bad infinity" (viz. an unending series as supposed through mathematical induction, i.e., "just keep adding +1 ad infinitum").

    You see this sort of confusion all the time. In the Moral Landscape, Sam Harris opines that a transcendent or Platonic good must be irrelevant to conscious creatures, since it is always beyond the world of sensation. But this is simply confusion, a vision of transcendence where the transcendent is somehow absent from what it transcends. In the classical tradition all finite good, even what merely appears good, participates in the Good. As St. Augustine says, God is "within everything, yet contained in nothing." This is because such a good is actually transcendent, not limited by the borders of the world.

    What can we say to this confusion? Well, for one, it is worth noting that just because some people fail to grasp a concept does not mean it is problematic or cannot be used in valid proofs. I've seen people stubbornly resist the conclusion of the Monty Hall Problem, or claim that infinite multitudes or magnitudes are "unthinkable," and thus "unintelligible." Does the failure of some to "get it" or the recalcitrance of some finitists somehow impugn something like Cantor's diagonal argument?
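    Since the Monty Hall result is invoked as an example of a recalcitrant intuition, here is a minimal simulation (an illustrative sketch of my own; the function names, seed, and trial count are arbitrary) showing that switching wins roughly twice as often as staying:

    ```python
    import random

    def monty_hall_trial(switch: bool) -> bool:
        """Play one round; return True if the player wins the car."""
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        # The host opens a door that is neither the player's pick nor the car.
        opened = random.choice([d for d in doors if d != pick and d != car])
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in doors if d != pick and d != opened)
        return pick == car

    random.seed(0)
    n = 100_000
    stay = sum(monty_hall_trial(False) for _ in range(n)) / n
    switch = sum(monty_hall_trial(True) for _ in range(n)) / n
    print(f"stay ~ {stay:.3f}, switch ~ {switch:.3f}")  # roughly 1/3 vs 2/3
    ```

    Stubborn disagreement with the ~2/3 figure does not make the figure wrong, which is the point being made about proofs more generally.
    
    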

    Nor is the idea of unlimited being somehow a Christian dogma absent from the rest of philosophy. It is in Pagan thought prior to Christianity, in Jewish thought, in Islamic thought, in Hindu thought, etc. One finds appeals to it in contemporary thought that makes no appeal to any particular religious tradition, or in New Age syncretism. Many disparate philosophies have come, seemingly independently, to the concept of unlimited being (a being that is no thing) and the distinction between nihil per infinitatem (nothing on account of infiniteness) and nihil per privationem (nothing on account of privation). Therefore, the atheist rejection of the first premise on the grounds that the concept is "incoherent" or "unthinkable" seems weak. Folks as diverse as Plotinus, Hegel, Rumi, and Shankara have had no more of a problem with it than (orthodox) Christians. Atheist scholars of mysticism, esoterica, or religion also seem to have no issue thinking of it.

    But we might suppose that such a concept is hard to fully take in. Whole books are dedicated to explaining what the truly limitless entails. In which case, we might suppose that the argument could suffer from a premise that is not as well known as its conclusion. For the person of abiding faith, or who had received revelation, God's existence is obvious. Even for those who struggle in faith, other arguments (e.g. from teleology) might be more convincing. So, the conclusion is well known to some. Yet the premise involves our conception of an aspect of God's essence, which is generally thought to be unknowable (e.g. St. John Chrysostom, St. Gregory Palamas, etc.), even if "being without limit" is something approachable through the via negativa. To be sure, some will argue that since we can know "that God is," (existence) we must grasp essence in a muddled and confused way (perhaps through God's energies, and either natural or divine illumination), but this still leaves the premise not particularly well known. By contrast, the faithful certainly hope to know the conclusion in a way that is more solid than the premise.

    Finally, atheist opponents who are in favor of "brute fact" explanations of the existence and quiddity of, ultimately, everything, really don't have a leg to stand on in calling out opponents on account of "unintelligibility." This is a far more obvious (and quite common) instance of what is virtually always a very ad hoc appeal to the entirely conceptually vacuous to decide an issue. "Why? It just is."

    This is relevant in that infinite, unlimited being is often called upon to ground metaphysics. The claim that this is "unintelligible" while putting forth "it just is, for no reason at all" as the root explanation for everything is more than a little ironic, particularly when the ad hoc appeal to brute fact is paired with eliminativism or deflationism re causes, such that everything "just is" and explanation seems to be little more than a hallucination resulting from inexplicable constant conjunction in the first place (isn't this just epistemic nihilism with extra steps?)
  • I Refute it Thus!


    I do not agree. Berkeley takes "matter" in very much the way of Aristotle. That's how he manages to conceive of substance without matter.


    I believe that Berkeley is actually demonstrating the incorrectness of this 'new' way of conceiving of "matter" by showing how these ideas that people have about "matter" do not hold up if we adhere to principles.

    Where does Berkeley lay out an alternative theory of matter? I mostly recall him being fairly adamant about wholly eliminating matter ("immaterialism"), even for non-representationalists (in the Dialogues).

    In any event, I was thinking of the "matter" of those he spends most of his time criticizing (e.g. Locke).





    On the upside, science itself has seemed remarkably resilient in the face of this drum beat. I worked at a place that specialized in glaucoma for a while and the doctors would make pitches about "fully understanding the causes of this disease." Likewise, physicists have not been deterred from subtitling their books things like: "the quest for the ultimate nature of reality," or "what is real?" etc.
  • St. Anselm's Proof: A Problem of Reference, Intentional Identity and Mutual Understanding (G. Klima)
    Consider an analogous argument defining the highest number as that number which is higher than any other number. The definition is fine, except that there is no such highest number.

    Gaunilo of Marmoutier took this approach by positing an "island greater than which none can be conceived," in order to try to show that Anselm's argument can be used to demonstrate the existence of all sorts of things. But Gaunilo's argument is generally seen to fail even by critics of St. Anselm. As with number, there seems to be no maximum for how great an island can be. Just in terms of size, it can always get bigger. But there do seem to be intrinsic limits for those properties Anselm associates with God. Perfect knowledge is knowing everything; one does not make one's knowledge more perfect by knowing more than all there is to know. Moral perfection is not a quantity, etc. There are intrinsic maximal perfections inherent in these concepts.

    This is not to say people haven't brought up challenges to these properties (e.g. that it is contradictory for a being to be both omniscient and omnipotent, etc.); they have. But "more omnipotent than omnipotent" doesn't make sense.



    The generic flaw in ontological arguments is that if they are valid then they assume the conclusion somewhere in the argument. The task for the logician is to find out where.

    Well, if the issue is that the conclusion must be contained in the premises, that's a problem for all deductive arguments. Hintikka's ol' scandal of deduction. What is being assumed here is the existence of a being of thought. No need to look too hard. The argument is meant to demonstrate that such a being must exist simpliciter if it exists as ens rationis.

    But I think the real problem for ontological arguments is that they are unconvincing. I don't think anyone has been converted by an ontological argument, or that many people of faith feel their faith significantly bolstered by such arguments. And indeed, there are also atheist logicians who have allowed that modifications of Anselm, Gödel's proof, etc. seem to work and have premises that seem innocuous enough, but are nonetheless not even remotely convinced.
  • What does Quine mean by Inscrutability of Reference


    Then even "Truman's hair is Truman-blond", if true, the negation would have to be false. So even if we aren't speaking in universal terms we can use true/false.

    Yes, I agree that you could render a proposition like that. However, Aristotle's point was about judgement. So if we judge Truman's hair to be "Truman-blond," and "Truman-blond" is just whatever Truman's hair is, then we cannot be wrong in our judgement. Supposing we don't call it "hair" but "Truman-hair," we also cannot be wrong that it is "Truman-hair" that is Truman-blond.



    So, Aristotle would also say that we cannot simultaneously judge that Truman's hair is both Truman-blond and not-Truman-blond, at the same time, in the same way, without qualification. Indeed, if Truman-blond is just whatever Truman-hair is, and nothing else, no evidence can ever suggest to us that Truman-hair is anything other than Truman-blond.

    As respects the negation, we can speak such things in the discourse of spoken words, but not in the discourse of the soul (i.e., it does not make sense to say that someone earnestly believes and doesn't believe the same exact thing at the same exact time).
  • What does Quine mean by Inscrutability of Reference


    I was actually thinking of that as I wrote that. Clear evidence that English is a barbarian tongue. :rofl:
  • What does Quine mean by Inscrutability of Reference


    :up:

    That makes sense. And it is very easy to equivocate in this way with some terms, "person" being a prime example.
  • What does Quine mean by Inscrutability of Reference


    I must be missing something, since it seems clear enough that the sound of "dog" could be arbitrarily assigned to some different referent in each instance.

    Yes, we could arbitrarily use the sound "dog." You could even use it to refer to something different in each instance. You could render "fixed by divine providence or some such nonsense" as "dog dog dog dog dog dog dog." Yet no one would understand each other if they were always making different sounds to refer to different things in each instance, so we "cannot" have a human language that works like that.
  • What does Quine mean by Inscrutability of Reference


    Of course, ships are not alive, but I don't think the question regarding whether a corpse is the same person as the living being, only now dead, is any different. It would depend on what we mean by "person'. The point I want to make is that there is no fact of the matter in these kinds of questions, but rather merely different ways of thinking and talking.

    I would disagree. The way we talk about such things is not arbitrary. When we appeal to "our ways of talking about things," we just push the explanation back one step. The question then becomes: "why do we talk about things in this way?" After all, we have an essentially infinite possibility space open to us in how we might use sounds or symbols to represent such things, yet we settle on some, and moreover if someone offers counterproposals on how we should do our speaking, some seem plausible and others ridiculous or arbitrary.

    In general, I think there is far too much of a tendency to jump from "one cannot give a rigid, mathematical definition of a distinction" to "thus there is no fact of the matter." One cannot give such a definition for life, either from the perspective of medicine or biology. Yet surely there is a fact of the matter as to whether anything is alive, or whether some individual is alive or dead. And surely organisms were alive or dead, and died, prior to the advent of human language (that is, the distinction is not dependent on human language). Hence, the difference between life and death exists, and thus we make the distinction, not "we make the distinction, thus things are living or dead."

    The vagueness problem is still acute for any philosophy that insists on substance being founded on contradictory opposition as opposed to contrariety (e.g. something is either man or not-man, not somewhere in between). However, I think the problem seems more acute than it really is if one insists on presupposing a bundle theory whereby to "be man" must involve checking the box on some set of (observable) properties that are distinct from "humanity" and essentially "add up to it."
  • What does Quine mean by Inscrutability of Reference


    I'm not so sure we must have universals for a claim to be false. If Truman's hair were black then "Truman's hair is blonde" would be false, for instance, even though we're only talking about that Truman right there and not any other Truman.

    "Blonde" and "black" are universals. If either were a unique term, predicable only of Truman's hair, then it certainly couldn't fail to apply.

    I'm not sure I'd separate language from perception, either. Seems to me that language has too much of an effect on perception to think that language even could be more general than perception

    But don't babies without language and people with aphasia who cannot produce or understand language (or both) still perceive?

    I'm skeptical of such a fusion, not least because the Sapir-Whorf hypothesis is supported by very weak evidence, normally very small effect sizes and failures to replicate, despite a great many people having a strong interest in providing support for it. For instance, different cultures do indeed divide up the visible color spectrum differently, but the differences are not extreme. Nor does growing up with a different division seem to make you any better at spotting camouflaged objects. Moreover, aside from disparate divisions remaining fairly similar, no culture has a name for any of the colors that insects experience through being able to see in the ultraviolet range, and for an obvious reason.

    Likewise, disparate cultures have names for colors, shapes, animal species, etc. They don't pick any of the vast range of options that would be available to a species that largely creates their own perceptual "concepts." I know of no cultures that mix shape and color for some parts of the spectrum, and then shape and smell for another part, etc. or any of the innumerable possible combinations for descriptions.

    J mentioned Gadamer earlier, and I like Gadamer, but the idea that all understanding is done through language seems suspect. It seems like the sort of judgement a philosopher focused on language would have. But does an MLB pitcher finally have it all click and understand how to throw a knuckleball through language? Does a mechanic understand how to fix a motorcycle engine primarily through language? Or what of demonstrations in mathematics based on visualization?

    My thoughts are that language is a late evolutionary arrival that taps into a whole array of powers. It enables us in a great many ways. But thought also isn't "language all the way down." Nor do I think we need to suppose that non-verbal individuals lack understanding (or else that we have to suppose that they have "private languages" for them to understand anything) or any noetic grasp of reality.

    To my mind, part of the problem here is the ol' reduction of reason to ratio (which is maybe enabled by computational theory of mind). But my take is that reason is broader than language and that the Logos is broader than human reason.
  • What does Quine mean by Inscrutability of Reference


    What this doesn't rely upon is a fact about what we are referring to, or whether or not "dog", or any other sign, has some pre-assigned meaning wrapped up in it.

    I'd say it does. If it didn't rely on this at all, then communicating with someone with whom you do not share a common language (a common set of pre-assigned stipulated meanings and a grammar for form) should be just as easy and successful as communicating with someone with whom you do share a language. Indeed, if it didn't rely on this fact, it's hard to see why languages should exist at all.

    Obviously, it doesn't rely exclusively on this fact. Most mammals can understand each other well enough (e.g. aggression) for certain functions. The reason "reptilian" or "insectoid" is a sort of slur is because these species don't tend to have relatable communicative behaviors in the same way. Snakes are deceivers because they lash out following unintuitive (to us) threat displays.

    So for this:

    The focus on convention is because we live in a society which prizes being able to say who does something better than another person, and with language that indicates the need for standards to judge others' in order to give a grade.

    No, I think we focus on convention because it aids communication. Part of good grammar is reducing ambiguity. The reason rhetoric and dialectic (and public speaking) were the cornerstones of education for so long is that fostering agreement and persuasion was the key tool of political life in pre-literate societies.

    Changes in conventions have accelerated because memorization is far less important in an era of both high literacy and, as important, cheap access to data storage (digital or analog, paper, etc.). Back when huge amounts of information had to be memorized, when prized libraries were smaller than your average professor's office collection, you couldn't have conventions moving wildly around at the same rate. I've seen the hypothesis that this is also why almost everything was put into metered, rhyming verse: it helps with memorization, but metered verse in turn requires stable conventions.
  • What does Quine mean by Inscrutability of Reference


    However, if reference wasn't fixed by convention at all there would be no need for languages in the first place. The sound of "dog" could be arbitrarily assigned to some referent in each instance.

    Borges has two interesting short stories on this:

    One is "Funes the Memorious," about a guy who has an accident and then is cursed by an absolutely perfect memory. Because of this, he gets annoyed with language. Why have so few words? Why not a specific word for the specific cloud I saw on the afternoon of 11/7/1932? Hell, why not unique words for different moments where he saw that cloud? In pursuit of this, he begins assigning unique numbers to proper names.

    I think this gets at a few things. One is Aristotle's idea that we must use universals to have the possibility of our claims being false. If we just predicate unique terms of unique things, terms that only apply to those things, we can never be wrong. Second, language and reference must always be more general and less determinate than perception to be useful.

    The second is "The Library of Babel," about an incomprehensibly large library of all possible 500-page books (every possible arrangement of characters). This gets at how it doesn't make sense to look at language as having any meaning at all in isolation from speakers, or at least some information source. Librarians in the story go seeking passages that will tell them their future (of course, mostly they find only gibberish or at most a few words). Yet it's a simple fact that every possible description of one's future that can be written exists somewhere in the library with a probability of 100%. But finding such a story would tell someone nothing about whether it was a true prediction or not. The same is true of the messages of a truly random text generator. All information is ultimately information about an information source. Signal alone gets us nowhere at all, and so it cannot be analyzed in isolation.
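    The guaranteed-but-uninformative character of the Library can be shown with a toy version (my own construction, shrunk to a two-letter alphabet and three-character "books"):

    ```python
    from itertools import product

    # A toy Library of Babel: every possible "book" of length 3
    # over a two-letter alphabet (2**3 = 8 books in total).
    alphabet = "ab"
    library = {"".join(chars) for chars in product(alphabet, repeat=3)}

    print(len(library))      # 8
    print("aba" in library)  # True: any "prophecy" is on the shelves
    print("bab" in library)  # True: but so is every rival prophecy
    ```

    Because every candidate text appears with probability 1, locating one conveys nothing about the world outside the library; information has to be information about a source.
    
    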
  • What does Quine mean by Inscrutability of Reference


    The consequence of the indeterminacy I think is not that we may sometimes disagree but that there is nothing intrinsic to words

    Yes, I think this is correct. It's similar to how the eliminativist claims that when we claim that we are "conscious," "selves," or that we "taste, smell, hear, etc.," we are simply confusing ourselves with "folk terms" that have no more place in proper scientific/philosophical discussion than do references to "the astrological effects of Mercury being in transit" in political economy, or demons in medicine. Like I said earlier, I think there is a tendency to undersell the radical nature of the theses that get generated from underdetermination in general. For instance, I think Russell had something right when he said Hume's similar argument against induction collapsed any distinction between sanity and insanity.

    Unfortunately, I think equivocation sometimes plays a role here. For instance, following other arguments from underdetermination, what is meant by "truth" and "knowledge," etc. is radically redefined. In these cases, I think it would be more fair to the average reader to say: "my argument shows that knowledge (and knowing truth) are impossible, therefore we must settle for this other thing." A denial of reference seems less radical, but it is still fairly radical. It would disallow any notion of the sciences as involving per se predication, which I think would force one quite far from common perceptions of the scientific endeavour and scientific knowledge.



    As I noted before, if someone says "the rabbit in this room," in a room with one rabbit, and they mean to refer to anything but that rabbit, they have misspoken (barring of course, some sort of complex work around, like "the rabbit in this room" being the WiFi password, etc.).

    People do misspeak. I am not sure why this has generally been counted as any more mysterious than the fact that someone can fail to pay attention to another's words and mishear them.

    On a standard semiotic analysis, we would say the words "the rabbit in this room," are the sign vehicle linking the rabbit (object) and the person spoken to (interpreter). But of course, you could also analyze this as the object being the intentions of the speaker. If the full sentence is: "I want to eat the rabbit in this room," the motivation for such a shift becomes more clear.

    However, in that example the rabbit is still referred to. But do we refer to such things directly, or only through referring to our own intentions? The latter is not fatally problematic if we both experience the same rabbit (particularly given some sort of sense realism). We can triangulate the external reference. Notably, this ambiguity does not come up for all signs, for instance, smoke as a sign for fire, or even an angry badger's aggressive behavior as a sign of their internal state.

    From an information theoretic perspective, the same sort of ambiguity remains. The issue of what constitutes the proper object/sign vehicle/interpretant is very similar to trying to determine what the proper information source/transmitter/receiver/destination is when applying the Shannon-Weaver model to natural phenomena. For instance, the rabbit could be seen as the information source, with the speaking man serving as the transmitter, although no doubt one could decompose this into very many instances of communications (or instances of semiosis). We could also have the speaker as the information source. Either way, whatever message is received, it will contain information about all the preceding parts and any source of noise (e.g. a garbled message can tell you something about your receiver or transmitter).
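    The point that a received message carries information about the source (and that pure noise carries none) can be made precise with the standard binary symmetric channel from information theory. A minimal sketch (the function names here are my own, not from any particular library):

    ```python
    import math

    def h2(p):
        """Binary entropy in bits."""
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def bsc_mutual_info(p_flip, p_one=0.5):
        """I(X;Y) for a binary symmetric channel: how many bits the
        received symbol Y tells you about the source symbol X, when
        each transmitted bit is flipped with probability p_flip."""
        p_y1 = p_one * (1 - p_flip) + (1 - p_one) * p_flip  # P(Y=1)
        return h2(p_y1) - h2(p_flip)  # I(X;Y) = H(Y) - H(Y|X)

    # A noiseless channel conveys 1 bit per symbol about the source;
    # a maximally noisy one (p_flip = 0.5) conveys nothing. Signal
    # cut off from its source statistics carries no news at all.
    print(bsc_mutual_info(0.0))   # 1.0
    print(bsc_mutual_info(0.5))   # 0.0
    print(bsc_mutual_info(0.1))
    ```

    This also illustrates the point about noise: an observed error rate in the received message is itself information, but information about the channel (the "transmitter" or "receiver"), not about the source.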

    What I take from this is that it doesn't need to be one or the other; verbal communication can contain information about, and refer to, both things and the speaker's intentions about things. "This rabbit right here, the only one in this room, the black and white one," spoken in the context of a room with one black and white rabbit, has a fully determinate reference as per the English language. Any competent speaker will know what is referred to.

    Nonetheless, there may still be ambiguity. If the receiver of the message has agnosia and cannot make out the rabbit from the background clutter of the room, we have a problem with the alignment of the two speakers' intentions. This isn't really that different from the case where the receiver is hard of hearing and doesn't know what was said. Likewise, sometimes people use the wrong words; they say "baseball" when meaning to say "rabbit," etc. Some disorders make this sort of slip very common. Nonetheless, people often understand quite determinate references even when this sort of mixup occurs, and we still get our intentions to line up. Hell, even poorly trained dogs can communicate well enough to direct our attention to what they view as a threat.

    But if the alignment of determinate intentions is possible, then I think there is a strong sense in which reference must be. To even make the inscrutability argument, one has to assume that determinate intentions exist, so that one is given, but then it obviously seems possible to communicate them as well.



    Can you please expand on this? I am not sure what the critique is supposed to be. It's an "incipient reification" to disagree with Quine re first philosophy? I am not even sure what in that post could constitute reification. Can one not disagree on this (or his criteria for what counts as evidence, which is what I find most problematic) without having misunderstood the argument?
  • Disagreeing with Davidson about Conceptual Schemes
    R. Scott Bakker has a neat paper on this question.

    Given a sufficiently convergent cognitive biology, we might suppose that aliens would likely find themselves perplexed by many of the same kinds of problems that inform our traditional and contemporary philosophical debates. In particular, we can presume that ‘humanoid’ aliens would be profoundly stumped by themselves, and that they would possess a philosophical tradition organized around ‘hard problems’ falling out of their inability to square their scientific self-understanding with their traditional and/or intuitive self-understanding. As speculative as any such consideration of ‘alien philosophy’ must be, it provides a striking, and perhaps important, way to recontextualize contemporary human debates regarding cognition and consciousness.

    I rarely agree with Bakker on philosophy, but he is normally thought-provoking.



    Do we experience the cat or the concept?

    A good question.

    If sensation were not any different from imagination, and if belief, memory, etc. were dominant, it would be hard to explain why people often listen to songs they know by heart, watch their favorite movies very many times, or how cooks, prostitutes, theme parks, etc. all stay in business (since presumably visiting them multiple times doesn't do much to affect our beliefs about them, or perhaps even our memories). Likewise, there are all sorts of neurological disorders whose effects seem largely confined to concept recollection or word recall. Yet such disorders are not the same thing as being deaf or blind. As far as can be ascertained, it seems possible for the visual field to be largely unaffected (e.g. people can draw what they see, and navigate the world) even as a person loses the ability to attach concepts (e.g. "what a thing is and is used for") to what they experience.

    Sometimes it is argued that such disorders show that all external objects must be "constructed," or must be "representations" of some sort. I don't find this conclusive at all. To the contrary, I think the most obvious reason to suppose that man has the capacity for picking out plants from rocks, a branch above from the sky, or a tiger from the jungle background, is that these things exist, and that it is very important for us to recognize them directly in sensation. So, while "what is experienced" might be, in some sense, the interaction of the sense organ and ambient environment (the latter of which mediates through its interactions with the objects sensed), this does not preclude a strong "sense realism," since this sort of mediation is hardly unique among physical interactions. Indeed, all physical interactions might be said to involve some sort of mediation, yet "everything is received in the manner of the receiver" does not presuppose "everything is received as representation."

    On a related note, I've come to have the opinion that a great many "escapes from representationalism" are just replacing one form of representationalism (normally a caricature of early modern versions) with some alternative form.
  • I Refute it Thus!


    Enter science proper, and stuff gets real interesting.

    What is "science proper?"

    To me, it's always seemed a bitter irony that just as there is an explosion in scientific progress (helping to drive the "Great Divergence" of the 19th century whereby Europe became much wealthier and militarily stronger than Asia), much philosophy of science seems to become incredibly dismal, consumed with skepticism. Even today, "philosophy of physics," "philosophy of biology," "philosophy of complexity," or "philosophy of economics," are filled with interesting ideas, whereas "philosophy of science" often takes the form of dull reductions of science to "observation + modeling," scientific understanding to "prediction," and knowledge of causes to "more prediction."



    In my understanding "matter" is a concept employed by Aristotle to underpin the observed temporal continuity of bodies, allowing for a body to have an identity.

    While it's true that for Aristotle "matter is what stays the same" when there is change, the "matter" and "substance" of Berkeley's era had changed dramatically from their ancient or medieval usages. The entire idea of "materialism" makes no sense from an Aristotelian framework. It would amount to claiming the whole world is just potency, with no actuality, and so nothing at all. By Berkeley's era, though, "matter" is more often conceived as a sort of subsistent substrate (often atomic) of which spatial, corporeal bodies are composed, such that their properties are a function of their matter (which would make no sense under the older conception of matter as potential). By way of contrast, Aristotelians would speak of the "material intellect" of the soul, the matter of abstract mathematical objects, the form of a logical argument (whence we get "formal logic") versus its matter, the phrase "subject matter," etc.

    "Idealism" ("eidos-ism") would also make no sense in the Aristotelian frame. Saying "everything is idea" would be to declare that everything is act, which would preclude change, essentially putting you back with Parmenides. This is why "idealism versus materialism" is a modern debate (although it has some loosely analogous precursors). Aristotle might be close to Berkeley in some sense, in that the world is intellect, but this is taken in a very different (and IMO far more developed) way. The redefinition of substance also looms large here, since materialism versus idealism can be framed as "everything is material versus mental substance," a distinction which required the radically different early modern notion of substance to make much sense.

    Locke's matter, for instance, is closer to ancient elements than Aristotle's matter.

Count Timothy von Icarus
