• Quantifier Variance, Ontological Pluralism, and Other Fun Stuff
    At least in other games, in order to avoid even implicit metagaming in group play, there can sometimes be requirements for draws too so that players don't accept draws easily due to both being sure to advance on a draw.Count Timothy von Icarus

    Well, sure, and chess is notorious for this. But there is no game the rules of which can compel players to try to win.

    Chess competitions also produce the opposite problem: it is an established fact that white begins the game with a slight advantage, but because of tournament or match standing a player with the black pieces might "have to" play for a win, and so take risks he or she generally wouldn't.

    Even the existence of the rating system forces higher-rated players to take risks against lower-rated players, because a draw will cost them points.

    All of that is external to chess itself, the play of which is perfectly settled, and has been for a long time.
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff
    I think this is the first time that paper has been quoted in this thread.Leontiskos

    FWIW, here first, which happens to be a post of mine you responded to, but I quoted it in the section responding to Banno, so understandable that you missed it.

    The SEP article deals at length with Hirsch and Sider, but I won't be reading it.
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff
    just as we can only understand chess by watching it being played rather than staring at the queen under a microscope

    I see what you did there.
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff
    namely by way of differing introduction and elimination rulesLeontiskos

    Their treatment of quantifiers is straightforwardly functionalist and unobjectionable: they note that if you can derive φ(x)Fx from Fa, then φ is the existential quantifier in the language you're dealing with. So they rely on at least one standard introduction rule, and I'd assume all the rest.
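
    To spell that rule out (my notation, not a quote from the paper):

        \frac{Fa}{\exists x\,Fx} \qquad (\exists\text{-introduction})

    Whatever symbol in a given language licenses that step from an instance to the quantified claim is, on their functionalist reading, that language's existential quantifier.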

    Is that bafflement gesturing toward incommensurability?fdrake

    Not by me. Incommensurability is not a useful or interesting idea.

    I'm of the opinion that there is something substantive here to talk about.fdrake

    I seriously doubt it. QV seems to be the love-child of incommensurability and a bizarre over-promotion of the principle of charity. I don't know why I'm even posting, it's so stupid.

    Here's another sort of variance with its feet on the ground (since you mentioned OLP a while back): in everyday speech "all" carries existential import, but not in Frege's logic; in everyday speech "some" implicates "not all" but not in Frege's logic. (I did not say "entails"; the implication is cancelable, but using "some" this way is patently uncooperative.)
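
    To make the contrast concrete (my formalization, nobody's quote), take "All ravens are black" and "Some ravens are black":

        \forall x\,(Rx \to Bx)   % Frege: vacuously true if there are no ravens
        \exists x\,(Rx \land Bx) % Frege: perfectly compatible with all ravens being black

    The everyday "all" adds the further commitment \exists x\,Rx; the everyday "some" adds the implicature "not all", which the formula simply lacks.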

    It is a fact that not everyone in every context means the same thing by "all" or by "some". But this is nowhere near the sort of variance our heroes are promoting, in my limited understanding.
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff


    Eh. It might be a crap example, and maybe there only are crap examples.

    What interested me was two things:

    (1) This shouldn't be the usual one side saying "There are more things in heaven and earth..." and the other saying "No there aren't."

    (2) I like the idea of this exchange:
    "You left out some values."
    "No I didn't."
    "But I can see that you did. That's why we got different results. You left out these two."
    "But you don't count those."
    "But they're in the data set."
    "But they don't count."
    I like the idea of each side being baffled by what the other could possibly be thinking.
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff
    Here's an example of quantifier variance from the wild, from the spreadsheet open in front of me: you have a set of data points; you can (a) average them (or whatever) taking all the values, or you can (b) average them (or whatever) after throwing out the highest and lowest. That's a difference in *how* you range over the given values.

    It's still in some sense a change of domain, but it's change you sort of delegate to the quantifier itself, treating it as a filter. In one case "all" means all, but in the other "all" means all but the usual exclusions.
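
    A toy version of the two readings, in Python (made-up numbers, purely illustrative):

        values = [4.9, 5.1, 5.0, 5.2, 12.0, 0.3]  # made-up data points

        def mean_all(xs):
            # (a) "all" means all: average every value in the set
            return sum(xs) / len(xs)

        def mean_filtered(xs):
            # (b) "all" means all but the usual exclusions:
            # drop the single highest and lowest value, then average
            trimmed = sorted(xs)[1:-1]
            return sum(trimmed) / len(trimmed)

        print(mean_all(values))       # ranges over every value
        print(mean_filtered(values))  # ranges over the filtered collection

    Same data, same instruction ("average all the values"), different results, depending entirely on how "all" ranges over the collection.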

    You could absolutely see analysts at loggerheads if one of them filtered, and assumed everyone did, and the other didn't, with a similar assumption.

    Nothing to do with *kinds* of objects here, but to do with *how* we range over a collection of values.
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff


    Chess is interesting because it involves decision making under uncertainty, and it is moderately surprising that its complexity is just great enough to provide scope for style and creativity. Computers have kinda ruined it for me though.
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff
    Funny enough, international bodies tried, and then gave up on developing a single canonical set of rules for chess, finding it too difficult. Differences in rules—variants aside—will tend to only affect high level play (e.g. how a draw is forced, etc.), but they are real differences that have not been settled.Count Timothy von Icarus

    Do you mean this?

    The Laws of Chess cannot cover all possible situations that may arise during a game, nor can they regulate all administrative questions.FIDE Handbook

    This is a catch-all for weird practical issues, a lot of which are covered, but shit happens.

    I assume the reference to draws concerns this:

    9.2 The game is drawn, upon a correct claim by a player having the move, when the same position for at least the third time (not necessarily by a repetition of moves): — Ibid

    It goes on at some length, but kids in particular pick up on this idea of repetition of moves, which the rule immediately addresses.

    What "real differences" did you have in mind?
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff
    The way we use the word "berries" is what changes. Not the way we use "is".Banno

    Maybe, but it's not really "A is B" that's at issue here, but "Something is B".

    Now how exactly do we manage that? Attributing a predicate to an identified individual looks straightforward, but in ordinary life we only reach for the existential quantifier in the absence of such an individual. (One of you drank the last beer. Someone left these footprints. There's something really heavy in this box.)

    Is predication still the same thing here? Is this even predication?

    I'm always inclined to translate these things in my head to a sort of "second order" predication -- that is, to a claim that some class (last-beer-drinkers, footprint-leavers, heavy-things-in-this-box) is non-empty. Not a claim about a thing -- as yet unidentified -- but a claim about a class. I think it's a habit I picked up in case the class does turn out to be empty -- I'm not left apparently talking about something that doesn't, ahem, exist. The class is usable either way, with or without members.

    If you're looking for something you can pry open a drawer with, there's deliberate, strategic vagueness in the class -- now we're almost "third order": we want something we can use to do something that will count as getting the damn drawer open, and what that will turn out to be depends a bit on what we find. "I'll know it when I see it" means I'll define the class I'm identifying when I find a member of it. That's a neat trick.

    I still don't see anything hereabouts to do with existence. Classes turned up, and they're supposed to be an ontological conundrum, but they're just a way of talking about my behavior, my predictions about what will work, what I decide and then actually try to do. They're handy for the mental work we do, as you suggest, whatever purpose we're pursuing at the moment.
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff
    because there are eight berries that existCount Timothy von Icarus

    Consider that berries grow, ripen, and then rot. Can you think of an edge case where it's not clear whether something counts as a berry?
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff
    @Banno

    the Principle of Charity and assume that what they said was correctBanno

    As it happens, this is what the thread should be about.

    the lack of a coherent explanation of what "quantifier variance" might beBanno

    It's a side effect of a particular version of charity:

    A charity-based metasemantics assigns L the interpretation that, when all is said-and-done, when every disposition to correct and revise is accounted for, makes the best sense of the linguistic behavior of L-speakers by making their considered utterances come out true in actual and possible circumstances, ceteris paribus.Hirsch & Warren

    Modest variance says that there are many distinct quantifier languages — quantifier languages where translating one language’s quantifier into the other’s results in massive failures of charity. This follows almost immediately from top down charity and our account of quantifiers.Ibid

    It's not far from here to ontological pluralism or what have you.

    @Leontiskos

    I almost posted about this the other day, but decided I didn't care enough. This charity metasemantics they've cooked up, I mean, it's the sort of crap mainstream (analytic) philosophy has been getting up to for a long time. It's depressing.

    I think it's a holdover from an earlier and more exciting time when philosophers thought there were maybe a few levels of logic and categorization between our minds and the rest of the world. If you were clever enough, you might work out a reasonable toy model of how we assemble patches of color into objects, or parse the intentions of someone speaking to us. Alas, it's not a few layers, but hundreds, thousands, millions. How living organisms manage to be sensitive and responsive to their environment and their own state is orders of magnitude more complex than the stuff philosophers come up with.

    All of which is why I agree halfway with this:

    the now common assumption that reason is nothing more than discursive reasonLeontiskos

    Someone who thinks that all truth is known discursively will believe that discursive-syllogistic explanation is always possible, and that where such explanation fails knowledge does not exist.Leontiskos

    The obvious problem with this, as Aristotle notes, is that logical demonstration is not self-supporting. Logical demonstration presupposes simple or primitive truths in order to get off the ground.Leontiskos

    ratiocination presupposes intellection (that discursive reason presupposes non-discursive acts of the intellect)Leontiskos

    But it's toward the end there that I disagree. Yes ratiocination rests on something that isn't that, but I wouldn't call what it rests on intellection, which seems to suggest something like the grasping of self-evident truth, or something.

    Instead, as you know, I'm with Hume, and I think modern science is bearing him out. Down below whatever reasoning we do is habit and custom and our natural inheritance. When I described the brain as computational before, I may not have placed enough emphasis on the fact that it's all probabilities. The brain is not a deterministic, clockwork machine, but a probabilistic one, and again Hume intuited this -- all our reasoning concerning matters of fact is merely probable. He was horrified enough to discover that reason rested upon something not describable as reason, but I think nowadays we have to go even further: Ramsey was headed this way, linking logic with probability, and suggesting that inference rules were essentially habitual.

    So yes, I'm inclined to agree that there is a sort of fatal flaw in much modern philosophy -- the pointless and unrealistic model building like we see here -- and that it can be diagnosed as a failure to understand what the foundation of reasoning really is, but I see that foundation quite differently.

    What's more, I'm inclined to think that this

    I believe this is largely a result of the democratization and pragmatization of reason, where questions of consensus and therefore adjudication become supreme.Leontiskos

    describes much of the nature and use of reason as we understand it. (See Mercier & Sperber, The Enigma of Reason for a related view, and the beginnings of research to support it.)
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff
    I am planning to take a hiatus from TPF.Leontiskos

    Good call. Think I'll scarper as well. Cheers, everyone.
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff
    It is a common view these daysLeontiskos

    Glad to hear you say that. I'm not innovating here, I think, just trying to connect the dots.

    I don't think logic is inherently mathematical, I don't think "mathematics is good at treating of [everything]," and I don't think mathematical logic is necessarily the epitome of logic. In fact at my university mathematical logic was very much acknowledged to be but one kind of logic, and I think this is correct.Leontiskos

    I get that. I'm using "mathematics" pretty broadly. What I have in mind is the mathematical impulse, the attempt to understand things by schematizing them, abstracting, simplifying, modeling. A musical scale is such an abstraction, for example, and "mathematical" in the sense I mean.

    You're right, of course, that as commonly used the phrase "mathematical logic" is just a branch of mathematics, but to me logic is very much a product of the mathematical impulse, as when Aristotle abstracts away the content of arguments and looks only at their form -- and then follows up by classifying those forms! And we end up with the square of opposition, which is a blatantly mathematical structure. You see what I mean, I'm sure.

    Aristotle was more interested in representing the way the human mind draws conclusions than adhering to an a priori mathematical paradigmLeontiskos

    As am I, in fact. I think the foundation of logic is the idea that one thought "follows from" another, and this in many more senses than are covered by material implication, for example. But I also think this is so because this is how our brains work, though we are not privy to the details. Hume noticed this, that the mind passes in some cases freely and in other cases with difficulty from one thought to another.

    But I still say the foundation here is mathematical because with the brain we're really talking about prediction, and thus probability. The brain is a prediction engine that is constantly recalibrating. It instantiates a machine for calculating probabilities. The "following from" here is neural activity, which is messy and complicated, but has effects that are in principle measurable, and whose functioning itself is parametrized (concentration of ions and neurotransmitters, number of incoming connections and their level of excitation, distance to be covered by transmission, and so on).

    this seems to prove the point insofar as Quine's notion of existence (and quantification) differs from the approach of neuroscienceLeontiskos

    But his just thinking that doesn't get you there, to my mind. He was mistaken -- only because he was too early, really, and I think he'd be fine with how cognitive science has naturalized epistemology -- but does that eo ipso ground an alternative but legitimate meaning? Does QV amount to a claim that no one can be mistaken?

    You seem to be dragging me into the actual topic, but alas my lunch break is over.
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff


    Thanks for your thoughts! Have to work, but I'll definitely get back to you after a bit.
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff


    Mathematics has this double role: it's the ideal we strive towards in our thinking, but it's also what enables our thinking in the first place. Our brains have already been doing the sort of clarification and simplification we want when we model something mathematically -- so of course it feels like we're discovering that structure, not inventing it; we're just doing more of the same.

    That's my working hypothesis anyway. Philosophy is almost entirely puzzling out the nature of idealization and its role in our thinking, and this approach makes some sense of that. To me at least.
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff
    each object is an instantiation of oneJanus

    Is this a property it acquires naturally, along with its chemical composition, its mass, etc?

    Or do we deem each object to be an instantiation of One?

    @Count Timothy von Icarus @Wayfarer @Leontiskos et al.
    Here's what I think, if you're interested.

    Kant -- damn his eyes -- was right: we only understand of the world what we put into it.

    We distinguish one bit from another, sort those bits and classify them, even paint them different colors to make it easier to keep track of them.

    Mathematics is, first of all, our analysis of what we're doing when we do all that. More than that, it's a simplification and idealization of the process, to make it faster and more efficient.

    It's all signal processing. The brain is not fundamentally interested in the world, but in the maintenance of the body it's responsible for, and the signals the brain deals with are about that body: they have an origin and a type and a strength, and so on. Some of this is instrumented, so there's a reflective capacity to see how all these signals come together, and that's the beginning of mathematics.

    Individual neurons themselves do this in microcosm, actively resisting firing until they absolutely have to, to sharpen and compress their signals from the analog toward the digital. And there's layer upon layer upon layer of this, simplifications of simplifications of simplifications. (The world itself is computationally very far away.)

    Signals always have noise, and it's an efficient simplification not to pass through to the rest of the system the whole mess with a peak around 7 MHz and just say "7". We do this in well-known ways with phonemes, for example, counting a considerable range of sounds someone might make as an "r" or an "a".

    Simplification and idealization make it all possible, and that's mathematics. The world is in essence a mathematical construction of our brains, so of course it's a bit puzzling whether math is "in here" or "out there".

    That's the gist, or part of a gist, of my view.
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff
    Yes, and this is an important way that the logic reflects the commitments or intentions of its creators. It is not logic qua logic; it is logic qua mathematics.Leontiskos

    Well, I brought up the issue, so I'm bound to say there's something to this.

    On the other hand, I'm hesitant to endorse what you say here because mathematics is special, and there's a sense in which mathematics is the goal of logic, the goal of thinking as such. (I think there are hints of the excitement of this discovery almost everywhere in Plato where he rattles off the list -- argument, mathematics, astronomy, and so on.) --- And that means "qua mathematics" is not generally a restriction of anything, a limiting of it to this one domain, but an idealization of it.

    And it's historically backwards -- but maybe that was deliberate? Frege was trying to reduce mathematics to logic, not the other way around, and that turns out not quite to work, but in trying to do so, he came up with a formalization of logic which could be extremely useful to mathematics rather than providing its foundation. A sort of logic "adapted to" mathematics, or to the needs of mathematics, which is what I was suggesting --- although this time around I've already suggested this isn't necessarily a deformation of logic by focusing on a limited domain, so much as an idealization of logic by focusing on the domain that most cleanly, we might say, represents human thought. And as it happens, I think Frege thought so as well. I think he was mostly of the opinion that natural languages are too much of a mess to do sound work in.

    Do all those steps amount to "logic qua mathematics"? Maybe kinda, in a dyer's hand sort of way. There's a lot that makes it look like a branch of mathematics, and the advanced stuff tends to be called "mathematical logic" and get taught in math departments. But that's a deeply tricky business because basic logic is the fundamental tool of everything done in mathematics, absolutely everything -- it's just taken as given at lower levels of learning, without any suggestion that you're actually borrowing from some rarefied advanced field of mathematics.

    So I think advanced "mathematical logic" is something like "mathematized logic" -- that's qua-ish maybe in the sense you meant -- but what that means is applying the tools and techniques of mathematics to the given material that is logic, which mathematics can treat of, because mathematics is good at treating of anything. (That's the whole point.) And one of the techniques mathematics brings to bear in treating of logic is, well, logic, because mathematics was just borrowing it for free in the first place.

    Still agree?

    I actually think ↪fdrake's post may be most instructive and fruitful.Leontiskos

    Wouldn't be the first time, but he was addressing the topic, and I have yet to develop an interest in doing that.

    Now if quantifier variance is occurring—superable or insuperable—then the existential quantifier is doing more than presupposing a univocal notion of existence. Or, if you like, the two secretly competing meanings of existential quantification are each “presupposing” a different notion of existence, and this is the cause of the disagreement. Thus arises the very difficult question of how to adjudicate two different notions of existence, and this is the point of mine to which you initially objected.Leontiskos

    Do as you like, I just don't see the point. We can talk about existence all we like without dragging quantifiers into it, and people -- they're always wandering around the forum -- who get worked up about the meaning of the "existential quantifier" are generally just confused by the name (a name I note Finn and Bueno would like to retire).

    It's a funny thing. This is all Quine's fault, as I noted. "To be is to be the value of a bound variable" comes out as a deflationary slogan, but what he was really arguing for was a particular version of univocity: the idea was that if you quantify over it, you're committed to it existing, and he meant "existing" with the ordinary everyday meaning; what he was arguing against was giving some special twilight status to "theoretical entities". If your model quantifies over quarks, say, then your model says quarks are real things, and it's no good saying they're just artifacts of the model or something. --- The reason this is amusing is that all these decades later the consensus of neuroscientists and cognitive psychologists, so far as I can tell, is that absolutely everything we attribute existence to in the ordinary everyday sense -- medium-sized dry goods included -- is an "artifact of the model" or a "theoretical entity", so the threat to univocity Quine was addressing never actually existed, if only because the everyday meaning of "exist", the one Quine wanted to stick with, is in fact the "twilight" meaning he wanted to tamp down. And so it goes.

    What do you think of the claim that discrete entities only exist as a product of minds? That is, "physics shows us a world that is just a single continuous process, with no truly isolated systems, where everything interacts with everything else, and so discrete things like apples, cars, etc. would exist solely as 'products of the mind/social practices.'"Count Timothy von Icarus

    I don't think we are any more justified in saying this than we are in saying the world is full of distinct objects. All we have is signal processing. Is the source one signal? Two? Two trillion? How can you tell when you're receiving and analysing them all at once? It makes a difference in your metaphysics, but in nothing else at all that I can see.

    We don't see individual objects in isolation, but as embedded in and different from their surroundings, so difference is not a property of some putative completely isolated object, but a property it displays in its situatedness.Janus

    And you don't see any circularity here?

    Remember the issue was whether number could be a property of an object, and it just obviously can't unless sets count as objects. It's really straightforward and it pissed Quine off considerably.

    What's more ...
    (It is curious that we don't adjectivize numbers much at all, so even sets aren't said to be two-ish but to have cardinality of two. ((We have "once" "twice" and "thrice" for adverbs, but then it's on to "repeatedly" or "continually" or something.)) Maybe it's an Indo-European thing.)


    But then you brought in this other stuff about "diversity, sameness, and difference being real" which just begs another pile of questions. I'm at a loss.
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff
    If diversity, sameness and difference are acknowledged as being realJanus

    But what does that mean? Is "different" a property an object can have?

    Yes, I'm being a little cagey, but you can do better than a shrug.

    (And that's all for me tonight.)
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff
    number would be a real attribute of objectsJanus

    That's a tough sell, though. It was one of Frege's brilliant examples, that the logical form of "The king's carriage was pulled by black horses" is different from the logical form of "The king's carriage was pulled by three horses." This is the guy who (independently of Peirce, I believe) is going to invent our modern regime of quantifiers, because he noticed things like this.
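
    Roughly, with Hx for "x is a horse" and Px for "x pulled the king's carriage" (my abbreviations, not Frege's notation), the first sentence predicates "black" of the horses:

        \exists x\,(Hx \land Px \land \mathrm{Black}(x))

    while in the second, "three" never attaches to any horse at all -- it lives entirely in the quantificational scaffolding (and exactly-three would need a further clause ruling out extras):

        \exists x\,\exists y\,\exists z\,(x \neq y \land x \neq z \land y \neq z \land Hx \land Px \land Hy \land Py \land Hz \land Pz)

    No individual horse has a property called "three"; if I remember my Frege, that was his point: a statement of number is about a concept, not about the objects falling under it.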
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff


    Well yeah, Frege was a platonist. He was a pretty good logician, but he wasn't a god, and platonism is an inevitable and understandable mistake. :smiley-face:

    There are simple algorithms for determining whether a number is prime; it's a mechanical process that doesn't require what you call "rational insight" — Srap Tasmaner

    Machines are artefacts, are they not?
    Wayfarer

    What of it? Natural selection, for instance, is a mechanical, algorithmic process. Nature is full of them, without the need of a mind to have conceived them. That recognition is why Dewey thought Darwin would finally put paid to platonism in its many guises. That was over a hundred years ago, I believe, and people have yet to get the message. And so it goes.
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff
    You'll never teach the concept of prime to a Caledonian crowWayfarer

    Hmmmm. You know that sounds a lot like one of those things people say because it's so obviously true, right up until it's proven false. (Heavier than air flight? Are you mad?)

    There are simple algorithms for determining whether a number is prime; it's a mechanical process that doesn't require what you call "rational insight". Our intellectual superiority to the crow, in this case, is our greater capacity for purely mechanical, algorithmic thought-work. (In similar fashion, teenagers with essentially zero grasp of the niceties of algebraic geometry can solve quadratic equations for you all day long.)
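
    A minimal trial-division check, for instance, just to make the "purely mechanical" point vivid:

        def is_prime(n: int) -> bool:
            """Mechanically test n by trying every divisor up to its square root."""
            if n < 2:
                return False
            d = 2
            while d * d <= n:
                if n % d == 0:
                    return False
                d += 1
            return True

        print([n for n in range(2, 30) if is_prime(n)])
        # -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

    No insight anywhere in there, only bookkeeping.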

    Ah, but the concept, you'll say -- what extraordinary insight did it take to come up with the concept of primality? Eh. Primality is not subtle or complicated. If you do a lot of arithmetic, you're bound to notice that some numbers are a bit incorrigible in a similar way.

    I don't say that a crow would notice. I'm just pointing out that, as with everything, it's practice first then theory, if ever, and that what gets noticed is something about the experience of doing arithmetic -- no portal opens to reveal the crystalline realm of mathematics, with an altar to primality at the center.

    Neither am I denying that the noticing is where the action is, and we're damn good noticers. I would just want to be clear about what the noticing is and how it occurs before drawing any conclusions. --- And none of this says anything about whether numbers "exist" or whatever. That's the tail wagging the dog.
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff
    I actually find it odd to hear you say that quantifiers do not implicate existence (real or imagined).Leontiskos

    I mean, of course they implicate it, in the exact sense that they presuppose it -- but they don't have anything to say about it. Rather like the status that "truth" has in logic ... (Existence being not a real predicate, and in any given language neither is "... is true" -- need the metalanguage for that.)

    What's asserted in an existentially quantified formula is not really, say, "Rabbits exist," but the more mundane "Some of the things (at least one) that exist are rabbits." Or "Not all of the things that exist aren't rabbits," etc.
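
    In symbols, those two paraphrases are just the usual interdefinability:

        \exists x\,\mathrm{Rabbit}(x) \;\equiv\; \neg\,\forall x\,\neg\,\mathrm{Rabbit}(x)

    Read over the domain of "the things there are": at least one of them is a rabbit; equivalently, it's not the case that every one of them fails to be a rabbit. Nothing in either form says anything further about what existing amounts to.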

    And then there's all the complications that arise --- sortals and unrestricted quantification, vacuous singular terms, the elimination of singular terms, projectibility, the substitutional interpretation, et bloody cetera.

    Also I always think it's worth remembering that Frege's quantifiers, and the rest of classical logic so many of us know and love, were not designed as an all-purpose logic at all, but as what was needed to formalize mathematics. It's got some very rough edges when applied more broadly, about which there's endless debate, but it runs like a champ on its home turf.
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff
    I don't want to get embroiled in this threadLeontiskos

    Me neither. I've already spilled a lot of virtual ink on the forum about quantifiers.

    The problem here is that quantification derives from the meaning of 'being' or 'exists',Leontiskos

    But this I disagree with, so here we are.

    I don't think quantifiers have much of anything to do with existence or being or any of that. They're entirely about predication -- classification, categories, concepts. Quantifiers are about what things are, not that they are.

    It's amusing that Quine is more or less directly responsible for the revival of metaphysics in English-speaking philosophy. By suggesting that there's not quite nothing to say about ontology, and that what little there is to say is covered by logic, he cracked the door open for everyone from Dummett to his own former students (Lewis and Kripke). He tried to build a dam to hold back modal speculation and caused a monumental flood of the stuff. And so it goes.
  • Quantifier Variance, Ontological Pluralism, and Other Fun Stuff
    I just think there is a category error in supposing that numbers must exist or not exist.

    Rather, they are something we do. A way of talking about things. A grammar.
    Banno

    I think there are better answers, to do with how we use words.Banno

    Why words, though? I'm not googling, but isn't there somewhat robust evidence that some non-linguistic animals (crows, isn't it?) and infra-linguistic children have some rudimentary understanding of arithmetic? (With numbers befitting their size, of course.)

    What's more, there are, or have been, human languages -- and thus functioning human communities to speak them -- that only have "1, 2, many". So language doesn't directly lead to mathematics more advanced than crows and infants possess, even if it enables it (as it does, you know, everything).

    I think the gist of your approach is right -- that numbers are to do with us. I just wonder why you think it's to do with how we talk.
  • Purpose: what is it, where does it come from?
    Almost everything that matters happens when you are a child. — Srap Tasmaner

    Then there's no point living past puberty, right?
    Vera Mont

    I only meant, everything that matters for determining what sort of being you are. Your understanding of physics, geometry, numbers, your native language, social bonds and social cues -- etc etc etc.

    The point isn't even that you're finished by the time you're seven. Your brain's not even done yet. But you're set on your way and given the wherewithal to develop into something complete. What that will be depends on what happens to you, and of course on the choices you make, but how you make those choices is guided by what happened in those first years.

    Do you disagree? Are we born and remain autonomous free agents? Rationally, I suppose, choosing our values and so forth, decade after decade? -- I presume that's a caricature of your view, so what's the real view? We are formed

    over time, one observation, idea, judgment and commitment at a timeVera Mont

    certainly, but what's the nature of these? What's their origin? Do you freely choose what you notice? Do you choose what ideas occur to you? If you are moved by something you observe, something that changes your worldview or your values, did you choose to be so moved?
  • Purpose: what is it, where does it come from?
    This kind of all-or-nothing decision is made consciously, with a head full of passionately held ideals.Vera Mont

    Do we also consciously decide which ideals to hold, and how passionately?

    But are all commitments like that? Just habit or coercive circumstance?Vera Mont

    Ah, is this the issue for you? You're concerned that I'm downplaying if not denying the individual's agency, in favor of habit or circumstance?

    Yeah, I expect I am. I don't think you choose who you are or what you believe. You at most become aware of who you are, what you are, what you believe.

    When it comes to absolute commitment, dimly understood childhood conditioning is not a major factor.Vera Mont

    "Give me the child till the age of five-- " you know the rest.

    I really can't imagine what you have in mind here. Almost everything that matters happens when you are a child.
  • Purpose: what is it, where does it come from?
    I see purpose (now) as a settled state of mind beyond ordinary questioning about something significant, that serves to inform action or other beliefs, though flexible, if need be.tim wood

    Yes, I think that's closer. I was thinking similarly of a sort of comportment, a style, a way of doing things.

    The usual model boils everything down to decisions and preferences, but those are always open to change. Something no longer in play, if it ever was, isn't much like one of those, but more like a sort of framework for them. It's given. It will shape all the changeable stuff, channel it in a particular direction.

    But that's just a model. The question is whether we're really like that, and if so, why?

    There's Hume's line in the Treatise about the "belief" in (that is to say, unwavering commitment to) object permanence: he says there are things Nature has deemed too important to leave up to our fallible reason.

    What we're talking about looks something like that. (Not the sort of thing Nature left to the rational-agent, decisions & preferences model.) If it does develop over time, over the course of a life, it does so by a process we play little conscious role in. It's practically something that happens to us, like aging itself, not much like something we do. You wake up one day and realize you have principles (or prejudices), or feel you have a purpose, whatever. Not your doing, exactly, though somehow for that very reason close to the core of your identity -- because it wasn't up to you, any more than your identity in any other sense is.
  • Purpose: what is it, where does it come from?
    Jumping off a tall building would do it.Vera Mont

    Yes, exactly. The idea is to constrain your own agency, even to the point of extinguishing it, if necessary.

    It's a point of interest that we often find this sort of irrevocable commitment praiseworthy. I suppose the idea is that it takes a supreme act of agency to so constrain your future agency -- and then whatever praise later acts would normally get, if undertaken freely, is instead heaped upon the original act.

    When it's all praiseworthy, anyway. But it's up to us whether to call such stubbornness "principle" or "prejudice". (From the Latin, judging ahead of time.) And that determines how we take this:

    For lesser commitments, you don't; there is always the possibility of failing, chickening out or changing your mind.Vera Mont

    We certainly talk that way when we're in the mood to judge the behavior of others, but we know perfectly well it's not that simple. You don't really make choices about your blind spots, for instance. Exactly how to hold people accountable for prejudices they grew up with, and may only dimly be aware of, is rather hotly debated these days. Less so, though analogous, whether anyone should get credit for having been raised to have sterling or at least unobjectionable character.

    We know more too. We know that it can be terribly difficult actually to put into effect a choice we've made. We may firmly believe that some course of action would be "the right thing to do" and still not do it. Why? Who knows.

    Between the idea
    And the reality
    Between the motion
    And the act
    Falls the Shadow

    Anyhow, it's a known fact. So what appears to be principle or prejudice may be neither, but merely an inability to act otherwise, whether accompanied by an ability to think or choose otherwise or not.

    And all of this suggests, I think, that choice is just the wrong model here, or less helpful than it might seem; if there is a way of making a choice you can't unchoose, it's whatever enables that, that's really doing the work. Hence people reach for lots of things that aren't up to you: God, human nature, your individual nature, whatever.

    Bonus: some dialogue from an episode of Firefly I'm so fond of I may have posted it on here before.
    Spoiler
    It's from The Train Job. Mal and the crew of Serenity have been hired to steal what turns out to be a shipment of medicine for a town of miners with what amounts to black lung. Once they know, they decide to stealthily return it but are caught by the sheriff:

    Sheriff: You were truthful back there, when you said jobs were hard to come by. A man gets a job, any job, he might not look too close at it. But when he finds out more about a situation like ours, well, then he's got a choice.
    Mal: I don't believe he does.

    So there you go.
  • Purpose: what is it, where does it come from?


    There's social contract stuff that comes readily to mind. Schelling's credible deterrent scenario is a surprise application of that. -- That was meant more as a "structural" analogy, because while the social value of predictability on your part is obvious, what good is it to you?

    It's the sort of temporary move you make all the time, just to be able to think: you hold some variables as fixed, just for the moment, so you can see what the others do. But why fix them forever? And how?

    One of the talking heads in the Heidegger film, Being in the World, attributes to Kierkegaard the idea that we can't be the source of all the meaning in the world, because if we were we could also take it all away.

    I think we're kind of in that territory. It's easy enough to see why you'd want someone else to believe you'll hold up your end, keep your promises, honor your contracts, and all that, but how do you convince yourself and why would you?

    It looks like it has to be a slightly different mechanism, and in fact that's the point of Schelling's scenario: removing the wheel is a move which *changes* the game. The game is built on each side swearing they won't turn but the other side knowing they still might anyway, even if they honestly believe they won't.
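
    With textbook chicken payoffs (my numbers, purely illustrative), the change is easy to see:

        \begin{array}{c|cc}
                        & \text{Swerve} & \text{Straight} \\ \hline
        \text{Swerve}   & (0,\,0)       & (-1,\,1)        \\
        \text{Straight} & (1,\,-1)      & (-10,\,-10)
        \end{array}

    Before the commitment, each driver wants the other to believe he'll go straight, but neither can make the threat stick. Throwing the wheel out the window deletes the committed driver's Swerve row; the other driver is then choosing between -1 (swerve) and -10 (straight), and swerves. That's the sense in which it's a different game, not just a clever move in the old one.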

    For an individual, how do you make a commitment to yourself you can't back out of?

    (I'm passing over a lot of interesting stuff.)
  • Purpose: what is it, where does it come from?
    I recognize I have a standing purpose of never being in a position of not having clean clothes. Call it rule.tim wood

    Thomas Schelling gives the example of two drivers playing chicken, and one of them pulls his steering wheel off and holds it up so the other driver can see it.

    You seem to be talking about something near here, something that in some ways looks like a choice, but a choice that's no longer in play, one you can't go back on. (Maybe in some cases that's only relative, or temporary.) Such a commitment, one that's beyond our reach to go back on, is what you're reaching for with "purpose". Is that close?

    If that's the right analysis, that might explain why people are inclined to say that purpose comes from outside (from God, Nature, Aristotle, Darwin, whatever): either way you experience it as not up to you.

    But it does raise a question: what is this capacity to remove the steering wheel? How is this kind of commitment different from other choices we make and why do we do it? To what end?
  • A simple question
    A pipe-fitter can have intelligent children.Vera Mont

    in an unskewed sample of pipe-fitters, most of them will not have intelligent children.Lionino

    My father was a member of Plumbers & Pipe-fitters Local No. 5 for 35 years, though he spent most of that standing at a drafting table, cigarettes burning in three different ashtrays because every time he stopped to think he'd light a new one, then put it aside when he got back to it. He made beautiful drawings the men (and occasionally women) at the job site could actually use, full of thought.

    I can string thoughts together pretty well. My brother, on the other hand ... Well, there's your sample.
  • Purpose: what is it, where does it come from?
    if our world is so familiar that we treat it as an unchanging given, then we achieve no freedomJoshs

    Absolutely, and one reason I squeezed in the word "variable" up there somewhere.

    I think much of the challenge of freedom for us comes from our culture. We all swim in a sea of inherited ideas. It's all too easy to grab an off-the-shelf interpretation of anything, and that's not freedom.

    But "going it alone" or "starting from scratch" is just not an option, so your inheritance, and a certain ambivalence about it, is something else you have to be, well, both comfortable and uncomfortable with. I don't imagine feeling at home in the world as static, but taking it all as it comes, including your own occasional feeling of alienation.

    Something like that is what I think of as largeness of soul. Keats was a terribly unusual young man, who got here remarkably quickly. (The "negative capability" letter and the "vale of soul-making" letter are both earlyish, if I recall correctly, and probably only a few months apart.) And of course then there's his real hero, Shakespeare, who had an extraordinarily capacious soul.
  • Purpose: what is it, where does it come from?


    FWIW, I didn't mean anything metaphysical by the word "soul". I don't know whether Keats did.

    Purpose is a dynamically self-adjusting back and forth between self and world , remaking itself constantly both from the side of the organism and its environment.Joshs

    I think what interests me about the Keats is near here: you're not just born with an eternal soul, and that's what makes you special; it grows within you, or doesn't, through the process of living a life. The organism and the environment have memory, and the organism -- us -- can also reflect on those interactions, and develop some sense of how things are related, and the great variability of those relatings. There's a possibility there of coming to feel at home in the world, which can be very difficult for us. And in feeling at home, achieving freedom, which is also hard for us.

    I don't know if "purpose" is a great word for talking about all this, or a phrase like "the meaning of life", but they're all ways of trying to get at the surprising challenge of living a good human life.
  • Purpose: what is it, where does it come from?


    I think the first thing is to distinguish the sort of "purpose" you're talking about from any sort of goal, the sort that instrumental rationality is good with.

    That makes the issue of "being alive" a little tricky, because it's easy to say that this is the primary and overarching goal of a living organism, but it's also set apart, as that which enables any other goal. Is there something else set apart from such goals, perhaps also set apart from maintaining yourself as a living organism? I think there sort of is.

    @unenlightened gives you the first bit: this kind of purposiveness is something that inheres in living, in acting, in being, not something outside it. Getting your ducks in a row is a row-ly way of behaving with ducks.

    I also think @Wayfarer and perhaps @Leontiskos are on the right track -- though they might be surprised to hear me of all people say this.

    Here's how I get there. Goals we understand: the wolf on the hunt behaves in a goal-advancing way. But what about the wolf dozing a little, keeping an eye out, waiting, passing the time. I want to say that this wolf may not be pursuing a goal at the moment, but is still 100% being a wolf, behaving with perfect and complete wolfishness.

    And this calls to mind the way the Greeks talked about the essence of things, of plants, of animals, and of human beings as well, that biocentric vision they had of a thing growing into the most complete expression of its own nature, whatever that is. I don't think that requires mind, although for some things being minded is part of it. It is for us, and it is for a wolf.

    That's not an answer so much as an idea about how to think about or look for an answer. Some people seem to live purposefully, in the sense I mean, to have a kind of presence, a genuineness -- it isn't necessarily always certainty about what's right, but an engagement with the very idea of there being rightness. Some people don't. It can be hard for us, harder than it is for a tree or a wolf or a knife.

    One of those Greeks advised us: "Know thyself." Maybe that suggests that in our case there's no avoiding self-awareness and therefore, if we are to approach the sort of pure expression of essence that a tree or a wolf or a river has, we must first understand, must know something about what we are, not just be it. And that's why it makes more sense to say this sort of purpose is discovered rather than invented.

    I'll say one more little thing: I've always been attracted to Keats's -- what? observation? suggestion? -- that the world is "a vale of soul-making". Through suffering we grow a soul, and thus become more fully human, more than we were when we were born. I think that's the idea, and it's interesting to cast that Greek idea in these terms -- it's the growth not of your body but of your soul, that matters.
  • We don't know anything objectively
    Ever since I watched the movie "The Matrix" I have been troubled by how to tell what is real and what is not.Truth Seeker

    Sure.

    One thing about The Matrix, like other stories about caves and evil demons and vats, is that people will be inclined to say that there are two kinds of experience presented: the one where you are being fooled by a simulation and don't know any better; and the other where you discover the true state of things, that you're in fact a coppertop in a vat, that you don't actually have a job and an apartment, that you've never even walked around, that sort of thing.

    Of course, there's nothing in the story to guarantee that this second world of experience is "the real one". It could also be a simulation, right? (Looking at you, last sentence of Ubik.) Point of fact -- The Matrix is a movie, in which both of those worlds of experience are simulated, and you observed those worlds for a while from this one. According to your experience in this movie-going world, those worlds aren't real, either of them. But what about this world where you think you've been watching a movie called The Matrix? Could also be a simulation, right?

    (Aside: this is all about what's possible, and then the simulation argument adds claims about what's likely.)

    As it happens, the consensus of scientists seems to be that your experience really is in some ways a simulation: what goes on in your mind is your brain managing your body and keeping it alive by dealing with what it counts as the environment outside your body. We get glimpses occasionally of the slippage between our mental life and the real world, and maybe that's the source of this ancient worry (not always a worry, I guess, but sometimes a hope) that it's all an illusion. There is a very real sense in which it is.

    But by and large scientists don't seem to worry much about this snake eating its own tail, science itself being some sort of mass delusion or something. Why is that? Are they just less sophisticated than philosophers? Less imaginative?

    I don't think that's it. I think the difference is actually pretty simple. For example, every schoolboy knows that there's a real sense in which the objects of the world aren't themselves colored; that's just how we see, an artifact of our visual perception system, and there are other animals who see quite differently. How do you get from this mundane, but at first somewhat unnerving, observation to The Matrix?

    Abstraction. Abstraction and generalization, of a sort philosophers indulge in but not scientists and not ordinary people (and not even philosophers except when they're doing philosophy). Scientists make pretty specific claims about how specific sorts of physical systems work, but philosophers abstract away all those specifics and ask questions about perception "in general" or experience "as such". It's pretty straightforward these days (with computers and eye-tracking technology) to demonstrate that you have a blind spot right at the center of your "visual field" and you've never noticed it and cannot notice it. It's as if philosophers take that result as a demonstration that the blind spot "might be" all-encompassing! But if it were, there'd be no sense in which any such result had been "demonstrated". You see the problem here.

    I say all this not to answer your question -- I don't think it really has an answer, and if you're really into philosophy you might find that interesting. (Is it really a properly formed question? If it isn't, how and why do we ask it? What exactly have philosophers been up to for thousands of years, and how does it differ from what they thought they were up to?) No, I bring up the science because (a) you'll hear scientifically informed arguments to the same effect, and because (b) there are people who know in some detail to what degree our experience could quite robustly be called "illusory" who somehow are not overcome with the sort of skeptical vertigo you experienced upon watching The Matrix. I think it's important to know that they aren't, and why they aren't, though I've only gestured at a full explanation of that, and I'm not qualified to spell it all out anyway. But keep it in mind as you puzzle about reality and our relation to it.
  • Exploring the Artificially Intelligent Mind of Claude 3 Opus


    Well, we seem largely to agree on the general stuff, hurray for us, but I remain unimpressed. I can't even muster much curiosity about how the things work, so we diverge dramatically there.

    Incurious as I am, I've spent zero time playing with any of the available chat-things, so maybe that's why I haven't had the conversion experience.

    I can look at the one bit you posted that deals with one thing I posted a while back, and not only does it not get me right, it smears a layer of undergraduate term-paper weaseliness over the whole thing, which obviously I find distasteful. ("More often a function of"? Ick.)

    ((I got several paragraphs into explaining how it gets me wrong, but who cares.))

    Not for nothing, but the sheer undergraduateness of the product concerns me deeply. As if there weren't already enough superficial and mediocre thinking about. Apologies to Nick Bostrom, this is like a Supermediocrity -- better at sounding like you know what you're talking about than anyone outside the Oxford Union has ever been.

    Just today I read a piece in the Atlantic that thought it was about the Elo rating system (originally developed for chess but applied quite broadly now) but was in fact about Goodhart's law. I know just a bit about each, so I noticed that the author knew less about either. I have heard it said that this effect is actually pervasive in journalism -- that is, that most of it is written by people who don't really know what they're talking about, it's just that readers like me only notice when it's something we happen to know about. Most of what all of us read is bullshit. Thankfully, there are exceptions, journalists who put in the work to get it right rather than just to sound like they got it right.

    So here we have a machine that manages to sound like the sort of person whose intent is to sound like they know what they're talking about -- a sociopath, a pretentious undergraduate, a lazy journalist, a president of the Oxford Union. No, I am not jazzed for it to acquire the sheen of authority.

    Your interest is elsewhere, of course, the sort of limited mentality arising here. My list there is meant to suggest we already have plenty of examples. There's something interesting there, but so far as I can tell there's absolutely nothing new about these machines. They're just automating the worst of humanity.
  • Exploring the Artificially Intelligent Mind of Claude 3 Opus
    Your language model for LLMs is much larger than mine, but I'll chat a bit anyway...

    fine-tuned and aligned as chat agentsPierre-Normand

    It was apparent to me you were talking to a salesman. What they were selling was not apparent.

    akin to modes of impersonation of the authors of the texts present in the training dataPierre-Normand

    The mode that comes to mind is the sociopath. Or, as I said, the sociopath pro tem, salesman.


    distill the core ideas, re-express them in a more eloquent manner, and sometimes add relevant caveatsPierre-Normand

    If you take "Idea" to mean the propensity to produce certain words in certain circumstances, which okay maybe. I mean, human verbal behavior is in some serious sense no different; what's different is that our middling-to-large models run on a living organism, for which speech production (and consumption obviously) serve other purposes, actual purposes.

    And your standard of eloquence needs to be raised dramatically.

    such behaviors are merely polite and have been socially reinforcedPierre-Normand

    And serve a social purpose. Insofar as an LLM serves an analytical purpose, I'm fine with it. It's a tool. A tool to be used with considerable care and little trust.

    their verbal behavior is manifestly goal oriented. It aims at understanding your idea and at crafting responses that are found to be useful and understandable by youPierre-Normand

    And what metric does it have for understanding, either its own or yours, besides the verbal exchange between you? Is its understanding of you something other than its ability to produce a response? Is its helpfulness measured in some other way than by classifying your responses to its offerings? "Understanding" appears to have dropped out of the equation.

    Which, again, fine, whatever. I'm not about to fetishize human mental capacity. We're not magical beings that transform mere sounds into Meaning. But there remains a difference in kind between the sorts of goals we have.

    This honestly is one of the reasons that chat agent comes off as a sociopath: there are people for whom conversation is goal-oriented in this narrow sense, and they are people to steer clear of, or, if you must talk to them, you are very careful. This is a mode of faux communication available to human beings, but it's not the real thing.

    they don't have much in the way of personhoodPierre-Normand

    Indeed. Nothing behind those eyes, that smile.