• Srap Tasmaner
    5k
    A quick thought:

    Take as given a phenomenon we wish to explain. Ockham's razor is a widely recognized heuristic for choosing between competing explanations: choose the one that makes fewer ontological commitments.

    Why this should be a reasonable principle is difficult to explain. I don't have an explanation, but a way of framing the issue that might lead to one, or might just kick the can down the road.

    Think of ontological commitments as currency. Up for auction is an item I guess we'll have to call "having explained φ" for some phenomenon φ. If you bid more than I do, you may win the auction, but I believe you have overpaid.

    There are two points that come to mind. One is that you must actually pay. Weaseling out of your obligation to pay for the item you won at auction is not acceptable. But the other is whether it actually costs you anything to pay. Are ontological commitments a limited resource? Some theories might consider a rival theory a spendthrift, but if that theory will never run out of money, why should it care?

    Within broader human society, there is an authorized source for currency, and that entity fixes the (limited) amount of it in circulation, and thus available for use at auctions. There is only one explanation for someone having an unlimited amount of money: they are committing a fraud of some kind. Perhaps they are simply printing their own money.

    And there is this suspicion at the ontological auction, that in some cases the ink is not quite dry on the ontological commitments offered as payment by the winner.

    So perhaps this is the difference: some view the auction as having an authorized, sanctioned form of currency, and some assume that everyone is within their rights to print as much currency as they need. (How these two groups should end up at the same auction is a mystery.) Some are noticeably better at printing currency than others, and it turns out that's the real contest.

    I think the key argument for controlling currency is that unlimited currency is simply not currency at all, and that widespread fraud will destroy any market that relies on that currency. Thus the so-called auction was a sham all along.
  • T Clark
    13.9k
    Take as given a phenomenon we wish to explain. Ockham's razor is a widely recognized heuristic for choosing between competing explanations: choose the one that makes fewer ontological commitments.
    Srap Tasmaner

    I looked up "ontological commitments" on Wikipedia. It was pretty confusing. Here's a definition of Ockham's Razor from the web that I find easier to understand: "Suppose there exist two explanations for an occurrence. In this case the simpler one is usually better. Another way of saying it is that the more assumptions you have to make, the more unlikely an explanation is." That seems like a reasonable rule of thumb to me. Does that mean that if you have an explanation and I come up with a simpler one, yours is wrong? Probably not. Should you have to explain why your explanation should be chosen over the simpler one? Maybe.

    Some possible reasons why simpler means better:
    • Easier to explain, understand, remember, and use
    • Less chance for inductive or deductive error
    • More elegant
    • What else?
  • Srap Tasmaner
    5k

    The Wikipedia article also gives the "original":
    "Entities are not to be multiplied without necessity" (Non sunt multiplicanda entia sine necessitate).
  • fdrake
    6.7k
    I think it might be illustrative to try to come up with a case where only Occam's Razor distinguishes between the accepted and unaccepted accounts. Imagine that there's some set of explanatory entities A which gives a good account (whatever that is) of some phenomenon X. Further, imagine that there are two other (disjoint) sets of explanatory entities B and C whose union gives an equally good account of X. If it were stipulated that A is a proper subset of B ∪ C, then Occam's Razor chooses A.

    But this seems to be a truism: of course A would be chosen, since it's constructed to contain fewer explanatory entities. The artificial things in the account, I reckon, are:

    (1) The assumption that explanatory entities can be counted in a straightforward manner within theories. How can parts of an explanation be quantised in a seemingly independent manner?

    (2) The ability to tell whether an idea or account is a sub-idea or sub-account of another is not straightforward either. Theories resist being enumerated or collapsed into sequences of propositions.

    I think if you grant that (1) and (2) are in principle possible and also that theories consist solely of events (dubious for ontologies), you can get some mileage out of probability theory.

    Imagine we're in the same setup as before. There's a phenomenon X to be explained, and we're looking for sets of events that explain it. For X to happen, there has to be a collection of events which make X happen and only X happen*1*. Since there's a collection of events which make X happen, there has to be a smallest collection of events which make X happen*2*. Call this smallest set A. Then we have that if a theory contains A, it accounts for X.

    This has the effect of saying that the probability of X given C, where C is a superset of A, is equal to the probability of X given A. That is to say that no additional information about X can be obtained by adding to A - specifying that other things have to happen. Any other account B would be an intersection of A with other events*3* which is less likely than A. In fact, A is the most likely theory.

    I've been fast and loose in saying that X is an event and the things accounting for it are events, it isn't going to be that clear cut in a real account - where statements don't necessarily correspond to any particular event or events at all. But hopefully it's useful as a first approximation.

    *1* assuming that this applies without loss of generality by specifying X as a compound or disjunctive event if required
    *2* assuming uniqueness here
    *3* since only A is relevant for X and A occurs as well as other things

    edit: another assumption is of course that intersection is a good model for combining ideas to make a theory - unlikely.
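
    To make the argument concrete, here's a toy numerical check (a minimal sketch in Python; the coin-flip sample space and the particular events standing in for X, A, and an irrelevant extra E are invented purely for illustration):

        from itertools import product

        # Toy sample space, invented for illustration: all outcomes of three
        # fair coin flips, uniformly weighted.
        omega = list(product([0, 1], repeat=3))

        def prob(event):
            # Probability of an event (a set of outcomes) under the uniform measure.
            return len(event) / len(omega)

        def cond(x, given):
            # Conditional probability P(x | given).
            return prob(x & given) / prob(given)

        # X is the phenomenon; A stands in for the minimal account (it guarantees X);
        # E is an irrelevant extra commitment, so the "padded" theory is A & E.
        X = {w for w in omega if sum(w) >= 2}               # "at least two heads"
        A = {w for w in omega if w[0] == 1 and w[1] == 1}   # "first two flips heads"
        E = {w for w in omega if w[2] == 1}                 # "third flip heads"

        assert cond(X, A) == 1.0 and cond(X, A & E) == 1.0  # no extra information about X
        assert prob(A & E) <= prob(A)                       # the leaner account is likelier
        print(prob(A), prob(A & E))                         # 0.25 0.125

    Conditioning on the padded theory tells you nothing more about X, and the padded theory is itself strictly less probable here, which is the sense in which the smallest account A wins.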
  • creativesoul
    12k
    Hey Srap...

    Yeah, it seems to me that Ockham's razor is a standard by which to choose between competing explanations. I take it to mean that when faced with competing explanations for the same phenomenon/event, we ought to choose the one which is based upon the fewest unprovable assumptions and has the greatest explanatory power.

    I'm not sure that fiat currency is an adequate parallel to thought/belief about the way things are/were.
  • jorndoe
    3.7k
    Suppose we've found some adequate and sufficient understanding of something, for now at least.

    It seems conceivable that we could then come up with numerous other (much) more complex, but otherwise adequate and sufficient, accounts.
    We could even keep on inventing some, just for the heck of it, perhaps with increasing complexity.

    We might also find a simpler, adequate and sufficient account.
    Simpler typically means less chance of error / higher chance of subsequently discovering errors (and easier to comprehend).

    While considering all this, it seems like parsimony is a reasonable heuristic.
    Not a guarantee of getting things right of course, but reasonable nonetheless.
  • Srap Tasmaner
    5k
    That is to say that no additional information about X can be obtained by adding to A - specifying that other things have to happen. Any other account B would be an intersection of A with other events*3* which is less likely than A. In fact, A is the most likely theory.
    fdrake

    So this would explain Ockham's razor with one more step, that it is rational to select the theory most likely to be true, and that relates to the other usual version of the razor, that the simplest explanation is most likely to be true. Yes?

    It's nice to be able to prove this version of the razor. Violating it, on this view, is just violating a different norm than what I was headed for, but I assume I'll connect them one of these days, when the Grand Theory of Rationality has revealed itself to me.
  • Srap Tasmaner
    5k
    Simpler typically means less chance of error / higher chance of subsequently discovering errors (and easier to comprehend).
    jorndoe

    Yes, I agree with everyone's alternative characterizations of the razor. What's missing is why we should care. For instance, why should you want to make fewer errors? I see a calculation related to mine, given above: it's inefficient to waste resources by making errors you could easily have avoided. Implicitly you're designing an auction where we bid time and effort, and if you spend more than I do, you've overpaid.

    ADDED: there's a coder's maxim that you shouldn't make a program as cleverly as you can, because if it has a bug, by definition you're not clever enough to find it.
  • fdrake
    6.7k


    Yeah, that was an attempt to show that applying Occam's Razor, under some arguably unreasonable constraints, gives theories which are more likely to be true. If you're still reading Jeffrey's book, the argument relates to the ideas of 'sufficient statistics' and 'minimal sufficiency'. I'm really not convinced that applying probability theory in that way works for arbitrary theories - especially ontologies - but I find it quite convincing as a simplified model.

    edit: Though since I've not read the book I don't know if it contains those parts of estimation theory.
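
    To illustrate the 'sufficient statistic' connection (a minimal sketch; the Bernoulli coin-flip example and the toy numbers are mine, not taken from the book): for independent 0/1 flips with bias theta, the likelihood depends on the data only through the number of heads, so that count is sufficient, and retaining anything beyond it adds no information about theta.

        def likelihood(theta, data):
            # Likelihood of an i.i.d. 0/1 sample under Bernoulli(theta):
            # theta^s * (1 - theta)^(n - s), which is a function of the data
            # only through s = sum(data). The sum is a sufficient statistic.
            s, n = sum(data), len(data)
            return theta ** s * (1 - theta) ** (n - s)

        x1 = [1, 0, 1, 1, 0]   # three heads in five flips
        x2 = [0, 1, 1, 0, 1]   # same count of heads, different order
        for theta in (0.2, 0.5, 0.9):
            assert likelihood(theta, x1) == likelihood(theta, x2)

    Two samples with the same count are identical as evidence about theta, mirroring the step above where conditioning on a superset of A adds nothing about X: the minimal account already carries all the information.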
  • Srap Tasmaner
    5k
    Something else I felt my model captured was our sense that ad hoc explanations are a kind of cheating, printing your own money as needed.
  • unenlightened
    9.2k
    Applying Occam's razor, "God did it" is an economical explanation for everything.
  • Srap Tasmaner
    5k

    Yeah, "number of entities" posited is not a good measure. (Unless positing God is positing one more than you need.) It's really number of types of entities we care about. You could use Dawkins' complexity measure, that a designer for the whole universe would be have to be even more complex than the universe we're explaining.

    It still feels to me like a question of how much of some resource or other is too much to pay for an explanation.
  • Michael Ossipoff
    1.7k
    I guess if one theory is a little simpler than another theory, and needs just a few fewer assumptions, then that makes it a little more acceptable, more plausible, more appealing, and more likely to be true.

    But what if we're comparing a metaphysics (like Materialism) that needs an assumption, a big brazen arrogant brute-fact, vs a metaphysics that doesn't need or use any assumptions or brute-fact at all?

    ...one that doesn't need any assumptions because it's inevitable?

    I've been proposing such a metaphysics.

    Michael Ossipoff
  • Srap Tasmaner
    5k

    You are of course free to bid 0, but there's a chance your bid will not be taken seriously.
  • creativesoul
    12k
    Yes, I agree with everyone's alternative characterizations of the razor. What's missing is why we should care. For instance, why should you want to make fewer errors?
    Srap Tasmaner

    Well, leave aside the fact that great wealth, as well as American and much of international law, can correct many if not most personal mistakes...

    Successfully navigating the world requires it.



    ADDED: there's a coder's maxim that you shouldn't make a program as cleverly as you can, because if it has a bug, by definition you're not clever enough to find it.
    Srap Tasmaner

    The same notion applies to one's worldview and/or thought/belief system. It takes an other to point out an error. That's because thought/belief systems are accrued and self-contained, and much like pure induction... well, you get the point, right?
  • Srap Tasmaner
    5k
    well, you get the point, right?
    creativesoul

    No.
  • creativesoul
    12k
    It is utterly impossible to make a mistake on purpose. It's likewise utterly impossible to knowingly believe a falsehood.

    However, making a mistake is not 'bad', in and of itself. I mean, it's an inevitable element of learning.

    I just found it quite odd that anyone would actually ask why we should want to make few (or the fewest possible) errors.
  • Michael Ossipoff
    1.7k
    You are of course free to bid 0, but there's a chance your bid will not be taken seriously.
    Srap Tasmaner

    You mean "Too good to be true"?

    Sometimes something that's too good to be true is true anyway.

    Do you not agree that my proposal doesn't make any assumptions or posit any brute-facts?

    I suggest that a person's life is one of infinitely-many life-experience possibility-stories, and that our physical world, the setting for that story, is one of infinitely-many possibility-worlds.

    Such a story consists of a hypothetical system of inter-referring abstract logical facts: if-then facts about hypotheticals.

    The whole thing is hypothetical.

    It's inevitable, because the abstract logical facts of which it's composed are inevitable.

    Among the infinity of such systems, it's inevitable that there's one that has the same events and relations as our world. There's no reason to believe that our world is other than that.

    You could still believe that our universe is, additionally, superfluously, something more than that. But that would be an unfalsifiable, unnecessary brute-fact.

    This was all said by Michael Faraday in 1844, except that he was talking about a possibility-world, instead of a life-experience possibility-story.

    I personally feel that the individual-experience point-of-view makes more sense than the universe-wide 3rd person objective point-of-view.

    Michael Ossipoff
  • Srap Tasmaner
    5k
    I just found it quite odd that anyone would actually ask why we should want to make few (or the fewest possible) errors.
    creativesoul

    That is just about the nicest thing you could say to me.
  • creativesoul
    12k
    Glad to be of help...

    X-)
  • Srap Tasmaner
    5k
    Lots of other thoughts about my little model, so here's one more.

    How is payment made? By committing to uphold what you bid.

    Suppose "having explained consciousness" is up for auction. I may choose not to win rather than bid panpsychism. I don't get to claim I've explained consciousness, but I feel it's better to lose than win at such a price. I could even continue to work on my bid in the belief that the winner will eventually also feel he overpaid and give up panpsychism. Then the item goes up for auction again. Some may see this strategy as too risk averse, and some applaud the winner for his daring.
  • creativesoul
    12k
    I probably missed something, although I just re-read the thread to check for myself...

    What counts as winning again?

    :-|
  • creativesoul
    12k
    Does winning this auction require recognition from convention/professionals? Accepting the justificatory burden that comes along with one's bid? Convincing others? Having the fewest unprovable assumptions and an equivalent and/or a greater scope of explanatory power than the other bids? The ability to tie seemingly disparate long-held notions together seamlessly? The ability to effectively refute otherwise long-standing philosophical issues? The ability to solve, resolve, and/or dissolve long-standing 'problems' by virtue of having the broadest possible scope of rightful application?

    What counts as winning again?

    :(
  • Srap Tasmaner
    5k

    Ockham's razor is about competition among theories. One form such competition might take is an auction, and it occurred to me that philosophers do sometimes express their theoretical preferences in terms of cost or price. Quine, for instance, agonizes over whether admitting sets into his ontology is too high a price to pay for mathematics. (It's not. No price is too high for math.)

    So what's up for auction is "having explained" something. The way I've presented it is as if only a single bid is above the reserve price.

    It was a quickie post, and I'm honestly not sure yet if there's enough here to be worth formulating better. But I am finding the idea of formulating competition among theories in some way like this quite appealing.
  • Srap Tasmaner
    5k
    Here's another way to characterize the competition among theories that makes more sense in some ways:

    Competing theories offer for sale comparable products, purported explanations of φ, for some phenomenon φ, and the cost, as above, might be measured by the ontological commitments you must make. Competition will naturally place some downward pressure on these commitments,* but purchasers will also have preferences that lead them not always to choose on price alone: they may feel one theory's product is of higher quality, that another doesn't actually work (i.e., explain φ), etc. Some purchasers may choose not to buy at all if they cannot find a price they consider fair.

    Here questions of who is paying too much, who might be printing their own money, etc., are shifted to those who espouse a theory.

    So that's what one little stall in the marketplace of ideas looks like to me.

    * ADDED: If ontological commitments are treated as a limited resource like a currency.
  • creativesoul
    12k
    I think I'm following you. Sure, different people hold different reasons for choosing one ontological commitment over another, or as you imply, choose to suspend their judgment by virtue of remaining willfully undecided/ambivalent/agnostic.

    I tend to critically examine conceptual (linguistic) frameworks. You've chosen to speak in phenomenological terms.

    I'd like to know why...
  • Cabbage Farmer
    301
    Take as given a phenomenon we wish to explain. Ockham's razor is a widely recognized heuristic for choosing between competing explanations: choose the one that makes fewer ontological commitments.

    Why this should be a reasonable principle is difficult to explain. I don't have an explanation, but a way of framing the issue that might lead to one, or might just kick the can down the road.
    Srap Tasmaner
    I'm not sure I understand your way of framing the principle as an auction.

    I suppose the simplest explanations would explain everything in terms of a single, simple principle. What could be simpler than "God wills it", for instance?

    A knack for wielding Ockham's razor, traditionally associated with the fragile custom of methodological naturalism, balances the principle of parsimony against the principle of "explanatory power" or "predictive power", or some such counterweight.

    Use of the razor is thus related to the view that "theory is under-determined by data". Given the same batch of data or phenomena, the same record of observations, "competing theories" somehow give different accounts with equivalent "predictive power".

    You can take any useful theory and add "... and Zeus wills it", and now you have two accounts with the same predictive power. You can take the latter account and add "...and Apollo wills against it, but Zeus trumps Apollo", and now you have three accounts, and it's clear we can proceed this way indefinitely.

    Since our time is limited, and since cognitive and social resources are limited, it seems more reasonable to prefer the simplest account, all else equal.


    Now I suppose if more "currency" than required is printed -- more entities posited and more theoretical complexity constructed -- to purchase an explanatory model with the same predictive power, there's a devaluation of the currency relative to the "real value" of the model as expressed in terms of predictive power.

    Unfortunately, not all use-value of an explanatory model lies in its predictive power. If you can persuade more laborers to work harder for you by paying them less with your devalued currency, and if you can get more consumers to purchase more of your goods at higher prices, just because they like the brand of your inflated story better than a more efficient story -- you might think it's in your own self-interest to propagate that story, devalue the currency, and reap the benefits.


    I suppose those benefits must be weighed against the costs of inefficiency, which may vary from one case to the next.
  • Janus
    16.5k


    I can see how it might cost more (being less beneficial in terms of explanatory power, which has practical uses) to hold one explanatory theory rather than another. Homeopathy compared to modern medicine might be an example of this, if the former does not cure the sick as effectively as the latter.

    I am curious, though, as to what kinds of costs/benefits you think could obtain when it comes to "ontological commitments". If I enjoy cleaving to one ontological commitment more than another, even if it merely makes my life more bearable, would that count as a good reason to hold it? I'm struggling to think of any other costs/benefits apart from those that might involve the opinions of others about us. That might be a concern for some; but it could easily be remedied by associating with the like-minded.

    It would seem that it is only, as with Pascal's Wager, when implications for after this life are posited, that holding one ontological commitment over another could have serious costs/benefits.
  • Srap Tasmaner
    5k
    Since our time is limited, and since cognitive and social resources are limited, it seems more reasonable to prefer the simplest account, all else equal.
    Cabbage Farmer

    I'd now say what I was groping for was just a market rather than an auction, which in retrospect is obvious. I can see a case for all sorts of things, such as those you mention here, being usable for trade, but I went with ontological commitments because that's one common form of the razor, and it still feels relevant. Is there an entity that is the meaning of a sentence? Is there an entity that is my eating this sandwich?

    Ockham says entities are not to be multiplied beyond necessity. If I say I have a satisfactory account of everything you do, but without one of your assumptions, then I'm saying one of your assumptions is unnecessary. (You will counter that my explanation fails and this additional assumption is necessary.)

    But so what? Why do we demand that a theory's assumptions be not only sufficient but also necessary? Why this minimalism? I'm trying to take seriously what appear to be our intuitions about such matters, which suggest that indeed there is some sort of implicit accounting going on and that there is something like a market for theories. (Suggests to me, anyway, though I may be the only one.)

    I'm struggling to think of any other costs/benefits apart from those that might involve the opinions of others about us.
    Janus

    Yes, well, I alluded to this above, and I think it bears looking into.

    I'll add this: I've referred several times to what would count as currency, who would decide, and so on. For instance, why does Zeus, whom @Cabbage Farmer mentioned, not figure more prominently in our discussions?

    I think there are wider and narrower ways to approach this: the narrower would be the sort of thing that turns up in Lewis's "Scorekeeping in a Language Game," where what you're allowed to assume, to rely on, is implicitly negotiated as you go along; wider could be very wide indeed, but we can at least say that "As Zeus wills it" is apparently not an option ruled out just in this conversation or that, but more widely.

    (I have a feeling I'm not making much sense yet, but thanks for chipping in, guys.)