Comments

  • Does Morality presuppose there being a human nature?
    It seems to be the case that the majority of people on here don't think there is a "human nature" as such.
    bloodninja

    Really? What do you mean by "human nature," anyway? What would be the difference between possessing and not possessing "human nature?"
  • What happened to the Philosophy of Science forum?
    It may just mean that this forum, and web forums like this, are dying. Perhaps they really are past due, like the BBS or Usenet of old (anyone still remember those?)
  • Does infinity mean that all possibilities are bound to happen?
    "A line is infinitely divisible" which is a finitely describable definition of a rule

    with

    "A line has an infinite number of segments" which cannot be represented in our syntax.
    sime

    I have read what you have written in this thread up to this point, and I still don't see what difference you are getting at here.

    And I cannot think of a compelling reason to see the axiom of infinity is anything other than a meaningless syntactical rule for manipulating finite syntax that represents nothing and lacks real world application, with the possible exception of representing things that are not infinite.
    sime

    What makes you think so? The mathematics that is usually thought of as relying on such notions - mathematical analysis, linear algebra, etc. - is extremely useful for describing the real world. One could make the argument that the same could be accomplished without recourse to infinities - that's what the finitist project is about. But whatever one thinks of the successes and the prospects of that project, it can't take away the fact that standard mathematics has many real-world applications.
  • Emergence is incoherent from physical to mental events
    Correct, so I guess the claim is they are two radically different categories then, and that the former theory of ontological sameness is itself incorrect based on its radical difference that cannot be explained by heaping on yet more physical theories.
    schopenhauer1

    And like I said, just stating the claim is not informative or productive. "Mental," "physical" - these are just words that don't stand in relation to anything in particular, until you unpack them and show how you use them in ways to which we all could relate. It's quite possible that, given your meaning, the claim is true, and even banal and self-evident. And then there would be no argument, because those who think that mental could emerge from physical obviously mean something else.
  • Emergence is incoherent from physical to mental events
    Claim: Emergence only works from physical to physical events. Emergence is incoherent from physical to mental events. Thoughts?
    schopenhauer1

    Depends on what one means by mental events, I guess. Such canned statements are pretty meaningless without showing what philosophical scaffolding underlies them.

    If you think of mental and physical as belonging to the same ontological category, then there is nothing strange about the idea of the relationship of emergence holding between them. If you frame these two concepts as belonging to radically different categories, then of course the idea of emergence will be incoherent.
  • Simultaneity, Sameness, and Symmetry– or a complete lack thereof
    You don't understand what he is talking about. If you don't understand something, it is better to ask than to try and fake it, otherwise you will just look foolish. The issue that apo is alluding to has nothing to do with fractals (measuring the coastline) or with "digits," with which you've become obsessed. It's historically known as The Ultraviolet Catastrophe - look it up.

    Small problem. Nature turns out to be quantum. There is a fixed fundamental grain of action and dimension. So spacetime and energy are discrete and not continuous at the bottom-most scale of things.
    apokrisis

    So, two things: First, to say that "nature turns out to be quantum" because of such-and-such features that allegedly lie at the "bottom-most scale of things" is a thoroughly reductionist statement (and - oh horrors - bottom-up reductionist!) Which is ironic, given how apo likes to carp about reductionism. So nature cannot be anything above and beyond what said "bottom-most scale" expresses - not really. Now, I am not necessarily objecting to this view - just highlighting its philosophical presuppositions, which no one is obligated to accept on faith.

    Second, it's just not true. If you are a reductionist and you are going to bet your metaphysics on today's quantum physics, then there is nothing there that says that "spacetime and energy are discrete and not continuous at the bottom-most scale of things." This is well-known to anyone who studied quantum physics to any extent, so I won't even go into this. There are some speculative developments that seek to quantize spacetime at the fundamental level, but so far they have not been successful, and they certainly do not constitute the theory as it stands now.

    Nevertheless, quantum physics sort of vindicates your original idea, though not for the same reasons (nothing to do with numbers). Although space is continuous and infinitely divisible in quantum physics, and any spatial interval can be expressed there without a problem, the theory undermines the idea of the world consisting of sharp-edged objects with definite sizes. And more generally, it suggests that one does not have to be a realist about familiar, "classical" properties of things like position and momentum. Things at quantum scale aren't necessarily what they are at our everyday human scale. And that is an idea worth considering, even if you are not a reductionist, or if you are holding out for a future "bottom-scale" theory that would replace quantum physics.
  • Simultaneity, Sameness, and Symmetry– or a complete lack thereof
    Thanks for your input SophistiCat. I do understand the importance of using a standard unit to obtain consistent measurements
    MikeL

    No, you don't understand. The point is not about making consistent measurements, the point is about how we make any measurements.

    Your idea is that we measure one thing, then we measure another thing, and then we compare the two numbers. But when you measure something, what are you measuring it against? What is an "inch" or a "meter"? They are nothing other than objects that you use as standard measures. When you measure the length of something, you are already comparing one object (what you measure) against another (the measuring instrument, the ruler). So your idea that the only way to relate the sizes of two objects is to measure each of them separately and compare the results is exactly backwards. We compare objects already as part of any measurement. That is what it means to measure something: it means to directly compare the same property, such as length, in two different things, one of which serves as the standard measure.

    Now of course we can't measure anything exactly. And measuring something as exactly pi units, as suggested, is no easier than measuring something as exactly 1 unit: either way, there are a number of factors that will limit the accuracy and precision of your measurement, so that it can never be exactly x units, whatever x is and whatever the units are (unless you take the object itself as your standard measure - in which case you do have the exact measurement of 1 unit, since every object is identical with itself!) But if you are a realist (at least about spatial dimensions) and believe that things possess sizes independently of our measurements of them, then I don't see why you regard our inability to perform an exact measurement as a confounding ontological issue. And if you are not a realist, then your conclusion that objects don't possess definite sizes follows directly from that, and you don't need to confuse yourself with any "digits."
  • Simultaneity, Sameness, and Symmetry– or a complete lack thereof
    If space is infinitely divisible, nothing can be measured accurately as there is no accurate measurement to give – the decimals keep rolling.
    MikeL

    Who is doing the measurement? How? And what for?

    When I measure the size of some object, I am comparing its size with the size of some standard ruler. But that means that there is an object whose size is not measured - it is taken as a standard unit for all other measurements. So there is at least one object in the world - the chosen standard ruler - whose size is known exactly: it is exactly 1 standard unit. Any object whose size is also 1 standard unit will have the same size as my standard ruler. And since I can choose anything as my standard ruler, then anything can have the size of exactly 1 standard unit, so those rolling decimals are not an issue (I have no idea what you think the problem is with decimals, anyway).

    In practice, of course, the more precise length measurements are made using less direct methods, but ultimately the measurement comes down to the same principle: we compare something against another thing that we take as the standard unit of measurement.

    The bottom line is that in order to make a measurement, we compare one thing against another - which is exactly what you conclude is impossible, because we cannot measure anything (according to you, because of the decimals :s). So you've got it exactly backwards: We measure by comparing things to each other, not the other way around.
  • The Double Slit Experiment
    Here is a forum that would better suit your purposes:

    http://www.thescienceforum.com/personal-theories-alternative-ideas/

    Your posts here are off-topic.
  • Is science equal to technology?
    From 19th century onwards, our civilization's concept of science is full of technological connotations. However, before the 19th century at least, we know that there was a different scene. Scientific disciplines are under the umbrella of philosophy.
    Pacem

    I think technology is a red herring here. What changed in the course of the scientific revolution is a specialization of scientific epistemology. You can still consider science as "natural philosophy," but it isn't just any kind of philosophy directed towards the natural world. In the past "natural philosophy" could mean "thinking really hard" (in the words of Mike Alder), developing some intricate metaphysics, delving into numerology, or Biblical exegesis. Science nowadays primarily relies on empirical epistemology. That is not to say that empiricism is unique to science, or that non-empirical considerations play no role in science, but the overwhelming emphasis on empiricism is what sets science apart from the rest of philosophy.

    That and the social structures and procedures that have crystallized over the last several centuries and have become uniquely associated with science.
  • Interpretations of Probability
    I think so. But I don't think this accounts for whether Bayesian approaches to AI and the mind are correct or not. In my view AI questions about Bayesian methods are 'does this statistical model learn in the same way humans do?' or 'is this statistical model something like what a conscious mind would do?', but epistemic questions are 'does this interpretation of probability make sense of how probability is used?' and 'does (list of properties of Bayesian inference) give a good normative account of how we ought to reason?'.
    fdrake

    Well, if Bayesian probability is supposed to model our reasoning, then there is an obvious connection between Bayesian models and AI, if the idea is for AI to emulate human reasoning.

    But does Bayesian probability describe reasoning or prescribe reasoning? It seems to want to do both.
  • Interpretations of Probability
    The thrust of the comments is that contemporary statistics uses plenty of methods and mathematical objects that are not consistent with contemporary philosophy of statistics' accounts of evidential content and the methods and objects used to analyse it. One response would be 'so much the worse for statistics', but I think it's so much the worse for philosophy of statistics since these methods observably work.
    fdrake

    If philosophers are not current with their subject, I would say so much the worse for philosophers. I can only hope that things aren't quite as bad as you say.

    I think whether Bayesian models of the mind or of learning in general are accurate in principle is mostly orthogonal to interpretations of probability. Would be worth another thread though.
    fdrake

    Well, isn't the entire thrust of the Bayesian (aka epistemic) interpretation to psychologize probability?
  • Interpretations of Probability
    I read a few things on likelihoodism and other ideas of what is the 'right way' to show that data favours a hypothesis against a (set of) competing hypothesis.
    fdrake

    I am sorry, my statistics and hypothesis testing background is too basic and rusty to fully appreciate your comments. I didn't mean to advocate likelihoodism though - I only mentioned it as an example of Bayesians not being satisfied with prior probabilities and seeking ways to avoid them while still preserving what they think are Bayesianism's advantages.

    In my view, if there is a conflict of the intuition with something that is already unambiguously formalised, go with the formalisation.
    fdrake

    While Bayesianism may be an inadequate model of human cognition in every respect, or even in most respects, it may still be a passable approximation on the whole, and a good local approximation, in an asymptotic sense. AFAIK Bayesian models have shown some promise in cognitive science and neuroscience, and of course they have been widely used in machine learning - although the latter cannot be considered as strong evidence in its favor, since there's still a lot of debate as to whether neural network AI approaches are on the right track.
  • Interpretations of Probability
    I don't have anything on hand, and I cited Sober and Fitelson (his onetime student I think) from memory. But if you google likelihoodism you'll readily find some texts.
  • Interpretations of Probability
    Yes, priors, their choice and justification are a vexed issue for Bayesianism, so much so that some would rather not deal with them at all (e.g. "likelihoodism" of Fitelson and Sober), or at least eschew ignorance priors (e.g. Norton). But to Bayesianism's credit, it at least makes the issue explicit, whereas frequentism kind of sweeps it under the rug.

    Good point about psychology as well. Orthodox Bayesianism is usually justified by Dutch book arguments or similar, which presuppose some highly idealized rationally calculating agent. It is often said that people's intuition is crap at dealing with probabilities. This sentiment, no doubt, sets that kind of rational probability as the standard for comparison. But wasn't the very idea of "subjective" probability to take our psychological intuitions as the primary source of probability valuations? There seem to be conflicting agendas here. But on the other hand, if we give up the simplistic rationalism of Bayes, won't we then diverge from scientific (not to mention mathematical) probability, carving out a special theory that's only relevant to psychology?
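
    To make the role of priors concrete, here is a toy sketch (my own illustration, nothing from the thread; the function name and numbers are made up) of a single Bayesian update performed under two different priors:

```python
def posterior(prior, likelihood_h, likelihood_not_h):
    """P(H | E) by Bayes' rule, for a binary hypothesis H and evidence E."""
    evidence = prior * likelihood_h + (1 - prior) * likelihood_not_h
    return prior * likelihood_h / evidence

# Same evidence (P(E|H) = 0.9, P(E|~H) = 0.1), two different priors:
skeptic = posterior(prior=0.01, likelihood_h=0.9, likelihood_not_h=0.1)  # ~0.083
agnostic = posterior(prior=0.5, likelihood_h=0.9, likelihood_not_h=0.1)  # 0.9
```

    The same data leaves the skeptic at a posterior of about 0.08 and the agnostic at 0.9 - which is why the choice and justification of priors is such a live issue.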
  • Interpretations of Probability
    There's also a very good introduction to interpretations of probability in the SEP article Interpretations of Probability. (Though probably biased towards Bayesianism, the author's preference.)

    I think that practical differences between frequentism and Bayesianism are overstated. With careful analysis one method can usually be translated into the other.
  • Irreducible Complexity
    It would be helpful to start with what is probably the minimal commitment of reductionism, which is supervenience. Supervenience can be summed up with the slogan: "No A differences without B differences." Suppose A and B are alternative accounts of the same phenomena (e.g. cognition). Let P' and P" be distinct phenomenal states, A' and A" - their accounts in theory A (e.g. psychological states), and B' and B" - their accounts in theory B (e.g. neural states). We say that theory A supervenes on theory B if for A' and A" to be distinct, B' and B" must also be distinct, but the reverse is not necessarily true. Thus, the relationship between A and B could also be described as coarse-graining.
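
    Using the notation above, the slogan can be written out explicitly (my formalization):

```latex
% A supervenes on B: no A-differences without B-differences
\forall P', P'':\quad \big(A' \neq A''\big) \;\Rightarrow\; \big(B' \neq B''\big)
% equivalently, by contraposition: B-sameness entails A-sameness
\forall P', P'':\quad \big(B' = B''\big) \;\Rightarrow\; \big(A' = A''\big)
```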

    The connection with reductionism is that for theory A to reduce to theory B it must, at a minimum, supervene on B. And some would stop at that. But others attempt to go further in elucidating dependent relationships between different levels of explanation. The most ambitious view would probably be one that claims that high-level accounts - "special" sciences and psychology - as well as their specific theoretical entities can be analytically deduced from more fundamental accounts and entities (type-type reduction). A more modest claim is that any particular finding described by a higher-level theory could be traced to some configuration described by a lower-level theory, if only we knew all the relevant facts and possessed the necessary computational resources - but not necessarily according to some fixed bridge law (token-token reduction).
  • Irreducible Complexity
    Can I get an example of something that is unitary?
    Pneumenon

    That would be something like the problem as you framed it in the OP: "A bunch of billiard balls are bouncing around on a table in a classically cliched example of Newtonian whatever."

    This formulation explicitly admits of only one account, and so there isn't anything to be reductionist or irreductionist about. As others have noted (and I think @StreetlightX has been spot-on), the situation that you outlined does not really set up a reductionist/irreductionist conflict. Granted, "reductionist" is a nebulous and loaded term, more often used as a derogatory adjective than actually explained.

    So what is reductionism? Reductionism implies different accounts, different explanatory schemes, different theories. Often people talk about levels of explanation. So, not just alternative accounts, but accounts organized in a kind of hierarchy, with the one at the bottom being - on the reductionist view - the most fundamental and the most veridical, the others being merely convenient approximations. This is a view that is common among scientists and some philosophers of science*.

    Reductionism is sometimes described as taking a thing apart in order to explain the working of the whole in terms of its parts. This is probably what you were trying to get at with your example, except that from the start both your Mr. Reductionist and Mr. Irreductionist are already looking at an atomized picture, with no intimation of there being a whole (irreducible?) thing that these atoms constitute. (And by the way, the expression "irreducible complexity" was coined by an "intelligent design" (creationism) proponent, and denotes a different idea.)

    There is some truth to the part/whole account of reductionism - the truth being that the hierarchy of explanations that I mentioned earlier roughly corresponds to a hierarchy of spatial scales. When, as scientists, we attempt to provide a better, more accurate account of something, oftentimes we get out our microscope and examine it at a finer scale. And since matter tends to clump into more-or-less sharp-edged objects at many different scales (particles, atoms, molecules, cells, chairs, planets, etc.), this is where we get the idea of breaking a thing into parts to understand it better. But I think this part/whole view does not entirely capture the idea of reductionism. Quantum mechanics, with its universal wavefunction and entangled particles, is as "wholistic" as anything, and yet it comfortably fits into many a reductionist worldview as perhaps the most fundamental level of reality.

    * Philosopher of physics David Wallace opens this lecture about quantum mechanics by confidently proclaiming the sort of view that I outlined above as the consensus view of physics! I think he is rather overstating the case, even if we only ask physicists.
  • Explaining probabilities in quantum mechanics
    Yeah, the weirdness objection is the worst of the lot, and does not deserve any respect. Quantum mechanics is weird. The world is weird. Get used to it.
  • Explaining probabilities in quantum mechanics
    All interpretations of quantum mechanics explain exactly the same observations, so in that sense they are explanatory to exactly the same degree. One could make a case that some of them are more parsimonious than others, but that is never an easy case to make. That said, superficially at least, the Everett interpretation does seem to be more parsimonious than its main rivals. But I realize that things aren't so simple, and not having sufficient expertise, withhold further judgment.
  • Explaining probabilities in quantum mechanics
    It would be great if everyone wanted to preserve causality in their theories but that is what the Copenhagen interpretation explicitly rejects. The idea that the universe is inherently probabilistic implies that the probabilities are a brute fact and inexplicable.
    Andrew M

    Well, again, you are just equivocating between "causal" and "deterministic." What you are really saying is that our theories ought to be deterministic. I disagree. There is no a priori reason why we should prefer determinism. Or indeterminism, for that matter. I don't consider either to be a theoretical virtue, in and of itself. Of course, if one also offered better or more specific predictions, or a more economical description than the other, then it ought to be preferred - but those other advantages obtain independently of the determinism/indeterminism split. Indeed, in the case of the interpretations of quantum mechanics, none of the empirical advantages can be credited to one interpretation and not another, since they all make the same empirical predictions*.

    * That may not actually be true - some interpretations seem to make distinct predictions, but they are presently out of reach for empirical investigation.
  • Existence is not a predicate
    So I still don't understand, what was the point of the exercise? To use the existence quantifier in a WFF?
  • Existence is not a predicate
    If your point is that this is not an explanation of existence -- the ineffable there-ness of stuff -- I don't think it was intended to be.
    Srap Tasmaner

    No, you are right, I think I got carried away.

    Existence can be modelled syntactically:
    P(x) iff ∃x(x=x)
    P(x) is true if and only x exists. (x exists iff it equals with itself)
    Meta

    So x exists iff there exists x (∃x) such that it is equal to itself? The last bit seems unnecessary, but otherwise you have a perfect tautology.
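
    For reference, the usual way to express existence via quantification without the variable clash in the quoted formula (the x on the left-hand side is free, but gets re-bound by ∃x on the right) would be something like:

```latex
E(y) \;\leftrightarrow\; \exists x\,(x = y)
```

    In classical logic this is true of every y in the domain - which is just the tautology point again.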
  • Existence is not a predicate
    I think Sam was saying that talk of existence is really talk of whether a concept is instantiated.
    Srap Tasmaner

    The statement is saying that the concept of hobbits has no instances or individuals of which it is true.
    Sam26

    And what do you mean when you say that the concept *is* not instantiated, *has* no instances? (I deliberately emphasized the verb "to be" in these phrases.) Well, it means that there are no such things, that they do not exist. Oh, wait...
  • Explaining probabilities in quantum mechanics
    There have been a number of attempts to derive/justify the Born rule, including the self-locating uncertainty approach that Carroll and Sebens develop (I haven't looked at their paper, but they probably cite earlier works in the same vein). Not everyone is convinced that such justifications are (a) not circular, and (b) do not smuggle in assumptions that are not present in the starting interpretation. But adjudicating this debate is way beyond my pay grade.

    I just want to take issue with your characterization of probabilistic theories as "acausal." What you are talking about is causal determinism, and the keyword here is determinism. You can, of course, put your foot down and insist that causality necessarily implies determinism, but, as far as your arguments here are concerned, causality may as well equal determinism, because you are not actually talking about any aspect of causality other than it being deterministic. So for your purposes, causality is a redundant concept, since all that you are talking about is determinism. And I suspect that you only bring it up for rhetorical purposes (everyone wants to preserve causality in our theories, right?)
  • 'Quantum free will' vs determinism
    I agree with all you say above but would add that the probabilities themselves also have no causal explanation under the Copenhagen interpretation (i.e., the Born rule is postulated).
    Andrew M

    That's true. But having postulates is no sin in itself: any theory relies on some postulates. The important thing is that the Born rule postulate in the Copenhagen interpretation does not clash with its narrative.

    This is when considering a single beam splitter in isolation. When one photon is sent into a beam splitter, there are two position eigenstates - one for the reflection path and one for the transmission path with 0.5 probability for each.

    The MZI experiment shows that this cannot be the scenario at the second beam splitter. If only one photon were entering the second beam splitter, then a photon should be found at the second detector half the time. But it's not. This is what I was trying to convey with the "Alice rolling sixes" analogy. It is highly improbable that on multiple runs a single photon entering the second beam splitter would always be found at the first detector purely by chance.

    But this is what the Copenhagen interpretation is committed to by denying causality. The results that it predicts are inherently inexplicable on its own premise.
    Andrew M

    No, this is what we would be committed to if we interpreted light as a flow of classical particles. But the Copenhagen interpretation does not do that. It is committed to the same thing that the fully-quantum theory is committed to, plus a little extra - but that extra does not show up until the measurement occurs at the detectors, at which point the "extra" makes no observable difference.
  • 'Quantum free will' vs determinism
    The Copenhagen interpretation makes the same prediction but it denies that there is a causal explanation for the probabilities. But, if causality is assumed, then the MZI experiment shows that a beam splitter cannot be sending a photon exclusively one way or the other with 0.5 probability (or else a photon would arrive at either detector with 0.5 probability, not 0 and 1).
    Andrew M

    You do not need to assume causality, or anything else besides the operation of standard quantum mechanics, in order to obtain that result. You said so yourself: the Copenhagen interpretation makes the same prediction. It follows the standard solution all the way up to the moment of detection, at which point it says that the superposition state collapses into one of the eigenstates - acausally, as you say, but following the Born rule for probabilities. And since in this case the superposition is degenerate, the result is perfectly predictable, even assuming the Copenhagen interpretation: the wavefunction has to collapse into one particular position eigenstate with probability 1, simply because there is only one non-zero eigenvalue. So where do you get probability 0.5? And what does this have to do with causality? I don't understand.
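
    To put numbers on this, here is a minimal sketch (mine, not from the thread) of the textbook MZI calculation, assuming balanced 50/50 beam splitters, zero relative phase between the arms, and the common convention that a reflection picks up a factor of i:

```python
import cmath

def beam_splitter(amps):
    """Balanced 50/50 beam splitter acting on the two path amplitudes.
    Convention: the reflected amplitude picks up a factor of i."""
    a, b = amps
    s = 1 / cmath.sqrt(2)
    return (s * (a + 1j * b), s * (1j * a + b))

state = (1 + 0j, 0j)           # photon enters one input port
state = beam_splitter(state)   # first beam splitter: equal superposition of the arms
state = beam_splitter(state)   # second beam splitter, no phase difference in between

probs = [round(abs(a) ** 2, 12) for a in state]  # Born rule
print(probs)  # [0.0, 1.0] - all the probability ends up at one output port
```

    The amplitudes at one output port interfere destructively, so the 0-and-1 result drops out of the standard calculation; which physical detector gets the 1 depends on the phase convention.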
  • 'Quantum free will' vs determinism
    No. It is in principle possible that Alice could roll a dice a million times and get a six every time. That result is no less likely than any other string of results for a million rolls. But her non-random-looking result begs for an explanation in a way that random-looking results don't.

    So the Copenhagen interpretation correctly predicts that a photon in the standard MZI experiment will always end up at the first detector despite passing through beam splitters. But that raises the question as to why. What is the causal explanation for that non-random-looking result?

    For the Copenhagen interpretation, the Schrodinger equation is equivalent to asserting that Alice just always rolls sixes. Each formalism gives the correct predictions and no causal explanation exists.

    The problem is with the plausibility of that idea.
    Andrew M

    It is true that an infinite sequence of sixes is a possible outcome of an infinite sequence of die rolls. But that doesn't change the fact that the probability of each such roll is given as 1/6 by the theory that the die is fair.
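
    For concreteness, the arithmetic behind "no less likely than any other string" (a trivial sketch of my own):

```python
n = 10
p_all_sixes = (1 / 6) ** n       # probability of ten sixes in a row
p_some_fixed_mix = (1 / 6) ** n  # probability of any other *specific* string of ten rolls

print(p_all_sixes == p_some_fixed_mix)  # True: every specific sequence is equiprobable
```

    Under the fair-die hypothesis, every specific string of n rolls has probability (1/6)^n; the all-sixes run stands out only by its pattern, not by its probability.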

    In the MZI experiment the standard quantum mechanics calculation gives the probabilities at the detectors as 0 and 1. Any interpretation of quantum mechanics had better yield the same probabilities, otherwise it doesn't even qualify as an interpretation. Are you saying that the Copenhagen interpretation predicts probabilities other than 0 and 1 in this case, or fails to predict anything specific?

    (As an aside, this very special case where probabilities neatly collapse into all or nothing is uniquely favorable to the Everett interpretation, which otherwise faces a prima facie problem with specific observed frequencies of outcomes. In contrast to the Copenhagen interpretation, which happily assumes the reality of probabilistic outcomes as a matter of principle, the Born rule is difficult to justify in the context of Many Worlds. When they are not making popular presentations, like the one by David Wallace that you linked, Everettians tie themselves into knots trying to make sense of these probabilities. And this is where, I am afraid, the prima facie appeal of the MWI as the "no-interpretation" interpretation dissipates.)
  • 'Quantum free will' vs determinism
    I don't understand. If an interpretation gives us the correct result (i.e. the result predicted by the formalism and validated by experiments), then where is the problem? Or are you under the impression that a "non-deterministic interpretation" is contractually obligated to give a non-deterministic result for every conceivable measurement?

    Perhaps you can work an example in one of these interpretations and show where exactly the problem lies.
  • 'Quantum free will' vs determinism
    There's a nice apocryphal story about Wittgenstein there in the first lecture. For those who don't want to watch the vid, it goes something like this:

    Wittgenstein once asked a colleague: "Why were people so surprised to discover that the Earth is spinning and not that the Sun goes around the Earth?" His colleague replied that, well, it kind of looks like the Sun goes around the Earth, doesn't it? To which Wittgenstein shot back: "Well, what would it look like if it looked like the Earth was spinning?"

    And the answer, of course, is that it would look exactly like it does look, exactly like it looked back when people thought that the Sun was going around the Earth.

    So I want to ask you. You keep saying that the Mach-Zehnder interferometer experiment would be inexplicable under any interpretation other than the Everett interpretation. So what do you think the result of the experiment would look like if the Bohm or the Copenhagen interpretation was true?
  • The Ontological Proof (TOP)
    Heh, that's a neat counter-argument. I don't think I've come across it before.
  • Question for non-theists: What grounds your morality?
    Answering for myself (I am not a theist, but I am not sure that I am a "moral realist," because this notion is not very clear to me), I don't seek to ground my moral convictions in anything. I don't think that, as far as moral claims go, there is anything more fundamental than moral convictions.

    Some of my moral judgments are more secure than others, and at times I seek to ground some less secure opinions in more secure, more fundamental convictions. But, as I wrote above, this kind of query cannot provide the grounds for morality as a whole.

    We might try to explain morality as a natural - or a supernatural - phenomenon, but this can only tell us what is, not what ought to be.
  • Question for non-theists: What grounds your morality?
    This is an ontological, not an epistemological question about ethics. I am aware atheists can be very moral beings.Modern Conviviality

    When you are asking for the grounds of a position, i.e. "Why do you hold to that position?" you are, by definition, asking an epistemological question. To insist that it is an ontological question is to beg the question. You are smuggling some kind of an answer into your question: the only "grounds" you will accept must be some kind of "thing" or fact in the world, right? I suppose this leads to your next stipulation:

    This is a question for non-theists who hold to objectivity in ethics (moral realists) - e.g. it is always true that murdering someone for no reason is morally wrong, etc.Modern Conviviality

    Moral realism is usually understood as the statement that (a) moral claims are statements of facts (more than just facts about our own thoughts and feelings), of the way the world is (outside our heads), and (b) at least some moral claims are true. Is this what you mean by moral realism?

    Grounding morality in: evolution (naturalistic fallacy), sentiment (subjectivity), or human reason (ultimately subjective, for whose reason are we speaking of? And human reason, limited as it is, cannot construct moral laws) - seems incoherent.Modern Conviviality

    Why do you think so? "Incoherent" means, strictly speaking, contradictory. What contradictions do you see in these positions?

    ETA: Some of the answers posted here propose to ground all or most of morality in some particular moral dictum (the Golden Rule, the primacy of personal freedom), but these are not really answers to the question posed in the OP. These are proposals for theories of morality that reduce most moral claims to some fundamental moral principle. But these proposed grounds are themselves moral principles, and so they cannot ground all of morality.
  • Here is what I do not get about determinism and free will
    "Judged from a scientific and logical perspective, the belief that we stand outside the causal web in any respect is an absurdity, the height of human egoism and exceptionalism. We should get over the idea that to be real agents we have to be self-created..."

    Do you see that?

    The "belief that we stand outside the causal web in any respect is an absurdity", and we should cause ourselves to "get over the idea that to be real agents we have to be self-created".

    Do you see that?

    It is absurd to believe that we are outside of C, but from outside of C we should...
    WISDOMfromPO-MO

    No, I don't see that, not in what you quoted.

    There is always this contradiction in determinism, but nobody--preaching determinism or criticizing determinism--ever seems to be aware of it.WISDOMfromPO-MO

    What determinism? There is nothing about determinism in that quote. But if you want to know how determinism can be thought to be compatible with free will, I suggest you do some reading about compatibilism, instead of starting a new thread every time the question knocks into your head.
  • Confidence, evidence, and heaps
    Thanks, I mostly agree with you.

    Sorites
    [...]
    I think it's noticeably less controversial if you imagine this representing a population rather than an individual.
    Srap Tasmaner

    Yes, here the interpretation of the model is clear and the model may be a good fit (to that interpretation). Or is it? If the middle section is where people are genuinely uncertain about their choice, the actual distribution of answers may break down into random noise.

    As above, we could graph her uncertainty about her answer instead, and we'd expect a normal distribution, wouldn't we?Srap Tasmaner

    Do the statistics of individual choice (if there are in fact consistent statistics) represent one's degree of confidence/uncertainty? If we define uncertainty behaviorally, as you do later, then they do so by definition. But then reporting observed behavior as the degree of uncertainty is merely tautological: despite the use of an ostensibly psychological term, it sheds no light on our inner world. But if the assertion is that the graph represents phenomenal uncertainty (which is, after all, the central thesis of the epistemic/Bayesian interpretation of probability), flattening a mess of thoughts, feelings and subconscious processes into a single number, then the claim is much less certain (as it were).

    I guess my uneasiness goes back to bridging the gap between probability and the single case. Unlike mathematical probability, real-world probability is always single-case (we don't deal with infinite ensembles!). Defining probability as a frequency is unsound for that reason, while defining it epistemically threatens to oversimplify a complex psychological phenomenon. The moral, I think, is to treat Bayesian models of behavior with caution. When you blow up a detail of a curve and ask about its physical meaning, always keep in mind the possibility that it may not have one: it may just be a modeling artefact.

    One thing this curve could represent is an individual striving for consistency under conditions of irreducible uncertainty.Srap Tasmaner

    Yes, that's the Dutch Book argument, and I do find it rather compelling. (And notice how you have switched from behavior to phenomenology, after all!) Don't get me wrong, I like Bayesianism. I like it for its mathematical elegance, its consistency, and, when used correctly, its instrumental usefulness. When it comes to modeling uncertain beliefs and decisions, it is probably the best game in town.

    I'm just interested in how partial belief works, and I keep finding reasons to expect individuals and populations to be homologous.Srap Tasmaner

    Well, one reason for that may just be that the curve, assuming it is the error function, is closely related to the normal distribution, which is ubiquitous whenever you deal with (or assume) random variables.

    There may also be an evo-psych story here: the reason the individual is homologous with the population is that cognition is an evolved feature, and evolution works on populations. The behavioral strategies that statistically increased the population's fitness were the ones that were fixed in our genes. This may also serve to explain away the problem of induction: the reason we intuitively trust induction is that our environment does have certain regularities (how could it not? we wouldn't be here if it didn't), and we have adapted to recognize and exploit those regularities.
  • Confidence, evidence, and heaps
    I didn't know this is called a "logistic function."Srap Tasmaner

    Or the error function (different function, similar shape). There are a number of such functions, collectively known as sigmoid functions.
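    To illustrate the similarity of shape, here is a minimal sketch (function names are my own, and only the standard library is assumed) comparing the logistic function with the error function rescaled onto the same (0, 1) range:

    ```python
    import math

    def logistic(x):
        # Standard logistic function: 1 / (1 + e^-x), maps R onto (0, 1)
        return 1.0 / (1.0 + math.exp(-x))

    def erf_sigmoid(x):
        # Error function rescaled from its native (-1, 1) range onto (0, 1)
        return 0.5 * (1.0 + math.erf(x))

    # Both are S-shaped: near 0 far left, exactly 0.5 at the origin, near 1 far right
    for x in (-3.0, -1.0, 0.0, 1.0, 3.0):
        print(f"x = {x:+.1f}   logistic = {logistic(x):.4f}   erf = {erf_sigmoid(x):.4f}")
    ```

    The two curves differ in how fast their tails flatten out (the error function's tails decay much faster), which is invisible at a casual glance but matters if the tails are what you are modeling.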

    As for the sorites paradox, I think the whole point of it is that it cannot be resolved like the Ravens paradox, using the subjective degree of confidence model of belief formation, where, in the straightforward Bayesian analysis, belief is represented by a single, continuously evolving real-valued variable. And the reason is simply that there is no sharp and precise fact of the matter to be located here. The "paradox" comes from the tension between the demand, urged by the framing of the problem and prompted by our familiarity with analysis, for a precise numerical solution - and the intuitive realization that no such solution will be satisfactory.

    This actually applies to the Ravens paradox and other such paradoxes of belief as well, though perhaps not as acutely as in the case of the sorites paradox. The Bayesian solution is a neat one, it seduces us with its mathematical elegance, but it is not a perfect fit to our intuitions, nor, in all honesty, should we expect it to be.

    Our problem here is the fallacy of misplaced concreteness. When given a problem, we habitually reach for familiar conceptual tools - in this case mathematical analysis - without giving sufficient critical attention to the nature of the problem. This is fine when solving practical problems, which often dictate the use of a particular toolset and the production of a particular kind of result. For example, if you were to conduct a survey, presenting different people with different quantities of sand, the distribution of answers as a heap/pile percentage vs. the amount of sand might look something like a sigmoid curve (though it will probably be a different curve in different language communities).

    But when the question is no longer about ticking off this or that checkbox on a clipboard and summing up results - when it is something as messy, ambiguous and fluid as a "belief" - why would you expect the answer to be in this definite numeric form? Why would you expect there being the answer in the first place? Do you really think the true nature of a "belief" is a simple mathematical function (preferably one already familiar to us from solving practical problems)? Do you think that "belief" even has a true nature?
  • Feature requests
    I don't like the idea of deleting ("disappearing") posts. Moreover, I think that, faced with the choice of deleting a shitty post and leaving it alone, a reasonable moderator will err on the side of leniency. Which brings us to the present situation, where the board is, frankly, drowning in idiocy. A special not-quite-up-to-standards area would be a less authoritarian solution, IMHO.

    Another solution for serial shit-posters would be to merge all their threads into one (I know one board that practices that).
  • Feature requests
    We really need a dump sub-forum, like in the old place. When I come to the forum and see 7 (seven) threads started by TheMadFool just on the front page, that really is depressing. The whole place looks like a dump. If I were a first-time visitor, I would have left immediately without giving the site much consideration.
  • Fun Programming Quizzes
    Or, you know, you could just calculate the answer :)

    The sum of multiples of 3 below 1000:

    3 + 6 + 9 + ... + 999 = 3 * (1 + 2 + 3 + ... + 333) = 3 * 333 * 334 / 2

    Likewise, the sum of multiples of 5 below 1000 is 5 * 199 * 200 / 2

    But we also need to subtract the sum of multiples of 15, since we counted them in both sums: - 15 * 66 * 67 / 2

    Total: 233,168
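    The same inclusion-exclusion calculation can be sketched in code and checked against a brute-force sum (a minimal sketch; the helper name is my own):

    ```python
    def sum_multiples(k, limit):
        # Sum of positive multiples of k strictly below limit:
        # k + 2k + ... + nk = k * n * (n + 1) / 2, where n = (limit - 1) // k
        n = (limit - 1) // k
        return k * n * (n + 1) // 2

    # Inclusion-exclusion: multiples of 3 or 5 below 1000.
    # Multiples of 15 were counted in both sums, so subtract them once.
    total = sum_multiples(3, 1000) + sum_multiples(5, 1000) - sum_multiples(15, 1000)

    # Brute-force check against direct iteration
    brute = sum(i for i in range(1000) if i % 3 == 0 or i % 5 == 0)
    assert total == brute == 233168
    print(total)
    ```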
  • Fun Programming Quizzes
    Is there really a way to know the number of primes below any integer without having to actually calculate or iterate through those primes?VagabondSpectre

    Prime-counting function
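    To elaborate: exact algorithms for the prime-counting function (e.g. Meissel-Lehmer) compute π(n) without enumerating every prime, and the Prime Number Theorem gives the simple estimate π(n) ≈ n / ln n with no primes touched at all. A minimal sketch comparing a sieve baseline (which does enumerate the primes) with that estimate, using only the standard library:

    ```python
    import math

    def prime_count_sieve(n):
        # Exact pi(n): count primes <= n with a Sieve of Eratosthenes.
        # (This baseline does iterate through the primes.)
        if n < 2:
            return 0
        is_prime = bytearray([1]) * (n + 1)
        is_prime[0] = is_prime[1] = 0
        for p in range(2, math.isqrt(n) + 1):
            if is_prime[p]:
                for m in range(p * p, n + 1, p):
                    is_prime[m] = 0
        return sum(is_prime)

    def prime_count_estimate(n):
        # Prime Number Theorem: pi(n) ~ n / ln n; no primes enumerated.
        # The relative error shrinks as n grows.
        return n / math.log(n)

    for n in (10**3, 10**4, 10**5):
        print(n, prime_count_sieve(n), round(prime_count_estimate(n)))
    ```

    The estimate undershoots noticeably at these small scales (π(1000) is 168 vs. an estimate of about 145), but the relative error keeps improving; the logarithmic integral Li(n) does considerably better still.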