Comments

  • Critical liberal epistemology


    No one would dispute that if theory A predicts X and theory B predicts ~X, and we then observe X, we can claim progress by eliminating theory B. In practice it's messier than that, but in principle falsifying some theories is progress.

    But what about theory A? Part of Popper's program, as I understand it, was focused not on direct falsification as the way to choose between hypotheses, but on distinguishing science from non-science by requiring falsifiability of theories. Theory A has been submitted to being falsified and it wasn't. I think most scientists would claim that their confidence in A was increased by this result. For instance, when GR predicted that light would bend passing near the sun, but Newtonian mechanics did not, and we then observed the bending, you want to say we can now discard Newton, but that our confidence in Einstein should not increase.

    But there are two points about this observation that I think merit attention: (1) it was a prediction, not a retrodiction; (2) what was predicted was unlikely to us, a surprise. Philosophers have definitions of "surprise", but in this case we could just say: an observation for which no explanation is ready to hand. Einstein of course offers both prediction and explanation, a prediction no competing theory was offering, and reasons for that prediction. I think that's why everyone's confidence in GR was bolstered by the result. If we had made the observation by chance, we would have been baffled.

    Of course, our confidence increases but not to the point that we think GR is the absolute truth; we only conclude that it's closer to the truth than other theories, in the specific sense that new theories will have to retrodict what GR predicted, so we know future theories have to look a little like GR.

    My question then is this: why shouldn't we count as progress a theory submitting itself to being falsified and passing? We already have the theory, so we don't learn anything new, agreed; but we do learn that the theory isn't crazy, and that it might, like QM, like evolution, survive many, many more rounds of attempted falsification. Also: each successful non-falsification adds another requirement to future theories, another datum they must retrodict, so there is a similar pruning function, only it's for possible rather than actual theories. Surely that ought to count for something.
  • Critical liberal epistemology
    But there is no rational reason to think that the truth of the consequent of an implication gives even weak support to the antecedent. It’s not just less than certainly, it’s nothing at all.Pfhorrest

    This ought to be a clue that you've chosen the wrong way of formalizing the process, because confirmatory evidence just obviously does matter. If you've seen thousands of swans in your lifetime and they were all white, there's nothing at all irrational about believing that swans are probably all white, or believing defeasibly that they are all white.

    Given your general approach, I'm just not at all clear why you're so attached to this mid-century Quine-Popper thing instead of going in for something more like formal epistemology. Have you considered following the Quantitative Way? (LessWrong, SlateStarCodex, Overcoming Bias, et al.) I have some reservations, but it's a much more defensible model of rationality than yours, and it seems like it would be right up your alley.
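    For what it's worth, the formal-epistemology point can be put in one line. A minimal sketch, assuming only Bayes' theorem and nothing about which priors are reasonable: evidence E that a hypothesis H renders likely does raise the probability of H.

    ```latex
    P(H \mid E) \;=\; P(H) \cdot \frac{P(E \mid H)}{P(E)} \;>\; P(H)
    \quad \text{whenever} \quad P(E \mid H) > P(E)
    ```

    So each white swan observed nudges the probability of "all swans are white" upward, without it ever reaching 1 -- weak support to the antecedent, which is exactly what the purely deductive picture says cannot exist.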
  • Nothing to do with Dennett's "Quining Qualia"
    Apples aren't red. They reflect light in a wavelength range we see as red.Marchesk

    What is it to see something as red? Could I express such an experience by saying, "To me, the apple looks red" or maybe "To me, the apple looks like it is red"?
  • Critical liberal epistemology
    enclosingPfhorrest

    That depends obviously on the lines, so unless you're really working up some math here, this analogy is not so good.

    I understand the impulse to talk in terms of weeding out and pruning and increasing the odds; there's reason to think this is a sound procedure in some situations (my "Poe-Doyle rule" is an example). I remain skeptical that it can be generalized so easily, and think it more likely that pruning only works on already bounded solution spaces. If you want to generalize, you want an account of how we narrow the range of options to a manageable set we can successfully prune. But you're going to refuse to do that, because you refuse to acknowledge the conditions that make pruning an option. You want a single universal method that can get you from anything at all to probably true; I am doubtful there is any single such skeleton key.
  • Critical liberal epistemology
    I'm just pointing out that there is no bottom to stand on and hanging from the sky isn't an option either, so we've got to do something else directly involving the water we're immediately surrounded by instead.Pfhorrest

    That's actually nice, but it only works if you believe there's nothing to stand on and nothing to hold onto, and if you believe you don't have to demonstrate your faith by allowing yourself to slip into the water before His Hand reaches down to save you (think: the binding of Isaac). But as a description of philosophy "starting in the middle", I wholeheartedly approve. You just seem to think this is of some use in dealing with people who don't already agree we have to start in the middle, and I don't see how it possibly could be.

    (By the way, this is exactly how Quine defends the naturalization of epistemology against charges of circularity: if science is the source of the doubts about the results of science, then we may legitimately use science in the defense of those results.)

    Something else I've had on my mind. It is sometimes said that epistemology is a search for a method that, if followed, would produce two results: (1) believing things that are true; (2) not believing things that are false. Critical rationalism is a claim that we get (1) for free so long as we do (2), at least in the very long run. But that only makes sense for finite sets of beliefs, hence you're inclined to model a person's web of beliefs as a snapshot that is at least arguably finite, on the grounds that it's hard to see how a person could hold an infinite number of beliefs.

    But that model could itself be wrong, if you include within our beliefs not only closed propositions about particulars but also material inference rules that are open-ended. That is, we may actually have belief generators that ought to be dealt with. (Can you throw a car over your house? What about that car? What about that car? What about that car? ...)

    And, as I think I've tried to say before but maybe didn't, it's entirely retrospective and we actually live with a stream of incoming new beliefs, so even if the model is okay we never quite have the opportunity to hit the pause button and use it.
  • Counting squares


    The math is interesting, but there's just no philosophical issue here.

    If I were you, I'd look at rectangles instead. Maybe have a look at partitions too.
  • Emergence
    So there is a duality to emergence here - that which is being produced as a necessity and that which is being left to the vagaries of chance.apokrisis

    Yeah that's a really funny thing. People seem to reach for "emergence" when expecting a story about how such-and-such unlikely something-or-other (usually consciousness) -- "unlikely", of course, in the eye of the theorist -- came about, what caused it, what made it happen.

    And you might very well answer, in some cases at least, "It happened because nothing was stopping it." That's quite a serious shift in worldview.
  • Emergence
    a higher scale of interaction emerging via an ability to ignore the physics of the internals of a lower scale of organisation. The higher scale now only sees the stable, long-run, statistical view. And that stability of view is what in fact allows there to be a new higher hierarchical scale of material organisation.apokrisis

    This was my first thought -- the difference between the mereological sum of whatever bits make up a boulder and a boulder. Do we call the boulder an "emergent" object? Certainly a boulder has properties of its own that the bits don't have on their own or as an abstract set of bits. The boulder can roll down a hill and smash through a tree. Is that an "emergent" property of something, like the bits arranged in ways that "produce" or "constitute" or "give rise to" a boulder?

    I'm not sure what an observer has to do with any of this. Either that set of bits is heaped or scattered around or whatever, or it's arranged as a boulder. If we're only asking because of the metaphysics -- whether we countenance the existence of the boulder, and in what way -- that doesn't look all that interesting to me, unless it's to call attention to larger system within which boulders play a part.
  • Nothing to do with Dennett's "Quining Qualia"
    I have no control over what's dumped into my awareness and what's not.Srap Tasmaner

    Whoops. Was thinking about how I can't choose to see an un-colored world and forgot about attention, which I've also thought should be part of this discussion, since introspection is not our default activity and so much talk about consciousness among philosophers is really talk about introspection.
  • Nothing to do with Dennett's "Quining Qualia"
    So the issue arises when attempting to explain our conscious sensationsMarchesk

    To be conscious is not to be conscious of some perceptual process or 1s and 0s in the brains "registers", it's to be aware of how things seem, whether nutty or purple.Marchesk

    I'm not even clear on what you want an account of. Is it that a given cup of coffee seems a particular way to me? Or that I'm aware that the cup of coffee seems a particular way to me? Or is there an awareness of the taste of the coffee which is only by definition how it tastes to me?

    One thing about consciousness is that it seems to be related to volition, might even be why we have it at all instead of just reflexes, however complicated. Or it could be this is the cheapest way to build up a repertoire of complex reflexes. (I spend far more time talking and writing than I do trying to remember words I want to use.) At any rate, we don't have volition here: I don't choose to see the world as colored, or to smell what I smell or feel what I feel, and so on. I have no control over what's dumped into my awareness and what's not. (Similarly, it's almost impossible not to understand speech in a language you understand, so robust is the habit.) That strikes me as interesting, but I've no idea what to do with it.
  • Nothing to do with Dennett's "Quining Qualia"
    If knowing how a magic trick is done would rob you of the enjoyment, then by all means don't listen to the explanation, but on what grounds would you claim there can be no explanation?

    There may be more to the academic debate, or not, but that's the only discussion I see in this thread.
  • Nothing to do with Dennett's "Quining Qualia"
    That's just another way to say the same thing though.Olivier5

    We seem to have very different understandings of what the issues are here. Not sure there's much else to say.
  • Critical liberal epistemology


    All this stuff sounds so good in theory.

    We like falsification because we can imagine science as one Michelson-Morley experiment after another. It's not, of course, but it somehow works anyway.

    We like holism because we're familiar with finding out our assumptions and presuppositions were wrong.

    We like the Asymptote of Truth because of the succession of theories and because probability.

    In a sense all you're doing is reinventing the dual process model. System 1, that "machine for jumping to conclusions" as Kahneman calls it, can be counted on to continually produce new beliefs, and when there's trouble system 2 attempts some logical process of evaluating and revising. How that's done is apparently, in some sense, within our control. That we do it is more or less a fact.

    For all that, when I want to know if I should stop for milk on the way home, I look in the fridge, and I don't need Popper or Quine or Peirce to get a definitive answer.
  • Nothing to do with Dennett's "Quining Qualia"
    mind and body are one.Pop

    Personally, I find the dual process account pretty convincing, so I think there's lots of stuff going on with us we aren't aware of. If you want to include all of that under "mind", and I would, then I agree wholeheartedly. The disembodied mind is an abstraction.

    To deny one's subjectivity is by definition to deny one's own life.Olivier5

    Geez Louise. Of course I'm not denying my own subjectivity. On the other hand, sense experience is not a subject-object affair; it's an interaction of organism and environment.
  • Nothing to do with Dennett's "Quining Qualia"


    I would try to take this seriously, but you seem to have settled into thinking of yourself as the spokesman for life and flavor and joy and everyone on the other side is some dreary life-denying ivory-tower dweller.

    That's all horseshit, of course. If anything, you're the one neglecting the body and thinking exclusively in terms of the mind, consciousness sovereign of all, center of the universe. If that strikes you as a mischaracterization of your position, maybe you'd be willing to reconsider all this "denying the senses" crap you've posted.
  • Nothing to do with Dennett's "Quining Qualia"
    Behaviors are secondary to subjective experience, which must take center stage.Olivier5

    I expect you know more about the field than I do, but I would be surprised to learn that biology backs you up on this.
  • Critical liberal epistemology
    the important and difficult matters that are raised by your posts. You seem to just want to drag them back to the trivial ground on which you are right, but uninterestingly and uncontroversially so.Isaac

    I'm with @Isaac here, @Pfhorrest, for the most part. This is what I was trying to get at: how you sort of oscillate between "hard" and "soft". There's a methodology for belief revision that looks to be rule-governed or algorithmic. How do we form beliefs in the first place? "Do something reasonable." How do we apply the rules of the method? "Do something reasonable." How do we decide what belief to drop? "Do something reasonable." How do we gather and weigh new evidence? "Do something reasonable."

    You have described this as a feature rather than a bug, but it repeatedly appears that your theory has no theory in it.

    There is of course a fundamental problem to face up to: is reason computational? On the one hand, modeling reason in the obvious way with primitive formalizations of reasoning like classical logic leaves out about as much as your account; on the other hand, we need whatever model we come up with to be instantiated in a human being, and it's no good just retreating to some vague, pre-Darwin, gentleman's club sense of "reasonableness", a characteristic that cannot be described operationally. We know that it must be describable in operational terms *and* classical logic is not that description.

    So there's real work to do. Your approach seems to want to give both of the failed approaches a seat at the table and hope that works, when we really need to try new things.
  • Nothing to do with Dennett's "Quining Qualia"


    Here's another way to put the issue with your bacon and walnuts example: what you're trying to model, or should be, is learning, and I'm deeply skeptical that learning is just recording earlier instances and referring back to them.
  • Sex, drugs, rock'n'roll as part of the philosophers' quest
    Jim MorrisonJack Cummins

    I've been listening to Morrison Hotel and L.A. Woman a whole lot lately, so I'll say this: I don't take Jim all that seriously as a thinker, but now and then he gets onto something interesting. In particular I keep thinking about a simple line in "Roadhouse Blues": "Let it roll, baby, roll". What is he talking about? I think there's a sexual meaning but there's something else too. "The future's uncertain and the end is always near" (same song) and we are "riders on the storm", so... let it roll, baby, roll.

    I think some of what's going on here comes from reading about existentialism ("into this world we're thrown"), but "let it roll, baby, roll" doesn't sound like existentialism to me.

    What I want to hear is an embrace of the unpredictability of life, the excitement of that unpredictability, and that means thinking of life as a great adventure. That's what I've tried to teach my kids. Life's not a job. There's not a right way to do it. The only attitude worth having is that it's an adventure.

    For Jim, some of that adventure was indeed sex, drugs, and rock and roll, but also his books and his writing and god knows what else. I don't think the particulars matter much. The important thing is the attitude, the spirit of adventure, and letting it roll.
  • Nothing to do with Dennett's "Quining Qualia"
    Unless you're playing devil's advocate stronglyfdrake

    (And I haven't even got to size qualia yet. Alice says that Bob is fat, which Bob denies. Explain that, science.)Andrew M

    Humorously.
  • Nothing to do with Dennett's "Quining Qualia"
    Such a system cannot logically work without some ID system for tastes, some qualitative perceptual signal, recognisable somehow from the perceptual signals of other chemicals.Olivier5

    I have no expertise in biochemistry, but I would assume what we're talking about is a chemical we ingest or smell binding to a chemical within our bodies, and that binding triggering some other effects that eventuate in various bodily responses. Some of the biochemical interactions could naturally enough be characterized in terms of "information", and if the pathways of response are developed through experience, you could think of that as "encoding" that information. I don't see any reason to be suspicious of such a story, but I have no idea really. What I would find unlikely is that there is anything like a copy of the "input" filed away somewhere. There's our previous response, which is probably strengthened by repetition, but again I don't know. And I'm still not sure how you expect to point at something somewhere in a human interacting with their environment and say, "Right there! That's the quale."

    I guess part of my resistance is that I assume the whole point of any encyclopedia compiling we do is to develop a repertoire of responses and options to consider as a response. That's pretty crudely put, but the point is I'm not sure you need the encyclopedia as a separate thing at all, when you could just have the responses.
  • Nothing to do with Dennett's "Quining Qualia"
    The important functional point is to be able to recognise a smell, to be able to connect it to another perceptive event(s) that happened in the past, because the whole point is to learn from past experience when interpreting new ones.Olivier5

    Of course.

    I'm just not sure the original perceptual event should be characterized as a thing we could file away and then check new things against. Maybe the research shows it's exactly like that, but I don't see grounds for assuming it must be like that. The argument given, that there has to be a smell that we smell, is not convincing.
  • Nothing to do with Dennett's "Quining Qualia"
    Darwinian advantageOlivier5

    The important thing would be the habitual response of consuming or not consuming. But to get there, you claim we store particular qualia as memories. Really? Like, the whole thing, the exact smell of the rancid bacon? I'm skeptical. You picked up on a characteristic, an aspect of the smell you were experiencing, one that you were already familiar with and had a prepared response to, but it was hard to pick out at first because of all the other aspects and factors in play.

    It's still just not clear to me what this proves. You assume that for the response to kick in, at some point the current "input" must get compared to an exemplar you've stored in memory. That might be how you'd program a robot (although I guess connectionists would say no) but I don't see any reason to believe that our bodies work that way.
  • Nothing to do with Dennett's "Quining Qualia"


    But the argument is precisely over how properly to theorize our experience, isn't it? Unfortunately that means there are facts we all agree on, in some vague sense -- we can and do taste coffee -- and there are some we don't -- there being the taste of this coffee to me right now, primarily as a theoretical entity, but related to our common-sense understanding.
  • Nothing to do with Dennett's "Quining Qualia"


    In everyday contexts, sure. The question is whether you should build a philosophical position on how the languages you and I know happen to handle the grammatical transformation here. Obviously I think that's not a formula for success.
  • Nothing to do with Dennett's "Quining Qualia"


    Grammar, in the schoolbook sense, is not a sure guide to ontology. Think of Quine's puzzle about "seeking" and friends: if I'm looking for a spy, that doesn't mean there's a spy I'm looking for.

    How much do you need to complexify it?Olivier5

    I'll leave figuring out how memory works to neuroscientists.
  • Nothing to do with Dennett's "Quining Qualia"


    Like the way I memorize a phone number?

    Memory is a whole 'nother complicated mess. How much do you need to oversimplify it?
  • Nothing to do with Dennett's "Quining Qualia"
    How would you have a preference if the coffee didn’t taste like something to you?Marchesk

    This has nothing to do with coffee or with coffee tastes or with mental events, nothing like that; this is all about how to deal with words like "something".

    You believe that if I experience something, there's something I'm experiencing, and the only alternative is that I'm experiencing nothing. But that's wrong. I've stipulated that I'm experiencing something; I'm denying the platonist inference that there's something I'm experiencing, period.

    Would you be able to recognise the coffee you like in a blind test?Olivier5

    I dunno. What would it prove either way? What if there were a large trial and people couldn't; would that empirical result disprove the existence of qualia?
  • Nothing to do with Dennett's "Quining Qualia"
    You can’t have movie preferences without movies.Marchesk

    But nobody is denying that I can't have coffee preferences without there being coffees. We're just denying that preferring how one coffee tastes to how another coffee tastes necessitates there being such an entity as how each coffee tastes to me.

    This is just the same old fight against Platonism.

    It is a logical claim. As such, it could be disproved by using propositional logic.Olivier5

    How would I go about doing that? How would you go about proving that if I like how this coffee tastes, there is an entity, how this coffee tastes to me, that I like?
  • Nothing to do with Dennett's "Quining Qualia"


    ?

    But is it empirical, logical, grammatical, theoretical, what?
  • Nothing to do with Dennett's "Quining Qualia"
    you can't have flavour preferences if flavours don't exist.Olivier5

    What kind of claim is that? Is there anything that could convince you that it's false?
  • Critical liberal epistemology
    they're assumptions it didn't even occur to us that we had taken any position about until something we didn't expect happens and we're pressed to explain what exactly about our expectations was in error.Pfhorrest

    But is this what we do? Is it even what we should do?

    Penzias and Wilson switch on the machine, expecting not to be receiving a signal. But they are. That expectation has clearly not been met.

    For you, what's been "falsified" is a two- or three-layer cake: background assumptions, working theory, specific prediction. You think the next step taken is logical analysis, even if only implicitly: some member of the conjunction of the members of the set of beliefs held at the moment is false, making the conjunction false, preserving the truth of the conditional with a false consequent. Any falsehood will do for this to work, and in a sense this saves you from having to, per impossibile, enumerate the background assumptions, because you can just examine them as they come up: if this one's still true, fish the next one out of the bag and check it.
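    Spelled out, the skeleton here is just modus tollens applied to one big conjunction -- a sketch, writing A1, ..., An for the background assumptions and T for the working theory:

    ```latex
    (A_1 \land A_2 \land \cdots \land A_n \land T) \rightarrow P, \qquad
    \lnot P \;\;\therefore\;\; \lnot (A_1 \land A_2 \land \cdots \land A_n \land T)
    ```

    All the logic delivers is that at least one conjunct is false; it is silent about which one, which is why fishing beliefs out of the bag one at a time is the best procedure it can offer.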

    While this is more or less fine from a logical point of view, it leaves out a lot of what we know about how people actually go about this, and how they can do so successfully, in a way that is worth the rest of us considering as a model of rationality. You'll tend to shrug off some of this as if it's okay to have a general theory and a practical way of applying it -- but that's not okay in this particular domain, as ought to be obvious.

    For instance, how are the background assumptions and theoretical commitments in your big conjunction ordered? Order doesn't matter for conjunctions. In what order are they examined? Is there a method, or is it more like the random 'fishing a belief out of a bag' I had above?

    And what does it mean to examine a background assumption and see if it holds? Is that a logical process, or is it investigative, gathering more information? For instance, you could take an assumption, once somehow identified, and ask, could this be true and my original expectation fail? Swell, but the list of assumptions that will pass that test is uncomfortably large, and most of them aren't helpful for what it sounds like they're trying to help with: not explaining the failure of the big conjunction, but the fact of the new observation, which happens to differ from what we expected but is a positive fact in its own right.

    In real life, we don't churn through the big conjunction; instead we hypothesize explanations for the phenomenon it turns out we are observing, though we didn't expect to be. Penzias and Wilson look at the readout and are surprised. The question they will now try to answer is obviously, what caused this? Candidates include a fault or even a design flaw in the equipment, or maybe something obstructing the antenna. They're looking for a particular sort of thing that would cause a constant signal to be reported.

    As you would have it, they consider statements like this: "If assumption A is false, that's consistent with prediction P failing." But in real life, people consider candidates like "If A2 is the case, that would cause P2" -- where A2 is one of the ways A could be false, and P2 is the observed way that P is false. There's an asymmetry here that cripples the formalist approach: "2 + 2 = ___" has one way to be true but a literally infinite number of ways of being false. That applies both to the prediction and to the so-called assumptions. We don't need a way of corralling those infinities because they're not real for people dealing with real problems.

    You can try to layer on more formalism to bring your theorized process of belief revision closer to what we know people do and to what we know works -- rather in the style of talking about measuring the distance between possible worlds -- or you can just accept that the model you started with is actually getting in the way of understanding what really goes on and what is known to work.
  • Critical liberal epistemology
    In Srap's example, "C" is the set of all of the background assumptions made when first making the observation, which include that the dish is clear of debris. Upon seeing an unexpected signal, a possible revision to the beliefs to account for that is "maybe there is dirt on the dish". Because "there is no dirt on the dish" was one of the beliefs within C, positing that maybe there is dirt on the dish is a change to C, a change away from the old C to some new set of background assumptions very much like C but different in whether there is thought to be dirt on the dish. That constitutes a rejection of C.

    (Of course, in the actual case of Srap's example, that replacement for C in turn was quickly falsified itself, as the observations expected from the hypothesis that there is not dirt on the dish soon failed to materialize, when they didn't see any dirt on the dish. Sure, they could have still hypothesized invisible dirt instead of abandoning that hypothesis, but supposing there's a CMB was less of a huge change to the accepted view than everything that would be required to suppose there's invisible dirt on the dish).
    Pfhorrest

    This might be the right point to confront something @Isaac is always reminding us about: the stories we tell about our beliefs are post-hoc. They are rationalizations. That needn't mean they are bad or untrustworthy or invalid or indefensible, but it's worth bearing in mind.

    What is the situation when our boys "switch on" the radio telescope? What "set of beliefs" do they hold? There's no reason to think they believe there are no pigeons nesting in the antenna; I believe they discovered them when they checked the antenna, and they thought this explained their results. Do they hold some more general belief that the antenna is unobstructed? I don't know, and I doubt you do either. So far as I can tell, they would have no reason to hold a belief either way about it being obstructed. They probably observed its construction or installation, and would have memories of seeing at that time that it was unobstructed; does that mean they held a continuing belief that it continued to be unobstructed? I doubt it, but we'll come back to this in a minute. (Btw, pictures show the radio telescope not to be on the roof and not exactly a dish either, both mistakes of mine.)

    Similar remarks about the equipment in the lab: did they hold a belief that it was all in working order? More likely, but again there's a temporal issue: did they believe it was a-ok as they got the readings that puzzled them? Surely, else they wouldn't have been taking readings. Maybe in preparation for taking first readings, they did some tests. What if they didn't? If you grab a jug of milk out of the fridge, do you hold a belief that it won't split open? What about a belief that a hole won't spontaneously appear in the bottom?

    We're accustomed sometimes when doing philosophy to talk about "belief" this way, as a sort of abstract mental correlate of the actions we take. (I have defended talking this way on this very forum.) Sitting "implies", in some sense, a belief that the chair will hold our weight, that it's real, not an illusion, that it won't turn out to be made of some other material than it appears to be, that it won't spontaneously move or even disappear, and so on.

    One reason this attribution of belief feels okay is our experience of finding that an assumption we've made was incorrect. But what does that mean exactly? What is an assumption like? An awful lot of assumptions, including the ones that turn out to be incorrect, are not held explicitly; does it help to describe them as being held implicitly? Some we might be inclined to attribute to people in order to make sense of their behavior; if you fish a coin out of your pocket and put it into a vending machine, you must be assuming the coin is legal tender that the machine won't reject. You're not holding such a belief explicitly, but you're assuming it's the case, and even that only implicitly.

    How does that actually help us? Suppose the coin is accepted; does that justify our assumption that it was legal tender? There's no logical reason not to say that, I don't think, but it's not the first thing I'd reach for in describing the situation. What if it's rejected? We try again, and it's rejected again -- sometimes they just don't quite catch right. What would you do next? You'd have a close look at the coin. Is it damaged? No. Maybe it's fake, doesn't have the right weight.

    What's going on here? Have you found out you must reject your belief that the coin was genuine? Maybe, kinda. But when did that happen? And how? You expected the coin to just work, that much is clear; when it didn't, you could shrug it off and try another coin (vending machines are a little unpredictable) and never think about it, or you could look for an explanation.

    I suspect cases where the natural next step to take is the logical analysis of the set of beliefs you held right at the moment when things started going wrong are pretty rare overall. The natural step is often going to be investigating, at least a little, looking at stuff. And some theorizing, or hypothesizing. I think this is the moment where you might identify an assumption that the coin is genuine, but only because it is now suddenly in question whether that's true. In other words, it might occur to you (or not) that the coin being fake would cause the machine to reject it. "The coin is not genuine" would appear in your world not as the negation of some belief you actually held, implicitly, but as an hypothesis that could explain why it was rejected. Implicit assumptions seem generally to show up this way -- not in themselves, and not in the form we are claimed to have held them, but negated, when the converse might be the explanation we need.

    So in Holmdel, New Jersey, did Penzias and Wilson assume the equipment was still working, having checked it out at some earlier time? Why not just say that it occurred to them that a malfunction might cause the readings they were getting? Did they assume nothing was obstructing the antenna? In particular, that there were no pigeon nests in it? Of course not. But it might occur to them that some kind of obstruction might cause the results they got.

    You can patch these things together after the fact into a logical structure -- we're really, really good at rationalizing -- but so what? I hope it's clear that I'm not trying to reform how we talk about assumptions and so on, but I do think trying to formalize this way of speaking into a logical system that allegedly explains how people come to believe what they do, or how they change what they believe, is a mistake. I think its mistakenness shows up in part in its inability even to do what it claims -- eliminate false beliefs. It also fails to account for the fact that investigating actually works -- it shouldn't, because you can always just reject the new observation, or find some way to take it on board without falsifying anything, always.

    That's my sense of things. I think the whole approach (and it used to be mine too) is a mistake, just the wrong way to think about beliefs.
  • Critical liberal epistemology


    I think the idea of falsification as a way of narrowing the range of what could be true is really appealing, it's just not the whole story. One of the things that's wrong here, I think, is that the set of beliefs under consideration is treated as if it's frozen; it's entirely retrospective. There's some sense to this for the start of diagnosis -- I'm in an epistemic pickle, how did I get here? -- but there doesn't seem to be a way out if all you do is re-evaluate and re-arrange and re-classify that frozen set of beliefs. There are always ways to do that.

    As Dewey would insist, we live constantly projecting into the future. The resolution to this kind of problem is going out and getting more data, which is what we naturally do anyway. We are never properly represented as having a fixed set of beliefs to play with; the contents of that set -- insofar as there even is such a thing -- are constantly shifting, in large part because we make it so. But that means the freedom you think you have to re-arrange your web of beliefs however you like is probably illusory, because taking action and gathering more and new types of data actually works, and there's no reason it should, if it's just a matter of choosing, even arbitrarily, what to keep and what to jettison.

    I'll give an example, one that I always thought kind of illustrates the implicit existence of the web of beliefs, but will make Dewey's point as well: the discovery of cosmic microwave background radiation. I always loved this story. Penzias and Wilson weren't even looking for it, but they had a nice radio telescope on the roof and had done a good job isolating what it should pick up. But there's a hum. At the very beginning, you count this as completely unexplained. What's helpful about the web of beliefs thing is that you can take a step back -- what other assumptions are we making? Top of the list is that the equipment is working properly. In the movies, at this point someone (or the machine itself) will "run a full diagnostic". So they did that. Maybe there's a problem on the roof -- in other words, something we can't even see standing here in the lab. They climb up on the roof and find their beautiful dish full of pigeon nests and pigeon poop. Chase 'em off and scrub the thing clean, then check again. It's still there.

    Sussing out your assumptions is helpful, because one of those assumptions (that the machines are working, that there is no obstruction in the dish) could be wrong. But then you take action. You check. And once you've "run a full diagnostic" or maybe two, you count the equipment as working. You're done there. It doesn't stay forever in epistemic no-man's land as maybe still not working. Same for the roof. Once you've climbed up there and taken the action to nail down this assumption -- that the dish is in working order -- you're done. Now you have actually ruled out the hum as being an artifact of your equipment in some way, and you conclude that it is real and worth thinking about.

    But in none of this are we just playing a formal game with a frozen set of beliefs and making choices about which to keep and which to discard.
  • Nothing to do with Dennett's "Quining Qualia"
    Somehow that updating of the model has to lead to colors, pains — Marchesk

    What I want to say is, I hope not!

    If objects are discriminated in my modeling of the world based on visual information, that aspect of object formation or object identification just is what I think of as "color". @Isaac probably has a way better handle on this than I do, but I would guess there's awareness as we understand it just in case something goes wrong and system 2 needs to get involved. Awareness is just kept in the loop, so to speak. Stuff system 2 can't do anything about anyway is never included in that briefing (autonomic functions), and on occasion things might be going so predictably that system 1 skips some updates -- like when I'm driving home from work and sometimes find that I have successfully driven a few miles with my thoughts elsewhere.

    Okay, so back to "I hope not". Since color is at most a side effect of my unconscious model-building, there's no reason to think I have any way of processing colors themselves, if I somehow received them as input. Do you see what I mean? Color, as I know it, is part of the object-discriminating process I'm constantly engaged in while awake, completely bound up with the functioning of my visual systems. There's nothing there that deals with colors, just stuff that, throughout my interaction with my environment, leaves behind what I think of as colors.

    This is my argument, but even I'm having trouble imagining it! Maybe I can describe it this way: you know how people like to color-code things? Or how a kid's toy piano does each of the keys in an octave in a different color, to make it easier to spot them and remember them. Color as we think of it should turn out to be itself a sort of color-coding. We combine the "testimony" of our various senses in our object models, but since the sources combined are orthogonal and can give rise to different sorts of possible actions, we tag the contribution each source is making, and the part of the object sourced visually is tagged by what we think of as color. But what matters for color-coding is just the system of differences, and that it is orthogonal to the tagging used for the other senses, so we can match appropriate possible actions to our environment. (Something's color doesn't tell us how much it weighs, for instance; we'll use our proprioception for that, gauging the resistance we experience when we try to lift or move it, that sort of thing.)

    computer models don't have sensations. They don't see a colored in world, feel the coldness of the wind, smell the fragrance of flowers, feel the heaviness of a long workout in their joints. — Marchesk

    I think maybe we don't really either, not in the way typically imagined. I want to say what has to be avoided to start with is an image of experience that is at all static. Empiricists have this model of experience as chopped into a long string of instants -- your visual field is like this, then this, then this, and you have to make these inductive leaps to tie it all together into any kind of coherence. But there's nothing like this really going on, is there? We are, while awake, in constant multifaceted contact with our environment and processing an unending stream of data which we constantly project into the future and take action on. All of these point-like experiences we seem to construct retrospectively, I'm not at all sure anything quite like that is ever actually happening. Feeling the sun and the wind is bound up with all the rest of the process of living, testing, responding, projecting. We never stand still long enough for any sort of input to become present to us in this revealed sort of way; we're already involved with whatever it is, expecting it, seeking it, avoiding it, using it, regretting it, whatever. We're really nothing at all like cameras, you know?

    Bleh. I'm way out of my depth but felt like rambling. Maybe you'll see something in here somewhere worth responding to.
  • Critical liberal epistemology


    Adding them doesn't help because they can't be properly specified. Roughly, I took @Banno's point to be that, since you are putting an essentially undefined set of beliefs on the table, you have far too many options for disconfirmation. It's the same as what goes wrong with ceteris paribus clauses.
  • Nothing to do with Dennett's "Quining Qualia"


    I think he's saying the taste of tea isn't like a coin you drop in the slot on a machine and then it does something. The way we interface with and interact with our environment is way more complicated than that.
  • Critical liberal epistemology
    In which case the simple belief that dancing makes it rain is false, and needs to be modified with something else that takes into account the violinist's performance or people's beliefs too. — Pfhorrest

    But now you're talking about ceteris paribus clauses and that's a whole 'nother minefield, as Nelson Goodman showed.
  • Nothing to do with Dennett's "Quining Qualia"
    Somehow you have to show how the act of discriminating becomes a conscious sensation. — Marchesk

    But maybe that just is consciousness, not immediately, not straight from the senses, but the continual updating of your model of a world of objects. That sounds pretty close to what we'd expect a conscious organism to be doing, responding to change in a way that enables planning. Is there an alternative that doesn't require a Cartesian theater?
  • Nothing to do with Dennett's "Quining Qualia"


    Do you think the taste of the tea is an experience that stands on its own? By "on its own", I mean not in distinction from all the other taste experiences you've had or expect.

    I'm just thinking that taste, like color, is this little system that allows distinguishing one facet of your experience from another, one sort of experience from another. We label different ways of making these distinctions, but it could be we're not so much noting "red" as a sort of plenum, complete unto itself, but just tracking differences, changes in what's going on with us.

    I know I'm not expressing this well, but I think you've spent more time on this stuff than I have, so maybe you can see what I'm getting at.