In Srap's example, "C" is the set of all of the background assumptions made when first making the observation, which include that the dish is clear of debris. Upon seeing an unexpected signal, a possible revision to the beliefs to account for that is "maybe there is dirt on the dish". Because "there is no dirt on the dish" was one of the beliefs within C, positing that maybe there is dirt on the dish is a change to C, a change away from the old C to some new set of background assumptions very much like C but different in whether there is thought to be dirt on the dish. That constitutes a rejection of C.
(Of course, in the actual case of Srap's example, that replacement for C was in turn quickly falsified itself, as the observations expected from the hypothesis that there is dirt on the dish failed to materialize: they didn't see any dirt on the dish. Sure, they could still have hypothesized invisible dirt instead of abandoning that hypothesis, but supposing there's a CMB was a far smaller change to the accepted view than everything that would be required to suppose there's invisible dirt on the dish.) — Pfhorrest
This might be the right point to confront something @Isaac is always reminding us about: the stories we tell about our beliefs are post-hoc. They are rationalizations. That needn't mean they are bad or untrustworthy or invalid or indefensible, but it's worth bearing in mind.
What is the situation when our boys "switch on" the radio telescope? What "set of beliefs" do they hold? There's no reason to think they believe there are no pigeons nesting in the antenna; I believe they discovered the pigeons when they checked the antenna, and they thought this explained their results. Do they hold some more general belief that the antenna is unobstructed? I don't know, and I doubt you do either. So far as I can tell, they would have no reason to hold a belief either way about it being obstructed. They probably observed its construction or installation, and would have memories of seeing at that time that it was unobstructed; does that mean they held a continuing belief that it continued to be unobstructed? I doubt it, but we'll come back to this in a minute. (Btw, pictures show the radio telescope not to be on the roof and not exactly a dish either, both mistakes of mine.)
Similar remarks about the equipment in the lab: did they hold a belief that it was all in working order? More likely, but again there's a temporal issue: did they believe it was a-ok as they got the readings that puzzled them? Surely, else they wouldn't have been taking readings. Maybe in preparation for taking first readings, they did some tests. What if they didn't? If you grab a jug of milk out of the fridge, do you hold a belief that it won't split open? What about a belief that a hole won't spontaneously appear in the bottom?
We're accustomed sometimes when doing philosophy to talk about "belief" this way, as a sort of abstract mental correlate of the actions we take. (I have defended talking this way on this very forum.) Sitting "implies", in some sense, a belief that the chair will hold our weight, that it's real not an illusion, that it won't turn out to be made of some other material than it appears to be, that it won't spontaneously move or even disappear, and so on.
One reason this attribution of belief feels okay is our experience of finding that an assumption we've made was incorrect. But what does that mean exactly? What is an assumption like? An awful lot of assumptions, including the ones that turn out to be incorrect, are not held explicitly; does it help to describe them as being held implicitly? Some we might be inclined to attribute to people in order to make sense of their behavior: if you fish a coin out of your pocket and put it into a vending machine, you must be assuming the coin is legal tender that the machine won't reject. You're not holding such a belief explicitly, but you're assuming it's the case, and even that only implicitly.
How does that actually help us? Suppose the coin is accepted; does that justify our assumption that it was legal tender? There's no logical reason not to say that, I don't think, but it's not the first thing I'd reach for in describing the situation. What if it's rejected? We try again, and it's rejected again -- sometimes they just don't quite catch right. What would you do next? You'd have a close look at the coin. Is it damaged? No. Maybe it's fake, doesn't have the right weight.
What's going on here? Have you found out you must reject your belief that the coin was genuine? Maybe, kinda. But when did that happen? And how? You expected the coin to just work, that much is clear; when it didn't, you could shrug it off and try another coin (vending machines are a little unpredictable) and never think about it, or you could look for an explanation.
I suspect cases where the natural next step to take is the logical analysis of the set of beliefs you held right at the moment when things started going wrong are pretty rare overall. The natural step is often going to be investigating, at least a little, looking at stuff. And some theorizing, or hypothesizing. I think this is the moment where you might identify an assumption that the coin is genuine, but only because it is now suddenly in question whether that's true. In other words, it might occur to you (or not) that the coin being fake would cause the machine to reject it. "The coin is not genuine" would appear in your world not as the negation of some belief you actually held, implicitly, but as a hypothesis that could explain why it was rejected. Implicit assumptions seem generally to show up this way -- not in themselves, and not in the form we are claimed to have held them, but negated, when that negation might be the explanation we need.
So in Holmdel, New Jersey, did Penzias and Wilson assume the equipment was still working, having checked it out at some earlier time? Why not just say that it occurred to them that a malfunction might cause the readings they were getting? Did they assume nothing was obstructing the antenna? In particular, that there were no pigeon nests in it? Of course not. But it might occur to them that some kind of obstruction might cause the results they got.
You can patch these things together after the fact into a logical structure -- we're really, really good at rationalizing, but so what? I hope it's clear that I'm not trying to reform how we talk about assumptions and so on, but I do think trying to formalize this way of speaking into a logical system that allegedly explains how people come to believe what they do, or how they change what they believe, is a mistake. I think its mistakenness shows up in part in its inability even to do what it claims -- eliminate false beliefs. It also fails to account for the fact that investigating actually works; on this picture, it shouldn't, because you can always just reject the new observation, or you can always find some way to take it on-board without falsifying anything.
That's my sense of things. I think the whole approach (and it used to be mine too) is a mistake, just the wrong way to think about beliefs.