• My Kind Of Atheism
    not sure your point here, but I wasn't making any argument, just giving you the correct Catholic teaching.
    Rank Amateur

    I'm an Atheist who grew up and still lives amidst Catholics, and what you say is certainly what they preach. But it's also, generally, what they do. Nobody's ever told me I'll go to hell for not believing in God. I generally come away with the impression that all "good" people go to heaven, and when I ask what "good" might mean, I'm a lot more likely to get counter questions than a sermon (which is consistent with the idea that your relationship with God is personal). There are probably regional differences when it comes to practice, though. I live in Austria. How about Ireland? Brazil? Indonesia?

    I'm a relativist, so I'm fairly sure what milieu you start out in is rather important. Nobody's an atheist because the cornerstone of their worldview is the non-existence of God. Generally, an atheist has a world view of his own, like any other person, and that world view does fine without God, but they only ever notice that when they consider theists, and so your atheism will likely have a certain focus, depending on what theist intrusion in your life looks like.

    I grew out of God (the Roman Catholic variety) together with the Easter Bunny, so the image of God I have inside is rather childish. Other people grew up and their concept of God grew up with them, but they have problems making themselves understood by me, because it all looks equally childish to me. But, see, the childishness is mine. I'm aware of that. None of the people around me believe in that childish God who is the only one I can imagine. Curiosity? At that point, people very rarely tell me things I haven't heard before. The likelihood that I spend a lot of time listening to things I've heard multiple times before is high, and the likelihood that I finally get it now is low. That puts a dampener on my curiosity, to be honest.

    I'm lucky in that people who talk to me generally don't try to convert me, so I don't have much in the way of an aversion to God talk. My childhood God memories are full of boredom and repetitiveness and unenlightening religious education, so the concept of God is vaguely associated with boredom. I'm not really curious about God at all, but I do want to understand theists, and that's a minor interior conflict that can at times escalate.

    Generally, there are points of friction in daily life. My mum, for example, thinks I'd be happier if I could talk to God. Well, that may be true, but I can't, since I don't believe He's there. Now, when I'm visibly depressed is when she most wants to bring it up, and when I least want to hear about it.

    It's also a little grating, when you're trying to figure out the details of what you believe in (emotional reactions and self-observations are the cue), and all the theists present have to contribute is "What about God?" Being an atheist is, as strange as it may sound, already a concession to theists. On my own, I'm fine just being primarily a relativist, a not-quite naturalist, a hardly-at-all-but-maybe-a-little humanist, and so on. It's not easy to figure out what I believe, so, dear theists, please don't distract me with God. We'll talk later, yes?

    Those aren't grave problems, but they do provide little hiccups in the daily praxis of theist/atheist interaction. Now imagine, if an atheist were to face grave problems (say, legal persecution), wouldn't they have more baggage with the concept of God than I have?

    Atheism isn't a philosophical position; it's a way to classify various philosophical positions (from naturalism through secular humanism to nihilism). And often for an atheist to talk about God at all is to abandon their home territory: God just isn't a very important concept in their native belief structure. (Ex-theists may have it easier, at least, if their memory is good. And I imagine for some I-don't-believe-in-God-but-I-used-to is a rather important mental gestalt.)

    Would I believe in God, if I had evidence? To me that line is a red herring. No theist I know personally is waiting for evidence for God. One once told me that everything is evidence for God. Nobody's trying to set up God experiments with the hope of creating a miracle machine. Prayers suffice (and are more respectful to God, too). If there were such a thing as "scientific evidence for God", I'd expect theists to tell me what it is. "God" is their concept, not mine. Nobody I know personally has ever put forward such a thing. Nobody's ever seemed interested in such a thing. I can't accept evidence for a blank concept in my mind, and even theists would laugh at me if I were to look for evidence for that childish God-concept I have inside. If I ever have a change of heart, it's going to have to come from some personal experience, rather than empiricist reasoning.

    I've typed up and deleted replies to some of the other threads that float around. It's fiendishly hard for me to come up with posts that I wouldn't immediately regret after clicking "post comment". It's probably easier here, since this thread is more about what being an atheist is like than it is about proving or disproving things that aren't very relevant to my day-to-day business.
  • Gender-Neutral Language
    Maybe "Jo ate the cake, but that one didn't like it" would be better.

    When I took Latin, we routinely said "that one" in translations. I just like it because it doesn't contradict other grammar.
    Michael Ossipoff

    Well, there's a reason I said "I much prefer..." rather than a more convinced "...is better." But "this" vs. "that" doesn't make much difference for the problem at hand. The only difference is perceived distance. I'm not sure I'd use "this one" for Latin translations either, unless you have something in the original that merits it (like some instances of, say, "ipse/ipsa/ipsum"; my Latin's a little rusty).

    The problem I have is that "this" is demonstrative. It's the language equivalent of pointing. It feels, to me, like an act of singling-out, if you use it in cases where you'd usually just use personal pronouns.

    It's true that it doesn't contradict grammar, but if non-binary people felt called out by the usage (which would undermine the intended courtesy), I wouldn't be surprised. The only way to really know is to try and see where it goes. It's certainly not up to me to make that judgment.

    It's interesting, really. Many languages still have inflectional endings to nouns and adjectives that indicate gender, so it's a lot more difficult to dodge gender than just replacing a pronoun. (And with the neuter endings usually being reserved for things, that's usually not something a person wants to claim...)
  • Gender-Neutral Language
    By the way, would you use a singular verb with "They"? That doesn't have the long-established usage we spoke of, and it's a further direct contradiction in a sentence.
    Michael Ossipoff

    No. As far as I can tell, you treat singular they grammatically as plural, with one exception: "themself" instead of "themselves". That's essentially what happened around the 18th century, when "thou" was replaced more and more by "you" (which was a plural form). Grammatically, "you" is actually a plural form for a singular referent, except nobody sees it like that, because there's no alternative and people have just internalised it as a singular form.

    Using singular "they" for a specific referent is going to be a little odd at first, but I'm seeing evidence from people who use it a lot that there's actually a good chance to internalise it, if you give it a chance. Reading will at first also be a little slower since you're not yet automatically connecting "they" + plural verb with a singular referent, but I can tell from experience that it's a transitional problem.

    Forms like "this one" are "stylistically raised" and not equivalent to pronouns. "Jo ate the cake, but this one didn't like it." vs. "Jo ate the cake, but they didn't like it." I much prefer "they".
  • Mathematical Conundrum or Not? Number Six
    I pointed out a few times that the sample space of event R and the sample space of event S are equal subsets of each other, which means mathematically we can treat them the same. I also pointed out that as soon as you introduce Y they are no longer equal subsets and therefore mathematically cannot be treated the same.

    Here is an example. If Z=M and N=M then Z=N.
    Jeremiah

    I'm hopelessly confused.

    I read your [[10,20],[5,10]] as: "Given that one envelope has the value 10, either [X = 10 and 2X=20] or [X=5 and 2X=10]". And that describes the sample space of both envelopes A and B.

    A sample space of A[X, 2X], B[X,2X] gives you the following possibilities:

    A=X, B=X
    A=X, B=2X
    A=2X, B=X
    A=2X, B=2X

    A=X, B=X (never selected, due to setup)

    A=10, B=10
    A=5, B=5

    A=X, B=2X

    A=10, B=20
    A=5, B=10

    A=2X, B=X

    A=10, B=5
    A=20, B=10

    A=2X, B=2X (never selected due to setup)

    A=10, B=10
    A=20, B=20

    So if I look into A and discover 10 inside, all I have to do is look through all possible constellations that are both in the sample space you defined and selected by the set-up:

    A=10, B=20
    A=10, B=5

    This is the result of [[10,20],[5,10]] under the stipulation that A=/=B.

    If this is not the case, I have read you wrong, and I can't for the life of me figure out where or how. Have I missed possible constellations? Have I misread your sample space? What?
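
    To make sure I'm reading you right, here's a quick sketch in Python (with the concrete values above, 10 discovered in A) that enumerates the constellations and applies the restriction:

    from itertools import product

    # 10 is either X (with 2X = 20) or 2X (with X = 5), so try both pairs.
    constellations = []
    for X in (5, 10):
        for A, B in product((X, 2 * X), repeat=2):
            if A != B:  # the set-up rules out A=X,B=X and A=2X,B=2X
                constellations.append((A, B))

    print([(A, B) for A, B in constellations if A == 10])
    # -> [(10, 5), (10, 20)]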
  • Mathematical Conundrum or Not? Number Six
    If you came at the problem from here, you'd realize at some point that the clever thing to do is introduce a single variable X that is orthogonal to your choice and orthogonal to which envelope has which value. |X - 2X| = X, no matter the rest. It gives you an invariant description of the sample space so that you can properly measure the consequences of your decisions.
    Srap Tasmaner

    But you have to remember that if you say one envelope has X and the other 2X, then you're defining the envelope that contains X as the one with the smaller value. So if you look into an envelope, you can't know which envelope you've opened, and the other must contain either twice that of the one you've opened, or half that of the one you've opened: it's the neutral value, split over an either/or situation.

    X and Y are commensurable. It's the same thing. I don't see a difference.
  • Mathematical Conundrum or Not? Number Six
    If you can show me how to respect this difference within a subjective framework, I'd be all for it.
    Srap Tasmaner

    This is how I see the problem:

    Objectively, you're in one game, where one envelope contains X and the other contains 2X.

    As soon as you pick an envelope, though, you have a potential value Y, which is, again, either X or 2X, but that's a bifurcation point: you have now two games. That should be obvious, because saying that Y could either be X or it could be 2X would mean X=2X, and that would be nonsense in an objective framework. What this means is that you now have two subjectively possible games, only one of which you're actually in. This holds for both values, so you have three possible games in the meta-system, one of which - the objective one you're in - is selected twice: once for each envelope.

    So, if you were to dimensionalise your variable for the three games, you get: X(1), X(2), and X(3) - where X(2) is the game you're actually in, and it's the only one of the three games that's selected no matter what envelope you pick.

    So when you're saying that Y could be either X or 2X, you're not talking about the same X. You're either talking about [X(2), 2X(1)] if you pick X(2), or [X(3), 2X(2)] if you pick 2X(2).

    You know that if you pick X(2) you win by switching, and if you pick 2X(2), you lose by switching. Objectively, the amount you're losing or winning can only be X(2). But all you know is the proportion: if you picked the lower amount you win Y, and if you picked the higher amount you lose Y/2. Even subjectively, the amount you win or lose is always X(2). But your frame of reference differs: if you picked the lower of the two values, the amount you could have lost appears to be X(1) [=X(2)/2], and if you picked the higher, the amount you could have won appears to be X(3) [=2X(2)].

    Neither of those values exists in the objective game, but your problem is that - while playing - you don't know whether X(2) is Y or Y/2. You could be in either of two games: one of them is game 2, the real one, and the other is either game 1 (the smaller-sum game) or game 3 (the bigger-sum game), but from value Y alone you can't tell.

    Because you can't tell, you have two options: take Y into account anyway, or ignore it. These are two perspectives on decision making, and neither really causes unpleasant surprises, because all that changes is the reference system. You either work with an indefinite certain value (in which case it doesn't matter whether you look into an envelope or not), or with two definite but uncertain values (if you look into one envelope and make that the basis of your decision), [or, for completeness' sake, with three indefinite and uncertain values (if you don't look into any envelope and treat the value of the envelope you currently have as Y - this is the switch-back-and-forth constellation)]. In all three cases, the only value you can win is X(2), but depending on your reference system the value may look proportionally smaller or bigger.
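
    A rough sketch of that, with assumed concrete numbers (X(2) = 10, so the real envelopes hold 10 and 20): the absolute swing from switching is always X(2), but relative to the value Y you saw, it shows up as either +Y or -Y/2.

    X = 10                  # X(2), the objective smaller amount
    envelopes = (X, 2 * X)  # the one real game: (10, 20)

    for picked in envelopes:
        other = envelopes[1] if picked == envelopes[0] else envelopes[0]
        swing = other - picked  # what switching actually does to your total
        relative = "+Y" if swing > 0 else "-Y/2"
        print(f"saw Y={picked}: switching changes the total by {swing:+d} ({relative})")

    # saw Y=10: switching changes the total by +10 (+Y)
    # saw Y=20: switching changes the total by -10 (-Y/2)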

    I think the core difference between people here lies in the different ideas of what we should do with context, or maybe even what should count as context, when it comes to real-life decisions. And that's something buried fairly deeply in our worldviews, so it's not that easy to untangle.
  • Mathematical Conundrum or Not? Number Six
    So how's this:

    A, B = two envelopes; X = the smaller of two values, 2X = the greater of two values; Y = the known value of one envelope

    P (A=X and B=2X) + P (A=2X and B=X) = 1

    Corollary: P (A=X and B=X) + P (A=2X and B=2X) = 0

    This merely describes the set-up.

    P (Y=X) + P (Y=2X) = 1

    This describes the fact that if we know one value, we cannot know whether it's the smaller or the bigger value (but it has to be one).

    From this we get:

    P (A=Y and B=2Y) + P (A=2Y and B=Y) + P (A=Y/2 and B=Y) + P (A=Y and B=Y/2) = 1

    Corollary: P (A=Y and B=Y) + P (A=2Y and B=2Y) + P (A=Y/2 and B=2Y) + P(A=2Y and B=Y/2) = 0 [At least one value is by definition Y, and because of the set-up, both can't be Y.]

    Now we look into envelope A and discover Y. This renders all the probabilities 0 where A=/=Y, so we get:

    P (A=Y and B=2Y) + P (A=Y and B=Y/2) = 1

    Corollary: P (A=Y and B=Y) + P (A=Y/2 and B=2Y) + P(A=2Y and B=Y/2) + P (A=2Y and B=Y) + P (A=Y/2 and B=Y) = 0

    Did I make a mistake anywhere here? To me, this proves that saying that both envelopes have to contain either X or 2X, and saying that if one envelope contains Y the other has to contain either Y/2 or 2Y, are the same thing from different perspectives.
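
    In case a concrete check helps, here's a minimal sketch (assuming X = 5) that walks through the two set-ups allowed by the first equation and confirms that opening A and seeing Y leaves exactly the two outcomes of the last equation:

    X = 5
    setups = [(X, 2 * X), (2 * X, X)]  # P (A=X and B=2X) + P (A=2X and B=X) = 1

    for A, B in setups:
        Y = A  # we look into envelope A and discover Y
        relation = "2Y" if B == 2 * Y else "Y/2"
        print(f"A=Y={A}: B={B}, i.e. B = {relation}")

    # A=Y=5: B=10, i.e. B = 2Y
    # A=Y=10: B=5, i.e. B = Y/2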
  • Mathematical Conundrum or Not? Number Six
    Okay, we have envelopes that contain a certain value. This thread has used X for the values in the envelopes and [X, 2X] for the sample space of an envelope. This thread has also used Y for the value of an envelope. Here's the thing:

    We can define the values in the envelopes in relation to each other, and we get A [X, 2X], B [X, 2X], where X is the smaller of the two values. (Should we decide to make X the bigger of the two values we get A[X/2, X], B[X/2,X].)

    But we can also define the envelopes in relation to each other. We get:

    A [Y], B[Y/2, 2Y]

    Note that this defines the relationship of the envelopes, in a way the other notation doesn't:

    A[X, 2X], B[X, 2X] allows:

    A = X, B = X
    A = 2X, B = 2X.

    We need additional restrictions (such as A=/=B) to rule these out. We need no additional restrictions if we're looking at the contents of the envelopes directly, rather than looking at the values first and then wondering what is in which envelope.

    A [Y], B[Y/2, 2Y]

    is a shorter and more complete way to look at "One envelope contains twice as much money as the other" than A[X, 2X], B[X,2X].

    We're not making additional assumptions, we're just using different variables as our basis.

    A[X, 2X], B[X, 2X] -- The values in the envelope defined in relation to each other.

    A[Y], B[Y/2, 2Y] -- The envelopes defined in relation to each other, according to their relative value.

    If we know that one of the values is 10, but not in which envelope it is, we get:

    A[X[5,10],2X[10,20]], B[X[5, 10],2X[10,20]]

    or

    A[10], B[5, 20]

    It's exactly the same thing, looked at from two different perspectives. There are no new assumptions. In both cases, we don't know whether 10=X or 10=2X. In the former notation we have to enter 10 in both envelopes and wonder which one we picked. In the latter we just enter 10 in the envelope we picked (obviously, since it's the one we've seen), and wonder what's in the other. And both notations have three values: 5, 10, 20. They're just organised into different either/or structures, because the notations define the envelopes differently (interchangeable; defining one in terms of the other).
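
    A little sketch (with the observed value 10, as above) showing that the two notations organise the same three values:

    Y = 10

    # Values-first: 10 is either X or 2X, so it appears in both roles.
    values_first = {"X": {Y // 2, Y}, "2X": {Y, 2 * Y}}

    # Envelopes-first: the opened envelope holds Y, the other Y/2 or 2Y.
    envelopes_first = {"opened": {Y}, "other": {Y // 2, 2 * Y}}

    a = set().union(*values_first.values())
    b = set().union(*envelopes_first.values())
    print(sorted(a), sorted(b), a == b)
    # [5, 10, 20] [5, 10, 20] True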
  • Mathematical Conundrum or Not? Number Six
    The distribution in the other letter cannot be [Y/2, 2Y] as one of those values simply does not exist. You have still created a sample space with impossible outcomes. The truth is that Y is not usable information. The error is making new assumptions based on Y.
    Jeremiah

    No, I have created a sample space with one impossible and one necessary outcome. It's an either/or situation, and that's appropriate because expectations are based on information rather than on what's actually the case. The statement that for every Y the other envelope has to contain either Y/2 or 2Y is correct, and remains correct even after you check the other envelope and discover one or the other value inside.

    The distribution will be one of the two scenarios:

    Y/2 = 100 %, 2Y = 0 %
    Y/2 = 0 %, 2Y = 100 %

    Y is not a random variable and doesn't have a sample space. That's why I said the random variable is a binary. Y = X? Yes/No. The other envelope has to take one of those values, based on two values:

    - The value of this envelope (a fixed value)
    - Whether you picked the envelope with X, or the one with 2X (a random variable)

    You need to know the latter to calculate the value of the other envelope, but you won't have the information until you check the other envelope, at which point the calculation becomes pointless.
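
    A small simulation sketch (assuming the pair is 10/20) of what I mean: the only randomness is the binary pick; given that, the other envelope's value is fixed rather than drawn from [Y/2, 2Y].

    import random

    pair = (10, 20)  # one fixed game: X = 10, 2X = 20

    for _ in range(3):
        mine = random.choice(pair)  # the binary random variable
        other = pair[0] if mine == pair[1] else pair[1]
        print(f"Y={mine}, picked X? {mine == min(pair)}, other envelope = {other}")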
  • Mathematical Conundrum or Not? Number Six
    Sorry about the correction. My head is swimming.
  • Mathematical Conundrum or Not? Number Six
    ...which of 5 and 20 is even a possible value of X
    Srap Tasmaner

    I'm not quite done thinking yet, but 20 is definitely not a possible value of X. It's like this:

    For Y = 10:

    5 is a possible expected value for X (alternative to 10=X).
    10 is a definite value, either for X or for 2X.
    20 is a possible expected value for 2X (alternative to 10=2X).

    This symmetry is systematic:

    Y/2 = possible expected value for X
    Y = definite value, either for X or for 2X
    2Y = possible expected value for 2X

    Y is definitely in the sample space (because you're looking at it). If it's X, the other envelope is 2X (and thus 2Y), and if it's 2X, the other envelope is X (and thus Y/2).

    Or put differently: [5, 20] is [X, 2X], but not of each other - of the respective alternative to 10.
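
    Or as a quick sketch (with Y = 10): checking each candidate against "Y is either X or 2X" shows why 2Y can never be a value of X.

    Y = 10
    for candidate_X in (Y // 2, Y, 2 * Y):  # 5, 10, 20
        consistent = Y in (candidate_X, 2 * candidate_X)
        print(f"X = {candidate_X}: consistent with seeing {Y}? {consistent}")

    # X = 5: True (Y would be 2X)
    # X = 10: True (Y would be X)
    # X = 20: False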
  • Mathematical Conundrum or Not? Number Six
    Let's call the two envelopes A and B. Now envelope A could have X or A could have 2X and likewise B could have X or B could have 2X. Those are all the possible outcomes so by the definition of a sample space our sample space is [A,B] where A is the set [X,2X] and B is the set [X,2X], which means our sample space could also be written as [[X,2X],[X,2X]].
    Jeremiah

    I think I got it. We've got two variables, a numerical value X and a binary variable that tells us which envelope we picked, the one containing X (the smaller value) or the one containing 2X (the bigger value).

    Let me explain step by step:

    We have a value X. It's a numerical variable, and it describes the values of two envelopes in such a way that one envelope has X, and the other envelope has 2X.

    The second, binary variable is E, for envelope, and its values are yes/no. The question it answers is: Did we pick the envelope that contains the value X?

    The sample space for X is N (the natural numbers); the sample space for E is [yes, no].

    Now we pick an envelope and open it. We find it has the value 10. We now have new information about X. The sample space for X has shrunk from N to [5, 10], because 10 has to be either X or 2X.

    We have no information on the "yes, no" question, but we do know that the envelope we picked has the value 10. That leaves us with the following:

    X [5, 10], E [yes, no]

    We can use an if/then relation to connect the variables:

    If E = yes then X = 10 (because if E is yes, then the envelope we didn't pick is the larger one, 2X)
    If E = no then X = 5 (because if E is no, then the envelope we didn't pick is the smaller one, X)

    Just for completeness' sake:

    If E = yes then 2X = 20
    If E = no then 2X = 10

    And in words:

    If we picked the smaller envelope, X = 10; that means this envelope has 10 (X), and the other has 20 (2X).
    If we didn't pick the smaller envelope, X = 5; that means this envelope has 10 (2X), and the other envelope has 5 (X).

    That means that if this envelope is 10 (X | E = yes, or 2X | E = no), then the other envelope has [5 (X | E = no), 20 (2X | E = yes)].

    That's exactly the same situation as your [[10,20],[5,10]], viewed from a different perspective. Let me write it out: [[10 (X | E = yes), 20 (2X | E = yes)], [5 (X | E = no), 10 (2X | E = no)]].

    It's indisputable that if we pick up an envelope and look inside and find a 10, we have:

    X [5, 10], E [yes, no]

    Everything else is just different groupings:

    If we uncover an envelope and it has the value Y, then the other envelope has a value of [Y/2, 2Y].

    Y ... [X | E = yes, 2X | E = no]
    Y/2 ... [X | E = no]
    2Y ... [2X | E = yes]

    I believe this covers all our bases.
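
    As a minimal sketch of the bookkeeping (assuming the observed value 10), the numeric X and the binary E together determine everything listed above:

    for E in ("yes", "no"):  # E: did we pick the envelope containing X?
        X = 10 if E == "yes" else 5    # if E = yes then X = 10, else X = 5
        ours = 10                      # our envelope always shows 10
        other = 2 * X if E == "yes" else X
        print(f"E = {E}: X = {X}, ours = {ours}, other = {other}")

    # E = yes: X = 10, ours = 10, other = 20
    # E = no:  X = 5,  ours = 10, other = 5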

    What this means for the switchers and the conundrum is currently beyond me. There's certainly something strange going on.
  • Mathematical Conundrum or Not? Number Six
    no matter what you imagine it might be or how much you know about the envelopes.
    Jeremiah

    Well, with full knowledge of the situation there's a 100 % chance that one envelope contains $ 10,-- and the other $ 20,--, and there's no need to invoke probability. The reason we invoke probability at all is simply because we have incomplete knowledge of the situation. And that's why a bet makes sense at all. I don't understand how you can say that knowledge doesn't matter. It's the entire point of it.
  • Mathematical Conundrum or Not? Number Six
    Sorry no, that was not my intent. In the event of R you have A=10 and B=20. In the event of S you have A=10 and B=5. These are mutually exclusive events, which means in the case of R the amount 5 does not exist at all, and in the event of S the amount 20 does not exist at all. So one of those sample spaces is feeding you false information. The only way to avoid this is to treat X as the unknown variable it is.
    Jeremiah

    Hm, thinking about it a bit more, I think we're making a basic mistake here. X/2X is the relationship of the variables, not the sample space. I'll go at it step by step so we can see if I've made a mistake somewhere:

    1. We have two envelopes with two different amounts of money:

    Envelope1 = X $
    Envelope2 = Y $

    At that point E1 and E2 do not refer to specific envelopes, nor do X or Y refer to specific amounts. It's simply two variables with two values, and we have no more information. If we have 10 $ in one envelope and 20 $ in another, it doesn't matter whether we set X = 10 and Y = 20, or the other way round. It's completely arbitrary. Both constellations describe the same event.

    Both X and Y have the same sample space: any number that makes sense as an amount of $. Both sample spaces might be, for example, the natural numbers (weight, space, etc. are complications we don't need).

    2. If we learn that one envelope contains exactly double the amount of the other, that tells us more. We now can get rid of one variable. But the sample space isn't X and 2X. It's the natural numbers for one (let's call it X), and depending on one of two assumptions we make about X, the sample space of the second is a transformation of X.

    Two assumptions?

    2. a) We assume X is the greater of the two numbers.

    Envelope1 = X $ (natural numbers)
    Envelope2 = Y $ (X/2)

    2. b) We assume X is the lesser of the two numbers

    Envelope1 = X $ (natural numbers)
    Envelope2 = Y $ (2*X)

    These two assumptions both validly describe the situation. We still don't define a real envelope; we merely define whether X is the greater number and Y is the lesser number, or the other way round.

    In both these assumptions we don't know the actual value of X. So if someone tells us that one of the envelopes contains 10 $, then we don't know whether X is the greater or lesser number. With regard to the above, we don't know where to put it. But we do know it has to be one of the two: 10 $ is either the greater or the lesser number. This gives us two possibilities:

    2. a)

    Envelope1 = 10 $
    Envelope2 = (10/2) = 5 $

    or 2. b)

    Envelope1 = 10 $
    Envelope2 = (10*2) = 20 $

    But what we've done here is twofold: we've set X = 10, and we've set envelope1 as the envelope that contains X. We do not know whether X is the greater or the lesser number. The question we care about is what's in envelope2, and the answer to that is:

    If X is the greater number, envelope2 contains 5 $.
    If X is the lesser number, envelope2 contains 20 $.

    The sample space for envelope1 was all the natural numbers, and the event is now 10. Since the sample space of envelope 2 is dependent on the sample space of envelope 1, there are only two possibilities: X/2 or 2X. We simply don't know whether X is the greater number or the lesser number.

    It doesn't matter which envelope we open first, we never know which is the greater or the lesser. Because of this, we can set any of the envelopes as 1 or 2, and we always have the same situation:

    E1 = 10 and E2 = 5 (X > Y, Y = X/2)
    E1 = 10 and E2 = 20 (X < Y, Y = 2 X)

    If we only open one envelope, we might open the other envelope first. We wouldn't know about 10, in that case, but either about 5 or 20, depending on which is true.

    For 20 we'd get:

    E1 = 20 and E2 = 10 (X > Y, Y = X/2)
    E1 = 20 and E2 = 40 (X < Y, Y = 2X)

    The ratio remains constant, no matter which number you draw, and that's why you always stand to win twice as much as you would lose. This is a function of what you know about the ratio. However, the natural numbers that make up the individual sample spaces differ.

    If the envelopes contain 10 and 20 dollars, and you set E1 as the envelope you pick first you get:

    For E1 = 10:

    the sample space for E2 is [5, 20].

    For E1 = 20

    the sample space for E2 is [10, 40]

    That's because the sample space for E1 is not X. The sample space for E1 is the natural numbers. X is the event. However, once we know the event for E1, we know that the sample space for E2 is [X/2, 2X], and that's because of the ratio. We can't reduce the sample space to one item because we cannot know whether X is the greater or the lesser number until we look at E2. But E1 is chosen at random.

    So we get the following:

    Envelope1 = [X | X ∈ N]
    Envelope2 = [Y | Y ∈ [X/2, 2X]]

    I'm not an experienced mathematician, so I might have gotten the notation wrong. But does the reasoning make sense?
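
    To check the reasoning concretely, here's a simulation sketch with assumed amounts of 10 $ and 20 $: whichever envelope we label E1, the candidates for E2 are [E1/2, 2*E1], even though only one of them is actually in play.

    import random

    amounts = (10, 20)  # the actual pair in this particular game

    for _ in range(5):
        e1 = random.choice(amounts)     # E1 is chosen at random
        candidates = (e1 // 2, 2 * e1)  # the sample space for E2, given E1
        e2 = amounts[0] if e1 == amounts[1] else amounts[1]
        print(f"E1={e1}: candidates for E2 = {candidates}, actual E2 = {e2}")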
  • Mathematical Conundrum or Not? Number Six
    The sample space is [[10,20],[5,10]]. Notice how there are two 10s. Now show me the math which allows you to eliminate both of them.
    Jeremiah

    I haven't read past this page and only skimmed the next two, so if I'm repeating what someone else said, or if that's irrelevant by now, please ignore. But this is complicated and I don't have much time, and I'm sure I'll forget if I don't reply now.

    The square brackets represent envelopes - I'm sure of that. In a sample space of [[10,20],[5,10]] you're not defining the envelope we picked; you're defining the envelopes according to "contains 2X [10,20]", "contains X [5,10]". If you defined the envelopes as "the one we picked" you'd get "the envelope we picked [10 (=2X), 10 (=X)]", and "the envelope we didn't pick [5 (=X), 20 (=2X)]". That's because the two envelopes are interdependent, and that's why the events are order-sensitive, i.e. "[10 (=2X), 10 (=X)]" paired with "[5 (=2X), 20 (=X)]" wouldn't work.

    The sample space in your post is: [[10 (=2X), 20 (=2X)], [5 (=X), 10 (=X)]]. That is, the sample space is only correct if we're not picking an envelope at all, but defining the envelopes according to whether or not they contain X or 2X. But then the values are arbitrary.

    In any case, there is only one "10", and that's the one in the envelope we opened, whether that's X or 2X. The interdependence between the two envelopes determines that the other envelope has either a 5 or a 20, and which of those is in there in turn determines whether 10 = X or 2X (such is the nature of interdependence).

    ***

    Also, the bet is inherently not averageable, since repetition either immediately makes each subsequent repetition a win (by revealing X, if we check the result), or (if we stack the results without checking the wins) reveals X as soon as we get a dollar amount other than the one we got first (50 % at the start of the first repetition, 75 % at the start of the second repetition, ... = the likelihood of figuring out X).
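
    A sketch of that last point (assuming the pair is 10/20): as soon as two repetitions show different values, X is revealed, and every later round is decided.

    import random

    amounts = (10, 20)
    seen = set()

    for i in range(1, 6):
        seen.add(random.choice(amounts))
        print(f"round {i}: seen {sorted(seen)}, X revealed: {len(seen) == 2}")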
  • Many People Hate IQ and Intelligence Research
    I'm not sure if, or how much, I disagree with you here. A simplification: if we have (taking my rough definitions as a basis) antonym pairs of:

    simple-minded -- intelligent

    and

    foolish -- wise

    We get four combinations:

    A simple-minded fool
    A simple-minded wise man
    An intelligent fool
    An intelligent wise man

    Since I don't have problems coming up with stereotypical fictional characters for any of those types, the distinction is meaningful for me. How? That's a difficult question.

    In your anecdote, I see the homesteader as a simple-minded wise man who sees Einstein as an intelligent fool (and who doesn't make the distinction I make).

    None of that says what intelligence or wisdom actually is, much less that it is a single trait, or a latent ability. If I take the anecdote at face value, though, and I only have "intelligence" to work with, I find the anecdote much harder to read. "Intelligence" turns into a measure of success, and success is abstract enough that it encompasses both coming up with the theories of relativity and finding contentment in life. I'm not sure what to do with that reading.

    Part of my motivation to reply in the first place is a problem I had with many posts in this thread: a focus on success as a measure of intelligence. I'd like a definition of intelligence that allows me to ask questions like "Under what circumstances does higher intelligence make you more successful? When does higher intelligence become an obstacle?"

    For example, if intelligence does have something to do with complexity, then an intelligent person is more likely to mistake a simple problem for a complex one than a simple-minded person, which makes the simple-minded person more likely to successfully solve a simple problem, or be more efficient at solving it (because no unnecessary thoughts get in the way).

    Now, if we view intelligence as a measure of problem-solving success, we can't meaningfully address these questions. That's my prime problem with IQ tests: they predict success, but precisely because of that they don't allow me to look at the relationship between intelligence and success.

    Of course, the problem might be that my conception of intelligence is... highly idiosyncratic to begin with. Take language: never mind being a "good" writer; even using language the way every five-year-old does is a highly complex activity. If I ever get serious about "intelligence having something to do with handling complexity" I have to address this distinction between using a complex system and holding its representation in your mind - praxis vs. analysis. It's definitely not a simple task. I'm not convinced yet it's a worthwhile task.

    To the topic at hand, I'm highly skeptical of IQ tests, but I've never got the impression that IQ was supposed to be a metric scale; it's more like an ordinal scale with huge overlapping categories. I mean, IQ tests come in modules, and everyone who's ever taken such a test has probably found some of those modules easier (I suck at the ones which require spatial perception). Two people with the same score do not have the same abilities in the same way that two people of the same height are equally tall. And I don't think anyone's ever pretended they do. So even if we're talking about IQ tests as they are, we're not talking about a single measure - at least not in the same sense as height or weight.
  • Many People Hate IQ and Intelligence Research
    A little story which I'm sure you will have heard in different guises but I think is apt here. A homesteader being told about Einstein commented that whilst he (the homesteader) had lived a long and happy life, working outdoors and enjoying whatever life handed him, having a loving wife and three happy children, Einstein had worked at often menial jobs, could not sustain a marriage, had little or no relationship with his children and died racked with guilt about his part in the atomic bomb. Who's the most intelligent?
    Pseudonym

    Isn't that the difference between intelligence (~ the ability to "work with complexity") and wisdom (~ the ability to make things "work out fine for you")? You don't need to be intelligent to be wise, and intelligence certainly doesn't guarantee wisdom.

    If people agree with the rough definition that intelligence has something to do with handling complexity, then we could also move away from testing intelligence via success at tasks. That always sort of bothers me, because there are types of mistakes you only make when you're smart enough for them ("overthinking"). Similarly, someone determined to believe a very simple thing can resist being convinced more easily, if their thought patterns can outmaneuver those of the people who are trying to convince them: Intelligence allows for successful rationalisation of appealing nonsense.

    One might also predict that the more intelligent you are, the more easily bored you get by performing simple tasks. Things like that.

    Basically, intelligence isn't always an advantage and can often work against you in terms of wisdom. I think any definition of intelligence should allow for self-defeating intelligent behaviour.

    So, basically, the Einstein of that anecdote is definitely intelligent, but maybe not that wise, while we have no information whatsoever about the homesteader's intelligence, we could learn a thing or two from his wisdom.

    I'm not sure that's entirely how I see it, but it definitely goes in that direction.
  • Frege's Puzzle solved
    Heh. I don't know much about sports and very little about baseball, so I was staring at your post and didn't really understand it, until the edit. So basically, if you're already ahead you can make a go-further-ahead run, except that nobody says that. (My confusion stemmed from an instinctive association of "go ahead" with "give the go-ahead", so I was sort of expecting some sort of rule-bound complication...)

    The thing about language is that it's complex, but easy to use. Easy to use, hard to analyse - to the extent that sometimes analysing it can make you less effective at using it because you start seeing problems that should be there but aren't. Describing language as a formal system is still useful (especially in second language learning), but, IMO, it's important to realise that language isn't actually a formal system: it's a type of social behaviour that uses cues from outside to resolve formal ambiguities (famous example: "We saw her duck.") and in this way uses these ambiguities for versatility. On writing boards, they tell you that "you must know the rules to break them." But I think that's the wrong way round: "you must know the rules to follow them" - if you're literate you can write. When it comes to language, right/wrong is often open to negotiation, and the things that are not open to negotiation are usually not talked about (if you're ever bored look up the difference between nominative-accusative languages and ergative-absolutive languages and see how often that comes up).
  • Frege's Puzzle solved
    Whether the use of a particular name (or nickname or description) is appropriate may not change the truth conditions of sentences it's used in, appropriately or not. I think if my son pointed at Venus of an evening and said, "Look, the Morning Star has risen," that would be true if a bit arch.
    Srap Tasmaner

    Well, true. But focussing on the sentence's truth condition may itself be missing the point. I don't know your son (or even if you have one), so I'm not trying to get personal here. But your son might say the sentence in full knowledge of this thread and intend it as a rib, because you care about things he does not, in which case the word choice "Morning Star" indicates irony, but you won't find the irony if you focus on the referential object and truth condition. That could cause a hiccup in the conversation, which could have been expected (or even intended) - a sort of private in-joke thematising differences in outlook.

    It's not that easy to do something like this in a formal system like maths.
  • Frege's Puzzle solved
    But something about that isn't quite right. The reason we feel there are different uses for "Hesperus is Hesperus" and "Hesperus is Phosphorus" is precisely because we feel they don't contain the same information. So it is with "4 = 4" and "2 + 2 = 4". It's that sense that these two equations carry different information -- they "say different things" -- that drives their different uses. So the semantics drives the pragmatics here.
    Srap Tasmaner

    I don't think it's quite that simple.

    Take your sentence (3): "Hesperus" is another name for Phosphorus.

    First, look at the word "another" and its reference. If there's "another" name, there has to be a default name that the speaker would expect the hearer to know (pragmatics). But given a context such as "What's Hesperus?" the hearer won't know which name this is until s/he arrives at "Phosphorus". That is, "Phosphorus" in that sentence does pragmatic double duty: it tells you which object it is, but it also raises the topic of that planet having the name "Phosphorus". That's exactly why ["Hesperus" is another name for "Hesperus".] is weird. It has everything to do with "another", and little with Hesperus/Phosphorus. The weirdness can go away if we take care of this in context:

    A: What's "Hesperus"?
    B: "Hesperus" is another name for Phosphorus.
    A: Hm? So what's another name?
    B: Huh?
    A: Other than "Hesperus".
    B: *flat tone of voice* "Phosphorus" is another name for Phosphorus.
    A: *embarrassed* Oh.

    Note that the word "another" no longer refers forward, now. The default name ("Hesperus") has been brought up in the exchange before and is the obvious referent for "another" here. The syntax is different from the earlier sentence. And the misunderstanding in this exchange derives from A not noticing the double duty "Phosphorus" is doing in the earlier sentence.

    **

    A second (but to me less interesting) complication is that the planet Venus has the respective names in specific contexts only: even if you see the same object, you can't see the Evening Star in the morning - if you activate all its denotations and connotations (which you don't have to, but which you activate is a matter of pragmatics). Even though "Hesperus" and "Phosphorus" refer to the same planet, they're not complete synonyms, though they may be functional synonyms in many contexts (in most of which we'd use "Venus" these days with overwhelming likelihood).
  • Deluded or miserable?
    I always wondered what the point of the blue pill was. Isn't it just erasing memories? What about a person who took no pill at all? Wouldn't such a person have to learn to live in the matrix with knowledge he shouldn't have?

    Personally, I'm endlessly curious and would have liked to swallow both pills just to see what happens. But I'm a bit of a coward, too, so I'd probably have liked to say no thanks to both. With a choice forced upon me, I'd probably be sitting between both until they lose patience and I make a choice at random.

    That sounds like a cop out, but this is a situation where people who care very much try to force a dichotomy on me that may not mean much to me. At that point, I certainly don't have enough information to make any sort of meaningful decision. Swallowing both pills is the wait-and-see-but-accelerate-the-process option.
  • Mathematical Conundrum or Not?
    An answer is correct if and only if its value matches the chance that an answer with that value will be selected.
    Michael

    I think that formulation is incorrect, because if this truth condition yields "true" for more than one value, the chance to be correct *overall* is greater than for any of the individual values.

    Take for example:

    a) 25 %
    b) 50 %
    c) 50 %
    d) 60 %

    A has a value with a probability of 25 % to be chosen, so it's correct. B and C both have a value that has a chance of 50 % to be chosen, so they're correct, too. But that would render the chance to be correct overall at 75 %, and according to the problem's formulation, none of them would be correct. But if none of them is correct, then the way we arrived at the correctness of the individual values isn't valid, as it doesn't address the problem.
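
    A small sketch that checks the quoted truth condition against that example:

    options = {"a": 25, "b": 50, "c": 50, "d": 60}

    for label, value in options.items():
        # chance (in %) that a randomly selected option carries this value
        share = 100 * sum(v == value for v in options.values()) / len(options)
        verdict = "correct" if share == value else "incorrect"
        print(f"{label}) {value} %: selected with {share:.0f} % chance -> {verdict}")

    # a, b and c each satisfy the condition individually, yet together the
    # "correct" options make up 75 % of the choices - a value no option states.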

    I don't even know how to formulate this problem in mathematical terms. I don't understand the truth condition.
  • Belief
    I'm still waiting for you to explain the problem mentioned in the first sentence above. It does not follow from the fact that we have all sorts of knowledge about bricks that that knowledge is problematic for treating a brick as a physical object.

    I really have difficulty with the way you're employing the notion of perception. Perception is not equivalent to understanding. We perceive a brick. We understand it as "a brick". The dog perceives the same brick. He doesn't understand it as(something called) "a brick".
    creativesoul

    Sorry for making you wait. I'm too slow a writer, reader and thinker - and this thread outpaces me. Also sorry that my answer's likely going to be unsatisfactory since simply catching up with the thread takes up most of my forum time.

    I think perception is a complex mental activity that involves understanding what we see at various steps. Every individual, whether human or dog, faces the same stimulus: a brick. But we're not perceiving something and then interpreting it; our interpretations don't come after perception; they run simultaneously, to the point that by the time the brick enters our consciousness it's already a brick - fully integrated into our full perceptual state (which includes everything we see, hear, feel...). It's not that we see a physical object that is a brick; it's that we end up seeing a brick, and sometimes it's relevant that it's a physical object. Our interpretations of what we see guide what we pay attention to and sometimes supplement what's not there (I'd need to find evidence in experiments for that and don't have the time) - by the time we "see" an "object", a lot of interwoven mental activity has taken place, so you simply can't say (other than analytically) that once you've isolated a brick as an object, what you see is merely a representation of what's there in the physical world.

    Seeing isn't just "burning the image into the retina", and if it is, what you're seeing is not yet "a brick". And perceiving isn't just "seeing" - integrating various inputs, I think, is already a meaningful activity guided by interpretation.

    I don't think that's all that different for dogs either; maybe a tad less complex (but maybe not).
  • Belief
    The object of belief can't be a physical object anyway. I believe that brick. That makes no sense. I believe that the brick is red. That makes sense.
    frank

    What's your criterion for what makes sense? Grammar? And what is it to be the object of belief? Must it be limited to what would render the sentence grammatical?
    Sapientia

    I'll start from here, because it's easiest for me.

    Whether or not a physical object can be the object of belief cannot be determined by saying that "I believe that brick," makes no sense. "I believe that brick," is ungrammatical and has no immediate meaning (though you might guess at an intended meaning depending on the context of the utterance). It's ungrammatical because the pattern is "I believe that [clause]", but here you have "I believe that [noun]". A noun doesn't always describe a physical object. ("I believe that justice," is equally meaningless.)

    The usual phrasing when you make a single entity the object of belief is with "in": "I believe in bricks," or "I believe in a/the/this/some brick." These result in grammatical sentences. What remains is a question of meaning. What would I be saying, if I said "I believe in bricks"?

    If you say something like "I believe in bricks," do you have to be able to analytically detail what it is that you believe in? If we take this thread's definition of belief as a propositional attitude, "I believe in bricks," would be a blanket formulation that references but does not spell out a bundle of propositions. But do you have to be able to provide an exhaustive list, before you can be said to believe in "bricks"?

    In my first post, I used the example "I believe that God exists," rather than "I believe in God," precisely to avoid this problem. But it's sort of important.

    If you can believe in single entities without being able to detail an exhaustive list of implied propositions, what does this mean for the act of believing? Is "believing" this way the commitmental equivalent of a blank cheque?

    Once again, the question is whether a proposition is a sentence, or a special type of meaning expressible with varying degrees of success by varying sentences. If it's the latter, you might well "believe that brick" (a non-native speaker, for example, or a very small child might not know better), but you'd be advised to actually come up with a better formulation. If a proposition *is* a sentence, rather than simply being expressed by one, "I believe that brick," is ungrammatical nonsense, and not a proposition at all.

    This is also the tie-in with belief in non-human entities (from dogs to thermostats). If a proposition is not a sentence, but expressed by one, then maybe propositions can be expressed also by actions, or maybe even by mere behaviour.

    And finally, there's a problem with treating a brick as merely a physical object. When you see a brick and recognise it as a brick, you activate knowledge about bricks you have. The knowledge about bricks that you have also prevents you from seeing the brick as it is: brick-naively, so to say. What you see is always already an object-subject relation. This is especially the case with human artefacts, like bricks, which are made for a purpose. Seeing a brick as a brick is not so different from understanding the meaning of a word, or not understanding the meaning of a word but recognising it as a word whose meaning you don't understand. So in that sense believing "that brick" could be affirming your learned world view, while centering your attention on a brick. Whether or not it's useful to stretch the term "belief" this far, again, is a question of what you're intending to do with the word. I could assign that sort of meaning to "I believe that brick," using an ungrammatical and thus unintuitive phrasing to highlight an unintuitive concept.

    If that's too long and confusing, my central point here is this: you can't just assume that a proposition is identical with its phrasing. Saying that a proposition has stable meaning no matter how you formulate it, and saying that a proposition is identical with its phrasing, have different implications.

    Physical objects are out there in the world and can be perceived by anyone (capable of perceiving physical objects), but you can only perceive them as a specific type of object (say, as a brick), if you have that type already in your mind. If you come to an object naively, you'll still have a world view, and your attempt to deal with an object will eventually create a type. As soon as we have a type, there's potential for calling that belief. I wouldn't, but it's not absurd.
  • Welcome to The Philosophy Forum - an introduction thread
    Thank you for the welcome. I'm mostly hanging back and reading: I'm a slow writer, and by the time I have something to say threads usually have moved on and someone else has said what I would have.

    It's nice to have a forum where people talk to each other rather than at each other.
  • Belief
    "Semantic field" is a term used in structural linguistics and anthropology, and it's simply the range of meaning associated with a word or a set of closely related words. It's not the most precise concept out there, and it's theoretical in the sense that you cannot meet a pure semantic field "in the wild", because it's always already organised (say into a word, a set of words, a taxonomy...). It's a useful concept, I think, when comparing things like languages. I found it personally useful when figuring out the technical terminology of linguistics and sociology, since the same "sign body" (say "adverb", or "social role") doesn't always cover the same things (i.e. it depends on who uses the term).

    You say a definition can be wrong, but before you can determine whether or not a definition is wrong, you'd need to know what it is you're talking about, and that's sort of the problem in a thread titled "What is belief?" What I also meant to say, but what I probably buried a bit too much in excess verbiage, is that I think "A belief is an attitude towards a proposition," is an operational definition - not a theoretical one. It drives at methodology rather than meaning. Normally, such a line is connected to a theory that sheds light on all the shortcuts in the operational definition. For example, the question of whether a belief needs to be linguistic or if it can be pre-linguistic would have been addressed in the theory. When I first replied to the thread, I probably took it to be a shortcut for something like "A belief is an attitude towards something that's expressible as a proposition," but I didn't properly think this through until you brought it up (even though other people have been talking about pre-linguistic beliefs and I nodded in appreciation when I read 's post, here).

    It's a bit premature to say a definition is "wrong" when we can't even be sure yet whether we're talking about the same thing. Some people might indeed only use "belief" for propositional attitudes in the most literal sense, and whether that's sound or not depends on what other words they use and when and how. It's not like we can encounter unmediated beliefs and ask what they are: we encounter things that imply belief - behaviour, linguistic and otherwise. Or artefacts that represent language (like a forum post).
  • Belief
    It only follows that there are no pre-linguistic and/or non-linguistic belief unless propositions existed prior to language. That alone is more than enough ground to warrant our dismissing the above belief statement, because there most certainly are such things.
    creativesoul

    That depends on how we organise the semantic field, though. In an experimental set-up, for example, I could see "A belief is a relation between an individual and a proposition," as an operational definition derived from a theoretical definition - you'd need a well-founded theory of how the linguistic faculties connect to the pre-linguistic faculties of the mind. That is: we believe a lot of things we never formulate, but it is possible to formulate them and test them this way.

    That there are different ways to organise the semantic field is a key problem in this thread. If we're interested in a semantic field that we might describe as "taking things for true", we may come up with different words: knowledge, belief, assumption... But even if we have the same words, they don't necessarily relate the same way in different people's usage.

    JTB for example sees "knowledge" as a subtype of "belief", but it's equally possible to see them as distinct cognitive behaviour - two flags planted on a continuum so that one either knows or believes, even if there are cases where it's hard to tell which applies.

    The more I read this thread and think about it, the more I lean towards a definition that keeps knowledge and belief separate and that has us generate "belief" to the extent that knowledge becomes problematic. What got me thinking more along these lines is ' specific example:

    When I enter the room and see the pens and papers I know there are pens and papers. Once I start thinking in terms of belief, then doubt enters.
    Janus

    I tried to think about this in terms of 's diagram of JTB, and failed, precisely because of a basic difference in the way the terms are used. If we see "belief" and "knowledge" as distinct cognitive behaviour, with belief arising out of problematised knowledge, I think we need to broaden the context.

    One of the things I think is important is the relationship between meaning, truth, reality, and motivation:

    I walk into a room, and there's an apple on the table. It's not a fruit, but a wax simulacrum. If I never found out that fact, did I "know" that there was an apple on the table? Rather, if I notice the apple in passing, but it's not in any way relevant to me in the situation, then the proposition "There's an apple on the table," might be true on the abstraction level relevant to me: that is, the differentiation between a fruit and the wax simulacrum of one is irrelevant to what the proposition means to me.

    But that means that all knowledge, belief, and truth - as it occurs in the world - is context bound. And since contexts can change, truth is not a stable thing, and it gets complicated to figure out whether "There is an apple on the table," is true or not. Complicated, but not impossible. What we have is an intricate truth system attached to a proposition.

    And this is where the linguistic nature of propositions comes in: the sign body of the proposition "There is an apple on the table," remains a constant, even if context changes. In real life, we re-contextualise all the time. Socially, we negotiate meanings, and as our own motivational structure changes, so might the elements of the truth system "There's an apple on the table," that we pick out as relevant. That is, a photographer might be fine with a wax simulacrum in a way a hungry person decidedly will not.

    Now, as soon as we topicalise the proposition "There is an apple on the table," we enter the meta realm. We might be arguing just for the sake of being right, or we might have motivations that make it important that the proposition be true (e.g. I might win a game, if it is, and the rules haven't foreseen the ambiguity). That is: "belief" can, in situations like this, rescue a proposition from being false, by ordering the semantic field in a way that makes it true. (Side note: This is only hypocritical if the semantic field was ordered in a different way, not if you differentiate from an unspecified level of abstraction.)

    So, basically, "belief" has two general meanings:

    a) Belief that facilitates action in the face of uncertainty: Belief in P to interact with the reality that P represents, or

    b) Belief that takes P as symbolic for some related goal: deciding the outcome of a game, group membership...

    I think (a) can reasonably be pre-linguistic; (b) can't.
  • Belief
    I find this thread extremely interesting, but since I'm no experienced philosopher, I also find it hard to follow, since I don't always understand the terms an expert would. I apologise if I don't always follow up on posts, but I sometimes need to take the time and read up on related concepts, and by the time I'm done there's almost always something else to read up on.

    For me, having a belief about believing is motivated directly by social interaction, where different people are comfortable with different levels of certainty, and if you can only take one course of action, some people might prefer to minimise risk while others might prefer to maximise (potential) reward - and this in turn depends on who feels which outcome the most. So "belief" might be a factor that gives people advantages through various avenues: less anxiety, less time spent thinking...

    Now, the degree to which belief needs to be justified in the first place is a matter of social negotiation, too. I'm not quick to make up my mind. The result is that not only do I not often get my way; by the time I get any way at all, I'm usually not sure what my way would have been, and in a sense this means I always have to deal with other people's decisions. This can lead to frustration and motivate a world view that suggests that "all belief is unjustified". But I'm not sure I actually believe that, see?

    But I do see a continuity here: belief about belief is not that different from the belief that the sky is blue or that sandwiches are nutritious. It's just that the more abstract terms become, the harder it is to describe and circumscribe the referential objects as well as the concepts in our minds. And this is why we have this thread to begin with. What is belief?

    So, if we talk about animals in terms of expectation and frustration of expectation, as Janus suggests, then I have to ask: why don't we do the same for humans? Do we reach limits? Is there something we can't express? And if so, is the same true for animals, but in different ways? It's very hard to imagine what human language use would look like from a different system, perhaps one we can't understand. When we hear a word, we hear a word. When we hear a language we don't understand and whose prosody we're not used to, we may not know where one word ends and another begins, but we still recognise language. The less "like us" things become, the more meaning disappears, but how do we deal with it?

    When we talk in terms of expectations for animals but belief for humans, what becomes difficult is comparison. In a sense, making that distinction is a comparison in itself - but what it means isn't clear other than that humans are different from all other animals, which is trivial (and true for all other animals as well).

    So when we move away from humans towards thermostats on a belief-similarity scale, how do we map the journey semantically? What about belief do we share with apes? With mammals? Vertebrates? Life? Inorganic matter? At what point does the comparison stop yielding results?

    One thing about the thermostat discussion that's drawn my interest is the formulation "The thermostat believes it is cold." What struck me here is the word "cold". The thermostat activates at whatever temperature we set; the distinction between warm and cold doesn't come into it. "Cold" is a judgement that abstracts away from the specific temperature, and one that has different implications. There's a hidden should-proposition in that word: the thermostat should activate because it is cold. And it's not a should-proposition we can attribute to the thermostat, because it's us who set the temperature. All the thermostat can "believe" is that it is time to activate (according to its setting).

    But if we take activity pairs (activate/don't activate, in this case) as an indicator for belief (and the thermostat has two belief settings: it's time to activate/it's not time to activate) - then what does that mean for the distinction between value judgements and facts? Theoretically, the thermostat can be wrong about the specific temperature, but to what extent can it be wrong about "when to activate"? Does the origin of the setting matter?
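
    Since we're already treating the thermostat as a toy model, here's the tiniest possible Python version of those two belief settings - my own sketch, with a made-up Thermostat class; it isn't meant to settle anything:

        # Toy sketch: a thermostat with exactly two "belief" states,
        # derived from a setpoint it did not choose itself.

        class Thermostat:
            def __init__(self, setpoint):
                # The hidden should-proposition lives here - and we set it.
                self.setpoint = setpoint

            def time_to_activate(self, measured_temp):
                # The only "belief" available: it's time to activate, or it isn't.
                # Whether that amounts to "it is cold" is our abstraction, not its.
                return measured_temp < self.setpoint

        t = Thermostat(setpoint=21.0)
        print(t.time_to_activate(18.5))  # True: activate
        print(t.time_to_activate(23.0))  # False: don't activate

    In the sketch, being "wrong about the temperature" would be a fault in measured_temp, while being "wrong about when to activate" could only ever be a fault in the setpoint - which is ours.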

    When a dog who misses its previous owner refuses to eat, have the should-settings changed? When *I* refuse to eat because I miss someone, I can directly detect whether my should-settings have changed: I should eat, but I can't bring myself to. When it's you, I can ask. With a dog? There is no shared language, but does that mean there is no dog-internal language whatsoever? How would we know?

    What I wonder is whether we need an inside-view to talk about belief, and if so, when we stop granting an inside-view. I'd say the gradient is one of similarity, with the only inside-view we know directly (our own) as the initial point of comparison.
  • Belief
    Wouldn't we distinguish instinct by the fact that it doesn't link up to any proposition? Or would you say that it actually does?frank

    No, I agree. Instinct is just an impulse to execute a specific behaviour. I think belief is more complex than that.

    It's just that when I go back to the edited original post and read:

    So, that John is hungry, and that John believes eating a sandwich will remove his hunger, we have a sufficient causal explanation for why John ate the sandwich.Banno

    I personally run into a problem, because I think both eating and believing are component actions that branch off the same development. We recognise a sandwich as edible the moment we see it; it's not an instinct, because a sandwich is an artefact we create. That is, if I follow the section about action, I end up with belief as an internal modelling of the world (a concept already brought up in this thread) rather than with belief as a propositional attitude. But at that point it's not much more complex than an instinct - sort of the flip-side of one: if the instinct is to eat food, the associated belief would simply be the ability to recognise food. That precedes any proposition, though.
  • Belief


    Clarification question: Are "Belief X causes action A," and "Instinct causes action A," two mutually exclusive propositions?

    I'm asking because different definitions of words lead to different slots in a causal explanation: under some definitions "belief" and "instinct" can occupy the same slot.

    I have this little narrative in my head:

    A: I'm hungry. There's an apple on the table. I eat the apple. I'm no longer hungry.

    B: I'm hungry. There's something on the table that looks edible, but I'm unsure. I either choose to take a risk, or I form an ad-hoc belief that surely this is edible (to avoid paralysis from anxiety).

    But that would result in its own definition, one that has something to do with the bracketing of risk. You might - under such a scenario - model belief as the deciding factor in a battle of basic emotions (e.g. fear of starving vs. fear of poison). It's not that you think A or B is true: if you're completely honest, you have no idea. You've just decided to choose A over B, because inaction is disastrous either way and you're psychologically unable to face the risk head-on. Belief mitigates the risk of inaction and drives you to act. (In a slightly different take, the ability to form beliefs might keep Buridan's ass from starving.)

    If you think that belief is something more basic, though, this won't work - for example, what decides which "belief" you form? The belief that what you see is nutritious? The belief that what you see is poisonous? Certain learned cognitive preconditions might come into it (in addition to the relative strength of the respective fears), and you might want to call those part of "belief". But in that case, they wouldn't be just "propositional attitudes".

    Am I making any sense?
  • Belief


    Well, there is a problem here.

    "X is hungry" restricts X to objects that can have the attribute hungry. This includes both humans and dogs. This isn't controversial.

    But if we then ask why being hungry leads to eating certain things and not others, we look for explanatory principles. What motivates us to turn towards "belief" when we talk about humans, but "instinct" when we talk about dogs?

    There are quite complex discussions on that with regard to learning versus coming equipped with the knowledge; it's not the details that matter here. Rather, for our purposes, what we're doing is positioning "belief" and "instinct" as rival explanations. So what is the relationship? If "hunger" is roughly the same in humans and dogs, why would the underpinnings for eating be so very different?

    That is: can we assume belief in human actions, when the behaviour is learned, automatic, uncontroversial, and usually not formulated? My default assumption is that when choosing what to eat, we're not that different from dogs, where it doesn't actually matter whether we had to learn what is "good to eat" or came equipped with it.

    I think in guessing at beliefs from behaviour, we might actually be overextending the reference for "belief". Or put differently, I'd probably reverse this: "I am hungry. I believe eating X will satiate my hunger. Therefore I eat X." to "I usually eat X to satiate my hunger. Therefore I believe X satiates my hunger."

    What makes us do things? Instinct, habit, etc. Belief is a factor, but usually only when we actually contemplate our actions. My hunch is that the belief gets activated only when someone or something casts doubt on the things "we usually do". (In quotes because I consider thought-habits a form of doing, and I'm not quite sure of the range of referential objects I'd associate with that.)

    This would also solve the question of taste here: if you set an apple and a banana before me when I'm hungry, I'll always go for the apple, because I don't like bananas. No belief comes into it, but there's no significant thought going into that decision either. If you replace the banana with a brick, my mind's not going to be busy thinking "Well, I'll have trouble digesting the brick, so I'll go for the apple." My mind's going to be busy questioning your motives for offering me a brick. Is this a Monty Python skit? If I take human agency out of the equation, I'll just ignore the brick completely and take the apple. Basically, my semantic register doesn't tag the brick as food, and doesn't tag the banana as "good", and there's a decision hierarchy in place that makes me pick the apple. Belief might come into it with "brick vs. apple", while taste might come into it with "banana vs. apple". But it's essentially the same process of elimination.

    I think beliefs are attached to actions, and may sway decisions in the presence of doubt, but they don't motivate decisions. I think it makes more sense to place "belief" into a sort of feedback-control system rather than a motivating system.

    Whether or not it's a category error to place "instinct" and "belief" as rival explanations for action depends a lot on how we define things. But my default reaction is to treat it as a category error. In simple terms: I don't think "belief" is something as basic as "instinct"; they operate on different levels.
  • Descartes: How can I prove that I am thinking?
    Maybe I should stay out of this thread, because I've never read Descartes myself, but here's a reply based on what I've read about this:

    • Thinking isn't the basis of your existence. It's the only thing you can't doubt (and that makes sense to me, since doubting is also a form of thinking: if you don't think you don't doubt, and there's no problem left to discuss - not that that's any sort of argument; it's just a good place to stop.)
    • You don't prove that you're thinking, you just intuit it. And unlike many other things you intuit, you can't doubt it away. (If you can, I'm immensely curious to learn how.)
    • It doesn't matter whether or not anyone controls your thinking. If you're not thinking there's nothing to control. I always sort of assumed this was about direct experience, and about thought in particular because radically doubting things is a thought process.

    Again, this comes from someone who's never read Descartes, so take this post accordingly.
  • Belief


    Under these definitions: do I have to understand the proposition "God exists," to be an agnostic? Or differently put, is not understanding the proposition "God exists," sufficient to make me an agnostic? Is the difference between not understanding a proposition, and understanding a proposition but believing it to be undecided (or undecidable) relevant?

    When faced with a proposition, how do I find out what it is that I believe? If I believe that two contradictory propositions are true, but I am unaware of the contradiction - do I hold at least one mistaken belief, or am I wrong about at least one of my beliefs? Is this a meaningful distinction in the first place?
  • Belief
    Banno
    • A belief is a relation between an individual and a proposition.
    • The individual must understand the meaning of the proposition in order to correctly be said to believe that proposition.
    • The individual thinks the proposition is true.


    Given this formulation, how would you distinguish a belief from a working hypothesis?

    For example, I'm an atheist. I intuitively reject the proposition "God exists," and so it's not that hard to manoeuvre me into situations where I commit myself to saying that "God does not exist," is true. Is this already a belief, or is it a clue that I hold a belief that is incompatible with the proposition "God exists," and that it is politically expedient to claim that I believe "God does not exist"?

    Am I rejecting the proposition "God exists," without committing to its negative? Is what I'm really rejecting the relevance of the proposition, rather than its truth value? That is, I don't care and don't want to spend the time to figure out what I believe?

    "God does not exist," works well enough for me as a working hypothesis: I act as if God does not exist. But acting as if God does not exist is not the same thing as believing that God does not exist. Imagine that theists don't exist. Obviously, I would not have to be an atheist. In many cases I would act the same as I am now, but in situations where the theism/atheism divide is relevan, I do act differently. A working hypothesis like "God does not exist" is only of use, because theists exist (I'm not motivated to invent theism just to deny its existance).

    If we define "belief" as a propositional attitude, I have a problem here: I wouldn't be able to hold an intuitive belief that I find hard to express in words, but that's pretty much what I experience. I'm uncertain about a lot of things, and that I react more vehemently against theism than, say, materialism is at least partly down to a defence mechanism against perceived social control. If it's possible to figure out intuitively held beliefs by making propositions and observing your reactions towards them, then beliefs must precede propositions in some way - that is, rather than a belief being an attitude towards a proposition, a belief would have to be something more foundational - something that gives rise to your attitudes to propositions.

    I find "belief" harder to define that way, but it addresses a second problem I have here: namely, that you have to understand a proposition to believe it. Intuitively, I don't think so. You can believe that a proposition is true because you trust the person who utters it. Now, you can easily rephrase things to make it fit, for example:

    I do not understand proposition A.

    I understand the proposition "Person B understands proposition A," and think it is true.

    I understand the proposition "Person B thinks proposition A is true," and think it's true.

    With these additions, I could believe a lot of things to be true without understanding them. All I need to do is "trust an expert".

    But I think if I do this something gets lost. I have an ill-thought-through hunch that we generalise "trusting experts" from childhood on (the first probably being our parents), so that there's always some sort of social component already included. That is: "belief" may be a mechanism to restrict doubt, so that we don't find ourselves eternally unable to make decisions.

    In other words, maybe by judging "propositions" we tag as "important" we're really picking our team; maybe "beliefs" are propositional predispositions rather than attitudes? The likelihood of responding to a certain proposition either favourably or unfavourably? That way, you wouldn't form an ad-hoc belief every time you say "that's hogwash!"

    I apologise if this doesn't make much sense. It's just that if I see my shoelaces come untied, I bend down and tie them. If someone were to formulate that in propositions, like "Your shoelaces are untied" (fact: true/false) and "You should tie them" (value judgement: true/false), I can have attitudes to those propositions, but I have a hard time considering them beliefs just on the grounds that they've been formulated. However, when you formulate those propositions, beliefs do come into play. So I sort of think that beliefs are pre-linguistic and valuable even if not (fully?) understood.

    (I've actually considered that we substitute belief for understanding - that is, we ignore things we don't quite understand in order to contain doubt enough to render us capable of decisions - people with a greater tolerance for doubt would need less belief [tautology?], and we would be predisposed to defend our beliefs because losing them would render us incapable of decisions. The tolerance for doubt might differ not only by person but also by topic. But all that's even more tentative than the rest of my post.)
  • A question about the liar paradox


    True, you can rephrase this in many ways. What I'm addressing is the connection between syntax and self-reference that TheMadFool is trying to establish here:



    See Number 3.

    The difference between your example and the single-sentence versions lies in the type of reference, I think.

    Your example is endophoric (1. is cataphoric and 2. is anaphoric). The single-sentence versions are exophoric: you reference an object in the real world, which just so happens to be the sentence in question. I'm not sure any of this makes a difference, but if it does, that would be *very* interesting.
  • A question about the liar paradox
    I'll accept that because ''this'' may be defined to self-refer.TheMadFool

    You can rephrase the liar sentence:

    "The sentence I am uttering right now is false."

    "What I'm in the process of saying right now is false."

    What matters is that the subject of the sentence refers to the sentence it occurs in. No single component of the sentence need be self-referential by itself for that to happen.

    I don't understand why you want to define "this" to self-refer.
  • A question about the liar paradox
    But, ''this'' isn't like ''I''. If we stay true to the definition of the word then ''this'' doesn't apply to itself and it should for the liar paradox to be one.

    Of course we could invent a self-referential word e.g. ''thes'' and define it as such and the paradox would appear.

    If one were to be as exact as possible the definition of ''this'' doesn't include self-reference. It is grammatically incorrect (I'm not a linguistic expert).

    However, people do use ''this'' as you have (''this Australian needs a bath'' :D) but note that such forms of language are classified as referring to oneself in the third person. It isn't completely an instance of self-reference. People would find it odd to hear someone refer to himself in the third person.

    So, I still think the liar sentence is grammatically incorrect.

    However, as I mentioned above we could invent a self-referential word like ''thes'' and the liar paradox still is a problem.
    TheMadFool

    "This sentence is false," is only self-referential on the sentence level. "This" on its own refers to nothing at all; it's a determiner in the noun-phrase "This sentence", and that nounphrase is also not self-referential (It can't be because a noun-phrase can't be the referent for "this sentence").

    Finally, the syntax can only tell you that "this sentence" refers to a sentence that the speaker indicates. The sentence is not inherently self-referential: you could point to any other false sentence while saying it. There's nothing in the syntax, though, that prevents you from picking the very sentence the noun phrase occurs in, making that sentence (but not the noun phrase itself, much less "this" alone) self-referential. The liar sentence is perfectly grammatical, and the syntax is pretty much irrelevant, except that it allows the sentence to have a self-referential interpretation.

    Formally, "This sentence is false," is self-referential under the liar interpretation because the sentence's subject refers to the sentence it is the subject of. To be able to do this, the subject cannot refer to itself (and thus cannot be self-referential on its own).
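
    If it helps, the structure can be mimicked in a few lines of Python - my own toy illustration, nothing more; the Sentence class and its fields are invented for the purpose:

        # Toy sketch: reference as a pointer. No part of the sentence
        # refers to itself; only the whole loops back via its referent.

        class Sentence:
            def __init__(self, text):
                self.text = text
                self.referent = None  # what the subject "this sentence" picks out

        liar = Sentence("This sentence is false.")
        liar.referent = liar  # the exophoric reference happens to land on itself

        print(liar.referent is liar)  # True: self-reference at the sentence level

    The assignment is external to the object, just as picking out the containing sentence is external to the syntax.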

    The liar sentence is perfectly grammatical.