Comments

  • Is anyone here a moral objectivist?
    There was an excellent thread a while back on Lisa Feldman Barrett's way of looking at emotions as socially mediated categories for raw affects.Isaac

    Thanks for the pointer. I'll check it out when I have the time and inclination.

    I just don't think it's possible to privately interpret one's mental states to an extent where one can form propositions about them without recourse to social modes of interpretation. So for me to say that my hitting old ladies is moral would require that I am first fluent in the social activity of interpreting some behaviours as 'moral' ones. This is an activity like any other; they do not arrive pre-labelled. The act of labelling (and this goes for any of our thoughts) is a piece of socially learnt behaviour.Isaac

    I'd agree to this. Just to be sure: I don't think of "society vs. person" as a dualism. Society is the result of lots of people interacting (when looked at from below), and "identity" (in a more basic sense than current identity politics would have it) is a process of positioning yourself (when looked at from above). Because of this, I'd have to add that it's *also* impossible for social structures and artefacts to exist if they're not being enacted/interpreted by knowledgeable agents. There's something reflexive going on here.

    When I convince you that the earth is flat, this is just as wrong as it was before. When I convince you that hitting old ladies is morally good, that's still wrong, but not in the same way. The entire system has just shifted a little towards it being right. (Of course, it's very, very hard to convince people to begin with, and because of that it's unlikely to ever gain "critical mass", even in a subculture.) It's very likely always going to be wrong. But the dynamics involved make change possible in principle.

    I've sometimes used 'moral' as shorthand for 'morally good' so hopefully this shouldn't get in the way too much.Isaac

    So have I. It's hard to shed everyday usage.
  • Is anyone here a moral objectivist?
    No, we do different things with some of our desires than we do with language. Calling those desires 'moral' is a linguistic event. It's you talking to me at the moment. It's a social interaction and so it has to involve only social meanings for us to be able to communicate.Isaac

    Okay, after around 1 1/2 hours of trying to puzzle out this paragraph, I think I might actually start to understand where you come from. Is your take on this issue derived from, or at least compatible with, Skinner's behaviourism? That is, how public events teach us to tease apart a holistic private experience into linguistic concepts.

    When I hear "linguistic" my linguistics side takes over, so I was constantly looking in the wrong direction (if I'm right here).

    The answer for me is that society has labelled certain types of objectives 'moral' ones, just like it's labelled certain wavelengths of light 'blue'.Isaac

    The way I use "moral", it's more akin to "colour" than to "blue": "moral" =/= "morally good".

    I know you'd like a definition from me, but there are a lot of things I haven't figured out yet, and I have no "research goal" to guide a provisional one. For example, I've hinted in this thread that I think psychopaths can't act morally, but I'm not actually sure I really think that (for example: does morality necessitate perspective taking, or is a consequentialist approach sufficient?). As a result, I may be inconsistent across posts. Were I to attempt a definition at this point, the problem would get worse.

    I'm not quite sure what you're asking here, but I'll have a go at answering it.Isaac

    I disagree with nothing you said in the following paragraphs. I had the impression that you're taking the public sphere for granted, as if it weren't made up of lots of private experiences we face in behavioural aggregate. But if my Skinner epiphany is in any way getting me closer to your perspective, I have a direction to think in now. I'll need to let this settle for a while.
  • Is anyone here a moral objectivist?
    Ah, OK. Then yes, I'm saying there isn't a difference. In short, morality is a social concept, the language used to describe it is social too and so private meanings make no sense. One can only speak about one's morality using the public definition of what morality is and that definition cannot refer to a private feature otherwise it's not a useful word. Wittgenstein's beetle and all.Isaac

    This isn't a beetle-in-the-box situation. We do different things with morals than with language, which is what my first post in this thread was meant to demonstrate. "Using a wrong word" is not a moral failure.

    How would they know? As per the private language argument, unless their behaviour is publicly acknowledged to be labelled 'moral', how would they privately maintain a criterion for their behaviour to class as moral and still expect the word to play a meaningful role in communication?Isaac

    I do think you have a point there somewhere, but I also think my focus is somewhat different and we're not entirely talking about the same thing.

    Any real-life decision is ultimately private, and only through lots of private decisions is there something like a public sphere. I don't think action points only upwards, so to speak. It's no more warranted to impute a public sphere than it is to impute private experience.

    And even private experience is partly socially formed. My conscience is a home-grown trace of my social history, for example, but it's also partly informed by my personality (I don't like conflict, for example, and that would certainly have an influence on what I'd feel bad about; "I should confront this person, but I don't have the energy."). Some basic urges are socially formed. Toilet training comes to mind. Walking on the sidewalk, too.

    Few actions are purely moral. Most have an instrumental aspect, too. Whatever we theorise about the socially accepted moral goods is abstract, anyway, and needs to filter through your private decision-making process to become an action (or a tragically long hesitation).

    On the other hand, when it comes to meaning I'm not looking for similarity between people so much as compatibility: as long as our actions proceed without a hitch, it doesn't matter what the beetle-in-a-box is like (if it's even there). But, well, incompatibilities do occur, and at least for me it's not always easy to spot whether there's a misunderstanding or a disagreement. See this discussion for plenty of examples.

    How do you know? I mean how do you know it's a 'moral' compass, and not just any old compass?Isaac

    Okay. I'm hungry. There's a banana on the table. I don't like the banana and decide to hold off on eating it. There isn't a moral component in the decision I can find.

    I know my little sister is looking forward to eating the banana. Now a moral component enters my decision. I have one more "excuse" not to eat the banana. I like to think well of myself, so I'd like to frame it as a moral decision. But this also makes me de-emphasise that I dislike bananas. Then I can ask why I'd think better of myself if my motives aren't "being picky" (oh hey, there has been a potentially moral angle on it all along, and I didn't notice) but being "considerate". My motivation is a sort of compound, though, so whatever I wish to think about myself isn't all that important. A panel of disinterested observers could tell me how I consistently act, though...

    Yeah, how do I know? Maybe I'm just not hungry enough to eat a banana. But there is a constellation, and the ways to arrange the pieces are, to an extent, similar, and extended observation can get you a clearer picture. I don't think purely moral actions exist, and I also think completely amoral actions are rare. So the question is most likely "how do I know the ratio?"

    I probably don't, but I can guess and feel hurt when other people laugh at my guess and guess again.

    Yeah, that's actually where I'm going with this. Once we accept that 'moral' is a publicly defined term, we simultaneously accept the mess and the dynamism (like your definition here, by the way), and we have to accept that once a thing is 'moral', that's all there is to it. There's 'moral', 'not moral', and 'sort of moral, fuzzy at the edges'. But there's no way of working out that fuzziness; there's nothing more moral, it just is what it is, a messy, community-defined group.Isaac

    So where do you place protests, criticism, and conflict, if the moral realm is all public sanction? Don't forget that every single one of us is part of each other's context, even if only in some very minuscule way. How do topics (like, say, trans rights) enter the public discourse? I can't imagine explaining any of that without morally interested agents. (Meme theory, maybe?)
  • Is anyone here a moral objectivist?
    Not entirely sure what distinction you mean here.Isaac

    Easy things first. I'm talking about the distinction between being wrong about language, and being wrong about morals. I can't figure out how to read you and still be able to tell the difference.

    I think we agree 'bad' doesn't mean anything on its own beyond a vague indication toward a negative. One can be a bad actor, but a good person. One can be a bad person but a good actor. So bad and good only mean anything relative to some objective or ideal. Something which is morally bad is bad relative to ideals of morality (behaviour, character...). If I've understood you correctly, we're on the same page here.Isaac

    Yes, as far as I can tell, we're on the same page here.

    The word 'moral' has to have some public meaning for it to be useful. It has to identify some publicly available set of behaviours or ideals or characteristics, otherwise it would serve no purpose and be impossible to learn how to use. So I don't see how it can mean 'whatever behaviours you think fit'. That would be a private meaning.Isaac

    That one, I think, needs some unpacking. First, I think this is the place where I should lay open my bias. I've studied sociology at university, but the discipline I fell in love with was linguistics. So while I'd roughly agree that the word "moral" has to be useful when referring to the public, I also think it has to be applicable on all social levels from the individual upwards, since a person has morals, and any grouping has morals, and there's no guarantee that they're the same, since not every behaviour that differs from public morals is immoral or amoral.

    A person's behaviour that doesn't conform to the public set of rules, for example, can be classified in three distinct ways:

    a) moral (person acts according to private moral compass)
    b) immoral (attempts to act according to a moral compass, but fails, maybe due to a lack of will power)
    c) amoral (psychopaths see morals as an external imposition)

    Now those are psychological terms, as they pertain to the way individuals make choices. There's a social level, too:

    a) moral (in accordance with some superindividual set of rules - a culture or subculture)
    b) immoral (deviant)
    c) amoral (actions that have no moral import; choosing to eat a hotdog over a burger)

    The easiest way to resolve this is via separate lexical entries. (During analysis we'd be calling only one of those sets "moral", but we'd have to decide beforehand which one, to avoid confusion.)

    I've noticed about myself that when I say someone acts morally, I mean that the person acts according to an inner moral compass, regardless of whether that compass is aligned with the morals of a greater group. When I mean to say that someone acts in accordance with a group's morals, then I say it like that. So a psychopath may act in accordance with his culture's morals, but he doesn't act in accordance with any inner moral compass. My speech habit is to say a psychopath doesn't act morally, even if he chooses to stick to his culture's rules.

    If possible, I'd like to find a way to use the word moral on both the personal and social level, via some coherent theory, but... it's hard. I believe that people recreate social structures in their daily conduct, and by that I mean that a culture usually incorporates not only typical moral rules, but also typical moral conflicts (e.g. pro-choice vs. pro-life). Such parameters shift, but some rules are more stable than others. Lines like "abortion is murder" or "(online-)piracy is theft" are emotional appeals to less controversial rules, but you can craft rational arguments about why this should be the case. And these discussions are part of the environment in which we develop our personal morals - the younger we are, the more potent the effect, I think.

    I lean towards a dynamic meaning of moral that has something to say about all the levels. Individuals move through space-time as social vortices, accumulating and disseminating morals through their behaviour. And any analysis should account for all levels, if possible.

    So a person who personally thinks he should follow all of society's rules, and has no particular confidence in his own judgement, would act in accordance with society's morals, but he's also likely to encounter plenty of criticism as a "stickler for rules", and will be asked to lighten up. You cannot analyse this under the aegis of morality if moral only has the public meaning.

    I feel like I've been rambling, but I'll leave this as is, or I'll never finish this post. To summarise, I definitely think that the term "moral" needs to deal with the public sphere, but I think that ideally it should deal with the entire social spectrum.

    You might want the public meaning to be something more than just an arbitrary set of behaviours, maybe publicly available membership criteria such that our violent student could make an argument that his behaviour fits the definition. But, as I said to SophistiCat, it seems highly unlikely to me that the meaning would be so pure, given the language's history, but even if it was, it would still have to have boundaries in order to be a useful word at all.Isaac

    Nah, I'm perfectly fine with it all being messy. As I said above, though, I think we need to be careful about the word's scope. I consider morality to be some sort of never-ending process where specific rules are both input and output of thinking-feeling agents' actions. It's going to be messy (not sure about the extent to which it is arbitrary).

    Hitting old ladies is far from any of the ideals or standards within the general public definition of moral, so doing so is morally bad.Isaac

    In moral discussions, people tend to choose non-controversial rules, so controversies are going to feel implausible. You need to suspend disbelief, though, if you're going to use such examples for a thorough study of what the concept could mean. If my personal morals demand that I hit one old lady per week, and I do that, I'm not acting amorally. I'm not acting immorally with respect to my own moral compass, but I am acting immorally with respect to society's standards, and I'm going to have a hard time hitting old ladies in prison. (Maybe I'm secretly relieved, because I don't like hitting old ladies?)

    There's really nothing you can say beyond that, if you're not aiming for universalism.
  • Is anyone here a moral objectivist?
    Hopefully the former, especially as I wrote 'work' where I meant to write 'word' (new phone, different keyboard).Isaac

    I didn't even notice the typo (so much for careful reading...). And I'm still not sure what you're saying here.

    In order for the student to merely 'disagree' here, rather than be wrong about the meaning of the term 'morally bad' he must have his own private meaning of the term 'morally bad', one which is in disagreement with the one the rest of the language community uses. If, on the contrary, he does not have a private meaning of the term 'morally bad', then he must acquiesce to the meaning determined by the language community, and that does not include hitting old ladies.Isaac

    "Morally bad" represents a "negative moral evaluation". "Hitting old ladies" is a state of affair prone to moral evaluation. A person who doesn't evaluate hitting old ladies negatively would be using the term "morally bad" incorrectly if he said from his perspective that hitting old ladies is morally bad, but he would be using "morally bad" correctly if he said from his society's perspective that "hitting old ladies is bad". Moral evaluations are always tied to a perspective. The meaning of "morally bad" isn't private; the personal evaluation tied to the word is, independently of whether there's agreement or disagreement (or indecision, or indifference).

    You seem to be getting rid of a useful distinction, and I can't figure out why. What do we get in return?
  • Is anyone here a moral objectivist?
    You haven't answered the question though. I wanted to know why you confidently allowed the student to have his own private meaning for the term 'morally good', but you're deeply suspicious if he tries to claim his own private meaning for the term 'hitting'?Isaac

    I didn't answer that question, because that's not what I intended to say, and - to be honest - I don't think I did. I called it a linguistic failure. Being wrong about "good" (he's not wrong about "morally"; the adverb's appropriate to the situation) and being wrong about "to hit" are both instances of linguistic failure. (I do allow him a private meaning for both words in some limited context - say, a diary written in code.)

    Basically, I misinterpreted your question, and I'm still not sure why you'd think I allow a private meaning for the term "morally good".

    You do seem to mingle language and morals at a deep level, in a way I don't quite understand. Sure, they're entwined, as you say, but it's generally not hard to follow the distinct threads, horrid tangles notwithstanding. Also, both language and morals involve social rules, so if you abstract enough you may end up in a place where they're the same, but then they also lose a lot of their usefulness as terms.

    For example:

    When the grocer delivers potatoes, you 'ought' to pay him because that's the meaning of the work 'ought'.Isaac

    This seems needlessly hard to parse or outright wrong. I don't know which.
  • Is anyone here a moral objectivist?
    So if, in the first example, the student says "I understand that you think it's 'hitting' to push my fist toward an old lady this way, but I disagree," why does no one treat it as a disagreement? It's not, he's just flat out wrong about what hitting is.Isaac

    Well, he's certainly flat out wrong. Whether or not he, in addition, disagrees is an empirical question. Personally, decontextualised like in this thread, I'm more likely to imagine irony designed to dismiss your intervention.

    If it really is a disagreement about the word "to hit", I'd be inclined to think that he's trying to find a "loophole in the law" rather than to act morally. But that, too, is an empirical question. Very unusual people do exist.
  • Is anyone here a moral objectivist?
    If a foreign student learning English pointed at someone hitting an old Lady and said "stroking", you'd be inclined to say "no, not 'stroking', that's 'hitting'". If they then said "morally good", why would you not similarly correct them and say "no, 'morally bad'"?Isaac

    If a foreign language student sees someone hitting an old lady, intervenes, and says "No, no. Morally good," we likely have a language problem. - A linguistic failure

    If you see a foreign language student hitting an old lady, intervene, and he says "I understand that you think it's morally wrong to hit an old lady, but I disagree," we likely do not have a language problem. - A moral disagreement

    If you see a foreign language student hitting an old lady, intervene, and he doesn't understand why, we likely do not have a language problem. - A moral failure

    There can obviously be overlap, but that's the gist of it.

    The relation between discursive ethics and practical morals is a rather interesting topic on its own, I'd say.
  • What School of Philosophy is This?
    Avery
    • don't believe that objective moral facts exist, because I haven't seen any compelling evidence that they do exist.
    • believe that objective morals are not required for a logical model to explain the world.
    • things start to make a lot more sense when you remove objective morals from a model of the reality.


    Have you tried out social relativism?

    A social relativist would say there are "objective moral facts", but they're probably not what you think of when you say the above. A social relativist would say that moral facts are a form of social structure. For example, when you walk down the street and notice an open door, you're unlikely to walk in. There's a range of likely reactions, but you're only going to make an overt moral decision if your situational curiosity comes into conflict with "this is not my house; I have no business entering". But even if you're just walking by, maybe without much curiosity, there's a habitual moral layer to your behaviour.

    An accidentally time-travelling caveman would have a very different reaction, because he'd have been socialised in a very, very different moral environment.

    Wanting moralist right/wrong rules is, I think, very common in humans, because we use abstraction to navigate our environment. The problem with fixing moral rights and wrongs, though, is that our theories about what's right and wrong are a crucial part of our moral upbringing, and to the extent that they influence our behaviour, gaining new moral insight keeps the moral environment changing. Moral decisions are only perceived as such when they're problematic, and if a particular group of people solidifies a typical problematic situation into a norm, then there's a change in the moral environment for these people. And that change causes new unforeseen problems. There can be no universal right/wrong rules for this reason, but some constellations of rules can be more stable than others.

    So I think there are objective moral facts, but they're not about what's "really" right or wrong; they're about complex moral behaviour.
  • Privilege
    And if you hadn't said what I quoted right now, I'd probably not have realised it but: the headstart you get isn't your privilege. You get the headstart because of your (unnamed) privilege (I'm guessing it's shoe size, am I right?).Dawnstorm

    Eh, I'm not sure that's quite right. I'm having second thoughts. It's mostly a terminology problem, but I'm pretty sure privilege is attached to a factor, and yet it's not quite the individual advantage either.

    (What's the etiquette for such a case, where I have amendments to make and mine is still the last post? Editing my post? Quoting myself?)
  • Privilege
    But focus on only one race, one game if you like game theory.Alejandro

    I'm not sure how to deal with that example, though. Sharing the cash is a stop-gap measure at best, and flaunting your privilege at worst.

    A few other things:

    Eliminating my headstart can only be done by me right?Alejandro

    Generally, no, you can't eliminate your headstart. It's baked into the system. In your example, you can walk back to the starting line, sure. In real life, you can't stop being male, white, or straight.

    And if you hadn't said what I quoted right now, I'd probably not have realised it but: the headstart you get isn't your privilege. You get the headstart because of your (unnamed) privilege (I'm guessing it's shoe size, am I right?).

    However, if I do it, I will lose the race because I do not have the same abilities that the others may have.Alejandro

    Right. Now you're underprivileged, and the headstart is "affirmative action". You're doing the right thing by investing in your skills. Affirmative action will cease once you catch up (or so the theory goes). Except it's only one race, so it's pretty pointless to invest. (And anyway, it's your example. You could have been an excellent runner with a headstart, who can afford to take it easy.)

    Finally, if it's only one race, and you're slow (and thus underprivileged), and you don't get a headstart, that's all right, too, since one of the winners might give you $50 to teach him your expertise.

    It's not about one race. It's never about one race. It's about a repeated and systematic pattern across many, many races. So I don't really know what to say about the solution, other than that it doesn't change much (and in some cases it might be better to keep the money so you're not the well-meaning but clueless guy who rubs salt into a sore wound by offering a trifle).
  • Privilege
    What if me, a privileged and slow individual, trade $50 with anyone that teaches me how to run faster and win? I still have my privilege, I am not renouncing it, but now I am using it as a tool from which someone else may benefit.Alejandro

    So, you, a slow runner, keep track of the guy who does not have your advantage but still almost beats you, and give him the $50 you wouldn't have had without your headstart so he can teach you how to run faster, and so that in the next race, he'll have an even harder time catching up. He may get bitter, and since he now has $50, his motivation to run hard drops a little. However, all the others will race each other (not you), so the fastest of them can have $50 from you. You get $50 with each race, and expand your advantage, until you're good enough that your investment is just a charity holdover to flaunt your status, with lots of guys behind you thinking: "That should have been me."

    That's not a solution; that's the problem.
  • The idea of "theory" in science, math, and music
    What I meant was that the idea of a "mode of limited transposition" didn't exist as something worth naming until exactly the moment when it was used by Messiaen (at least that's my understanding).

    I don't doubt that there's math in music, but I think the process by which math becomes musically relevant is fascinating and potentially understudied (or I just haven't found the right resources).
    Halley

    It's... difficult. I'm not a philosopher actually, but music theory has always reminded me somewhat of linguistic theory (something I know a little better). What they have in common is that we have lived systems we learn, and theories about how they work that feed into learning.

    So basically, you have this theory and it can be used in different ways:

    1. Describe what people are doing
    2. Use the theory as a method of learning (which impacts what people are doing and creates a feedback loop)
    3. Scour the theory for logical possibilities and see what people could be doing but aren't
    4. Use it as a set of rules to judge the value of the product

    So there's a question in what ways a descriptive music theory is useful. For example, the following video discusses basically what theoretical framework is useful when describing the harmony of "Sweet Home Alabama": Tonal Harmony? Modal Harmony? Or something else (with Tagg's tonic-outgoing-medial-incoming loop as an example)?



    The problem here is that what sort of description you find most useful often depends on what you hear, and what you hear may be influenced by the theory you've been exposed to.

    Similarly, you can write a piece of music with a particular theory in mind. And then you can make an adjustment because it sounds nice, but it's not entirely clear what you've done in terms of theory. Now, someone else comes along and doesn't like the adjustment you made. That person could easily try to use the obvious non-conformity as a legitimisation for why it doesn't work, but that would involve dismissing that it works for others, and dismissing the question of why it works for others as unimportant (since they're wrong).

    So you have the same theory as a set of regularities interpreted as (a) a description of what's going on, (b) a generative aid, or provisional template, and (c) a system for judgement of good and bad. I see that all the time in both composition theory and linguistics.

    Scientific theories for behaviour-external fields such as physics don't have (b) and (c). If you see an apple floating up into the sky you won't punish it for violating the laws of gravity; you'll try to figure out what's going on. And gravity, unlike language or music, isn't something you meaningfully participate in. (You do participate in gravity, obviously, to the extent that you're a physical object).

    I'm not familiar with Messiaen. It seems to me he's been scouring the formal properties of the theories for things that are logically possible, some of which have been done before (the whole-tone scale is identical to his first mode as far as I can tell?), and some maybe haven't. You could, in theory, compose music with a random scale (random variables: number of notes (1-12), location of notes on the chromatic 12-tone scale). Even then, there are still elements we're taking for granted (for example the primacy of single notes a half-step apart), and they're ingrained in culture (for example in the form of instruments). That's not a problem if all you want is a generative method you can deviate from at pleasure, or if you want judgement standards. As a descriptive set of rules, though, you might encounter music you can't describe or will "misunderstand" if you use that particular theory, and that's something you need to be aware of.
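
    Tangent, but it's easy to make that concrete, so here's a quick Python sketch - my own toy construction, nothing to do with how Messiaen actually worked - of both ideas: checking whether a scale maps onto itself under some transposition (which, as far as I understand it, is all "limited transposition" amounts to formally), and rolling a random scale from the two random variables above.

    import random

    def limited_transpositions(scale):
        # Transpositions (1-11 semitones) that map the scale onto itself.
        pcs = {p % 12 for p in scale}
        return [t for t in range(1, 12) if {(p + t) % 12 for p in pcs} == pcs]

    def random_scale():
        # 1-12 distinct notes picked from the chromatic 12-tone scale.
        return sorted(random.sample(range(12), random.randint(1, 12)))

    whole_tone = [0, 2, 4, 6, 8, 10]           # C D E F# G# A#
    major = [0, 2, 4, 5, 7, 9, 11]             # C major, for contrast
    print(limited_transpositions(whole_tone))  # [2, 4, 6, 8, 10] -> limited transposition
    print(limited_transpositions(major))       # [] -> no symmetry, so not one
    print(random_scale())                      # e.g. [0, 3, 4, 7, 10]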

    Not an expert in anything, but these are my thoughts about the topic.
  • Moore's Puzzle About Belief
    As you say, it's a silly statement, but also a true statement. That's the puzzle.Michael

    For me, the puzzle is why this is a puzzle, but then I haven't read Moore and know little about him and the context of this puzzle.

    Here's the thing: "It's raining, but I don't believe it's raining," isn't necessarily a silly statement. What if we're talking narrative present (present tense for past events) with an intrusive narrator? "It's raining (as I, the narrator), but I (my past self) don't believe it's raining." When speaking, you need to take perspective into account in a way you don't have to if you consider well-defined philosophical propositions. If you ignore perspective, you can create plenty of absurd situations:

    Bill: I'm Bill.
    Joe: No, I'm not.

    Both are correct, and yet Joe seems to contradict Bill. What a puzzle! This constructed situation is nonsensical, because it entails that Joe can use "I" correctly but can't parse it when someone else uses it. (Not sure if cognitive impairments exist that make such a situation plausible.)

    Similarly, Moore's puzzle is the result of assuming things about Macintosh's knowledge and then decontextualising him so that his knowledge is only partially relevant.

    It's just not a puzzle that you can't truthfully speak the truth about whether or not it's raining if you don't know whether or not it's raining. Macintosh could gamble on it, though, if his intention is to speak a true sentence, rather than to speak the truth about rain. Basically, Macintosh would be betting on himself being wrong about rain. That this leads to real-life absurdity doesn't automatically cause a philosophical problem. It depends on what problems you want to explore (and this is where my ignorance of Moore limits me).

    I'm not surprised they say this puzzle helped develop pragmatics. It's definitely relevant.
  • Belief in nothing?
    Okay. But if you were saying, "So far no one has been able to convince you that World War II actually occurred"...where would that leave us?Frank Apisa

    "World War II" is a valid value for "occurring". Even if we had no evidence, the meaning is fairly straightforward. "God", the creator god of the monotheistic religions at the very least, is different from that. If "God" created everything there is, then existance is a product of that process, and to say that "God" exists either sends me into an Escher painting equivalent of meaning, or it's an incomprehensible mystery for which I have no intution.

    In any case, the logic for the empirical world, which I'd be prone to apply to things like "World War II", doesn't apply. If it did, most theists I know wouldn't be able to believe in God; as it is, when I outline what sort of God I don't believe in ("bearded man in the sky"), they say they don't either.

    God concepts are manifold, and Shintoist kami are very different beings from the monotheistic Gods, but there's also this spiritual, transcendental whiff to it all that I have trouble understanding. I always end up at a point where there is no discernible difference between any one God existing or not. The only difference I can see is the word-behaviour of the believer.
  • Belief in nothing?
    This is true for everything. X either exists or does not exist. It is a mutually exclusive proposition.Frank Apisa

    My entire point, though, is that so far, no-one's been able to convince me that "God" is a valid value for X in that instance.

    A lot of this question comes down to sentence structure and which words are used. “All Gods are fictional” and “Gods don’t exist” are synonymous, but add “I believe” to the front of each and (perhaps?) one negates itself, while the other does not. Hence why I say it comes down to sentence structure and essentially the logic behind grammar.Pinprick

    Language isn't that logical, though, when used in the wild. If you insist on thorough grammatical logic within philosophy, you either have to be very careful how you phrase things, or you create an insulated bubble, where your conclusions have little to do with the world we live in.

    Under the assumption that "All Gods are fictional," and "Gods don't exist," are synonymous (which is not a given in every context), you could lead someone to commit to the positive phrasing and thus saddle them with a belief. Intuitively, I'd consider that move a rhetorical trick rather than anything philosophically meaningful.
  • Belief in nothing?
    They do not "believe" C...they KNOW C.Frank Apisa

    ?

    You are supposing that A is a positive statement...and B is a negative one. But that is not so. Both are positive statements. If made as assertions...BOTH would bear a burden of proof from the person making the assertion.Frank Apisa

    This isn't about the burden of proof. It's a negative statement, because it negates a positive statement. I brought this up precisely because the relation between the syntax and the semantics isn't as straightforward as it appears.

    If I were to claim that the platypus doesn't exist, that would be a negative statement, but the burden of proof would be on me. Whether a claim is positive or negative in syntactic structure doesn't really impact the burden of proof.

    I'll demonstrate why I brought this up with my reply to Pinprick.

    Perhaps?

    All Gods are fictional.
    No Gods are fictional.
    Pinprick

    That may work. The question, then, is whether "All Gods are fictional," and "Gods don't exist," are semantically tied together, phrased once with a positive and once with a negative structure. If so, can you say that there is a "belief in nothing"?

    I don't really have an answer to this myself, except that I think it pays to make a difference between intuitive concepts, semantics tied to word structures, and the structures themselves.

    I think this because 'theism' is defined - definite - insofar as it's a 'conception of divinity' that consists of distinct truth-claims about g/G, and therefore, to my mind, is not "meaningless" ontologically, epistemically or ethically. g/G, I agree, is meaningless, but what we say about g/G - if it's a proposition - is not. (Obviously, I exclude noncognitive theism, for instance, from consideration and give the mainstream / classical theists their cognitive due.)180 Proof

    I tried to address this in a longer post, but I talked myself into a corner and got confused.

    Basically, I view statements like "God exists" as having the structure of a statement, but a social function that is appellative rather than referential. A complex of behaviours is tied to ritualistic verbiage. Except that's clearly not how the theists around me see it. And that's where my confusion enters.

    Note that I make a difference between "undefined" and "meaningless". It's my impression that God being "undefined" (or "undefinable") is part of the mystery and thus meaningful to theists. I can't mine meaning that way. I don't know how that works. And that's what makes me an atheist.
  • Belief in nothing?
    A: God exists.
    B: God doesn't exist.
    C: God may or may not exist.

    Some people believe neither A nor B, because they believe C.

    For me the concept of God holds no meaning, so I reject A, B, and C as the same sort of alien stuff. I reject the topic itself as meaningless in both theory and practice, not the propositions as untrue.

    Also, how you phrase things often determines what the negative is:

    To use Coben's example (abridged because I'm too lazy to type it all):

    A1: There is a feline that hasn't been catalogued by science.
    B1: There is no feline that hasn't been catalogued by science.
    C1: There may or may not be a feline....

    A2: Not all felines have been catalogued by science.
    B2: All felines have been catalogued by science.
    C2: All felines may or may not have been catalogued by science.

    Can anyone here think of a way to phrase "God doesn't exist," as a positive, to which "God exists," would be a negative? I can't.
  • About This Word, “Atheist”
    But, if you want to suppose someone saying that either "yes" or "no" is correct for a "yes or no" question...go with it.Frank Apisa

    That's not what I'm saying, though. I'm saying that my hunch is that it's possible to be fooled by the grammatical structure of a sentence. Just because you can formulate a yes/no question about gods' existence doesn't mean that this formulation is a valid treatment of the concept of God.

    For what it's worth, I do think I'm overshooting my mark by treating all god concepts the same. Even translation is difficult. A monotheistic God is rather different from the Greek lot, and they're both pretty different from Shintoist Kami. I'm shaky on all this, because I'm generally not bothered by any of this in my daily life.

    I mean what about:

    Does the Mellow-winged Staggerthwart exist? (Can you answer the question with yes/no before figuring out what this is supposed to be? I just arranged random words here. There's no meaning to it.)

    Or self-referential: Does existence exist?

    Not all sentences of a certain structure are necessarily valid representations of... well, anything meaningful. It's an empty phrase that traps people in an unresolvable conflict and sorts them into two sides, where emotional intensity is substituted for content. The divinity aspect allows people on either side to shift goalposts at will. People can be umpires in the game, but they can't do anything about the goalpost shifting, because it's in the rules.

    Goalpost shifting is easily possible when it comes to nearby aliens, too, but it's not in the rules. I realise that the burden of proof, here, is on me, and since it's just a hunch (with ever-decreasing certainty about different God concepts), I don't quite know how to do this or if I can at all.
  • About This Word, “Atheist”
    One of the things I "got" there, though, was a predisposition toward "There probably are no gods"...which is a perfectly fine take to have on the REALITY. Fact is, either there is at least one god...or there are none. So the hard atheist and the hard theist have at least a 50% chance of being correct. And the use of "atheist" as a descriptor for someone with that disposition MAKES SENSE.Frank Apisa

    Hm, maybe. It's entirely practical, though. I definitely behave as if there are no gods. Now, I'm a rather cautious person, and I even have a tendency towards anxiety. I'm fairly sure that if I were, in the back of my mind, considering the possibility that there are gods, I'd be worrying about that, and it would be a hindrance in making decisions. What if I angered a god? Things like that. I have no such worries, so it'd probably be more a predisposition towards "There are no gods," without the probably. Which would be even further down the atheist road, under the three-category model.

    My position, though, is better described as: the hard atheist and the hard theist have a 0% chance of being correct, because their respective claims aren't meaningful enough to trigger correctness conditions. Both claims can be disregarded. (This implies that an agnostic who believes that either the hard atheist or the hard theist is correct would also have a 0% chance of being correct.)

    This is further complicated by the fact that I'm a relativist, though. I can only say this with some degree of confidence within the confines of my own worldview. I strongly suspect that theists at least do attach some sort of meaning to the proposition, but since all my perspective-taking exercises in that direction have failed, I can't behave as if. In a sense, this makes my atheism mostly performative, with no content.

    There are strange flies in your country. In mine they are not so easily frightened off. Declaring yourself an atheist is the easiest way to get bitten by flies.David Mo

    I'm Austrian. Upper Austria, to be precise. It's a very secular life around here. You won't even talk about religion at all until you know each other a bit better (or the context warrants it; e.g. you're talking about news). I'm pretty lucky in that respect. Pretty much the only people trying to convert me are Jehovah's Witnesses.
  • About This Word, “Atheist”
    Do these terms mean that you have observed the stimulus prior to its description, or, you heard its description prior to your observation, respectively?CeleRate

    Yes.

    If the order is the distinction, I'm still unsure how that would be the critical variable. Wouldn't the extraordinariness of a claim be more pertinent?CeleRate

    It's a framing issue. When you see a thing and some of its aspects surprise you, you'll want to integrate it into your worldview. It changes via direct experience. If you learn about a concept via a word, then you assume the word is meaningful, and you'll try to figure out what it means. You'll first have to try to figure out what the word means, because it's possible that the "thing" is already part of your worldview, but the other person uses an unfamiliar word and sets different accents. If that's the case, you'll have an oh-you-mean-X type of experience. Basically, an unfamiliar word and a description that doesn't trigger recognition doesn't necessarily introduce you to a new concept. It might introduce you to an unfamiliar perspective on a known concept. That is: you get new information about the person you're speaking to via a familiar concept (but only if you figure out that they're referring to a familiar concept).

    But if you're really introduced to a new concept, you'll not be "naive" about the concept when you encounter the thing in the wild. From the get go, your take on that thing will be influenced by the perspective of the person who introduced you to the concept. Part of the world-view integrational work has already been done. The more abstract the concept, the more pronounced the effect is.

    At some level of abstraction the concept itself might actually be an interpretative mould to organise several disparate perceptions and/or feelings into a "comparative matrix". I think words like "love" and "justice" fall into this category. Anything that's culturally specific you usually learn about during childhood, a time when you're still consolidating new concepts into a world view. A lot of these things feel very basic later in life, but you actually absorb them early on by imitation, trial and error. When there's a concept you feel is vital to others, you might be motivated to actively seek out clues. A series of Is-this-it? experiences until you're satisfied. If you fail to acquire too many culturally specific abstracts, you're going to have to find other ways to deal with it. I didn't acquire the God concept properly, I think, because I sort of tagged it as make-believe, like the Easter Bunny, who supposedly coloured and hid Easter eggs (it was clear to me that bunnies can't hold brushes, that all those pictures were cartoons, and that my parents were smart enough to know this, too, yet they'd never admit that they were responsible - I thought God was a similar sort of game; I remember the surprise when I found out they were actually serious about that).

    One important question about word-first concepts is this: how do we satisfy ourselves that this thing or this constellation of things corresponds to this concept? (Conversion experiences should be interesting.)

    Maybe it would help me to understand the epistemology you use to develop an understanding of things contained in the universe, and what is meant by level.CeleRate

    Sadly, that's a mostly intuitive process, and I'm not so sure how to describe this myself. I'm not even quite sure what I mean by level. When I look at the word "God", I'm trying to figure out what it could mean in a way that would make sense within the confines of my world view. Since I have a functional world view that does fine without the concept, this is difficult. So it's mostly an exercise in taking another person's perspective. But the God-concept is opaque.

    Unicorns, for example, are comparatively easy. They're physical objects. There can be things that look like unicorns, and those things then either are unicorns or not. I don't need complete information. For example, I don't need to know the gestation period of unicorns, unless that were the easiest feature with which to distinguish them from mere single-horned horses.

    Basically, I'd need some way to check for evidence of God, or some sort of perspective that allows me to interpret stuff that's there as evidence for God. I've developed the unsystematic intuition that if you have faith in God, everything is potential evidence, and if you don't, nothing is. And that's a bit of a road block. I don't think there's a specific direction my God concept has to... concretise?... before I can really tackle the question of existence.

    That's precisely the area where I confuse myself the most, though, so I doubt I can explain myself very well here.

    One's world-view is ultimately what a given individual believes is understood. But people's worldviews can undergo conversions.CeleRate

    I'm not sure I'm reading you right here, but I think the bulk of one's worldview is unconscious, and it's less a finished product and more an ongoing process. Crises will lead to restructurings, and things like epiphanies may not be as sudden as they seem to your conscious self (on account of a sudden trigger). I think I may be using the term a little more broadly than you do in this paragraph (and also a little less precisely as a consequence). There's nothing I disagree with here, though.
  • About This Word, “Atheist”
    The easiness is something I experienced. Trial and error. It would have been different for Huxley. For example, not believing in God, it seems to me, was quite a bit harder in the 19th century than it was a hundred years later, and that's likely not the only difference. People tend to leave me alone when I say I'm an atheist. It's just not a big deal. Saying I'm an agnostic is more likely to invite discussion, and I'm not always in the mood. Trial and error helped me find out that I was happier if I generally said that I was an atheist and clarified that I was actually an agnostic when already in conversation on the topic. Nobody took offence, or thought I'd been lying to them. Also, when time was of the essence, "atheist" was simply a more reliably known word. In my day-to-day life, I'm very pragmatic about this.

    I'm being difficult in this thread, mostly because it's about the term's meaning. I'm stating my preference, but what I'm actually advocating is to know and accept all the definitions out there, at least passively, when hearing or reading.
  • About This Word, “Atheist”
    How about if someone says "unicorns don't exist". Would one be unable to not believe in unicorns if one understood (maybe even imagining renditions seen) what is meant by the question? Or, is there a different point I missed?CeleRate

    In this paragraph, I was using the "atheist" definition that says you need to believe that God doesn't exist. If I don't know what "God" is supposed to be, I can believe neither that he exists nor that he doesn't exist. This means that I meet the standard for "does not believe God exists," but I do not meet the standard for "believes that God does not exist." There are higher standards for believing a negative statement than there are for not believing the corresponding positive statement.

    But there are complications here; the short version is I understand the concept of unicorns well enough to believe it very likely that unicorns don't exist. I cannot say the same for God. But what's the difference?

    First something obvious: Do I believe sparrows exist? Yes, I do. I've seen sparrows before I even learned to speak. I can point at the bird and ask, "What is this?" It's a thing-first concept.

    But if you tell me about the platypus, I might be skeptical. Does such a creature really exist? It's a word-first concept. You describe the creature, and it sounds really unlikely. Maybe you've tried to sell me on drop bears in the past and laughed at me when I was gullible? It's a word-first concept for me, but there's a hierarchy of ever more convincing evidence: pictures, videos, seeing the real thing in a zoo.

    A unicorn is a word-first concept, too, for me, but the word's cultural status is "mythical creature" rather than "animal", and that complicates things. The unicorn sounds unlikely, but maybe it's not impossible. I might believe it exists, the way a crypto-zoologist would: somewhere out there is an animal that fits the description more or less closely. Maybe it's a hidden species? Maybe it's an occasional mutation of a known species? But if we're sufficiently influenced by myths or fiction to think of it as "magical" in some form (say, it's not really a unicorn if its horn doesn't have healing powers), then a real-life horned horse simply won't count as a unicorn. But the concept is still understandable. I'd have to say that it's unlikely a unicorn exists to begin with if we expect an animal, but exponentially more unlikely if we actually expect a magical creature.

    Things get more complicated still if the myth in question is alive and well in the culture you operate in. A word-first concept believed on faith has dubious evidence requirements. Once you reach the level of the Christian God, you have an entity where nearly everything in existence can count as evidence, simply because you have faith. I don't think that people relax their requirements for evidence; it may be just that different sorts of entities require different sorts of evidence. But if I don't understand what sort of entity God is supposed to be, I'm not sure how to look at the world to find evidence. I can dismiss the concept as making no sense (which is what people do when they parody the concept of God with "invisible pink unicorns"), but I can't "believe that God doesn't exist". The concept never reaches a high enough epistemic level within the confines of my world view. Cultural practice is important here, because I know people who believe in God, but who are neither gullible nor idiots.

    However, I'm not sure I understand what distinction you were alluding to in the comparison of the two propositions "God exists," and "God doesn't exist". Thanks
    CeleRate

    That's without a doubt the hardest concept to explain, not least because I haven't actually worked this out myself. It's more a hunch than anything, and it ties in with the above: what counts as evidence for God, and how do you have to look at the world to see those... things? as evidence. To what extent am I just using language differently from someone else? Does God have a clearly demarcated reference in the real world (as a unicorn would have were it to exist)? At some point in the process I abstract so much that I suspect the difference between existence and non-existence might disappear if it were possible to compare worldviews directly (it isn't; worldviews other than your own are only available via interpretation through the lens of your own, and how much of human worldviews - if anything - consists of human universals isn't clear).

    I've never come to a clear understanding on this myself, so I'm really struggling to put intuition into words.
  • About This Word, “Atheist”
    Is there anyone here who uses “atheist” as a descriptor or part of a descriptor…who falls outside of that parameter? I’d love to discuss the issue with anyone who does.Frank Apisa

    That's me. Or at least that's my self-perception; I'm not sure you'd agree.

    I definitely think that "God exists," and "God doesn't exist," have the same epistemological status. They're both undecidable in my world-view, because I don't know how to order things in a way for the concept to make sense. There are simplistic concepts of God that I do believe don't exist (e.g. an old man with a beard in the sky), but most theists don't believe in those either, so these simplistic concepts don't count.

    Pondering the question of God is a bit like trying to run a piece of software that won't run on my OS on a shoddily written emulator. The functions the program fulfils are either not very important to me, or I have programs that actually work fine on my OS (not without the occasional bug) that do them for me. The only reason I'm bothering with the program at all is that many people say it's a must-have and keep asking me what I think of it. What I think of it is that it's a nuisance, because the emulator sucks, and I'd rather not bother with it at all when I have workable alternatives.

    My daily life experience back when I self-identified as an agnostic was that it was still easier to call myself an atheist, because not everyone knows the term "agnostic". The question I used to encounter most was "Do you believe in God?", to which a yes/no answer was usually sufficient. The line isn't just a question about the existence of God; if you grow up in a Catholic household and go to church on Sunday, you're intimately familiar with the Apostles' Creed ("I believe in God, the Father almighty..."), and at least that sort of contextualises the question. It's a question about faith, not about whether you believe a proposition. In context, I can talk about why I don't really fit in. It's a social question.

    Most of the time I used the term "atheist" (while calling myself an agnostic in a more technical context), it was in a really banal context. ("Oh, it's nearly time for church. You coming?" - "Nah, I'm an atheist." - "Gotcha. See you later." -- I wouldn't have been giving them information here. They're fine with a nonbeliever coming along, but by emphasising that I'm an atheist, I'm telling them nothing's changed.)

    To me the question "Do you believe in God?" loses all meaning when I take it out of its social, lived context. And in isolation "Does God exist?" is even worse, because then you'll have to take into account the possibility that people - being fallible - are mistaken about His attributes, and once you go down that rabbit hole nothing remains to make a proposition about. You have to wait until you understand the concept well enough before you can even start to ponder it. At this point, I'm not holding my breath. But conversion experiences do happen, so who knows?

    For me, the word "God" derives its meaning entirely from its lived social context. And as such, I found the grid-based approach makes it easier for me to organise the social environment, for example, because there are theists who share my sense of the unknowability of God, but are somehow able to endow mystery with metaphysical significance, something I fail to do. Basically, I don't know what it's like to believe in God.

    Personally, I've never seen an argument for God that's convincing, and I've never seen an argument against God that's convincing. The ontological argument sounds silly, the problem of evil isn't a problem, etc. Now, I'm basically a relativist. We create our worldviews as we live in the world. So if I grew up with my worldview, but at some point my concept of God just stopped growing along with it, it's no surprise that all the God-concepts I can muster are childish. Basically, when the ontological argument looks silly to me, it's just a symptom of the underlying underdeveloped concept.

    This sort of relativism is not without its problems though. Crucially, it's very hard to figure out how much about the differences in worldviews is down to personality differences, how much to personal experience/history, and how much to semantics and usage.

    The difference between "atheist/agnostic" in different usages is pretty transparent to me. I can translate between the concepts, but since I've been using the grid-based approach for around 15 years now, I'm biased towards this one - by habit. The difference between "God exists," and "God doesn't exist," is semantically opaque to me, though the logical structure suggests they're opposites. And at this point I have to remember that all the meaning I can assign comes from the term's social context. I'd expect for a theist the difference between "God exists," and "God doesn't exist," is clear as day, and they may suspect at this point I'm just bullshitting around. I'm not. This sort of stuff really does go on in my head.

    If being an agnostic requires understanding how the world would change if a God existed, then I can't be an agnostic. And if being an atheist requires understanding what it is that doesn't exist when you say "God doesn't exist," then I can't be an atheist. There are a lot of questions like these, and none of them mean much to me. A binary like "believes in God/doesn't believe in God" is about social behaviour, which is observable, and easy to understand. Thus it's more useful to me as a comparative, social term.

    So if I have to choose between "God exists," and "God doesn't exist," I'll definitely choose the latter, though I'd rather not choose. This is not an expression of likelihood, though; it's that if I said the former in the context of my day-to-day life, people would have expectations about my behaviour that won't pan out. I don't go to church, I don't pray, the "Word of God" carries no weight with me, etc. As a proposition, "God doesn't exist," is simply more compatible with my behaviour. None of this says anything about what I actually do believe, except what you can glean from what I have to deal with, and how I deal with it.

    I worry that this amounts mostly to meaningless babble, but I'm not sure I can do better.
  • About This Word, “Atheist”
    That MISTAKE is the entire reason for the controversy...a reason you seem willing to simply disregard, Dawn.Frank Apisa

    Yes, I'm perfectly willing to disregard this "mistake". First, and foremost, I'm willing to disregard this "mistake" because etymology isn't destiny. I'm willing to use the word atheist this way, because a lot of people use the word this way, and because I like it.

    Whether or not this is an actual honest mistake, or whether it's a series of little mistakes, or a politically motivated deliberate re-interpretation, or whatever else might have led to the current usage doesn't matter much to me at this point.

    But apart from this, I'm really not sure how you think language works, or what etymology does, when you're saying this in a follow-up post:

    "Anti" has a specific meaning. The letter "a" at the beginning of a word does not. Agreed?Frank Apisa

    I just don't know how you can say this. Anti- is a prefix with a determined meaning, and a- is also a prefix with a determined meaning (although there's more than one "a-", according to the etymology site you're linking to).

    Of course, a word-initial "a" isn't always a prefix. It's not in "aardvark", to use the example from your reply to me. The a- in atheist and the a- in agnostic are the same prefix.

    "Agreed" very obviously has nothing to do with greed, since the uninflected verb form is "agree". The a- is definitely a prefix, though. Etymologically a variant of "ad" as the etymology site tells me.

    The part of grammar that deals with word formation is morphology. It's important to understand morphology if you're going to do etymology.

    (a) theist, resulting in a meaning of "without a belief in any gods" IS A MISTAKE.Frank Apisa

    Yes. When the word was originally coined, we didn't tag "a-" onto "theism". But according to the link you provided it's from Greek "a-theos", and the site even specifies the "a-" as "a (3)", which refers to their own site and the link I provided above. So it basically meant "without god" rather than "without theism".

    It never happened.

    It didn't happen when the word was coined. Something happened later, or nobody would be using it like that now. You can call it a "mistake" if you like, but we'd have to go through the history of the word to see what really happened. Language is, has been, and will be messy.

    It couldn't happen, because the word "atheism" came into the English language BEFORE theism. It is an etymological construct that makes as much sense as supposing "abate" means without "bate" or "aardvark" meaning without "ardvark" or "abridge" meaning without a"bridge."

    Once again, if you're going to argue from etymology you should demonstrate a better sense of morphology. "a-bate" is the same prefix as "a-gree", and not the same prefix as "atheist". "Bate" doesn't exist, I think, as a standalone English verb, but it does survive in phrases like "with bated breath". The "a-" in "abridge" is the same "a-" again, as in "abate", and "agree". But "bridge" (Romance) in "abridge" is unrelated to the noun bridge (Germanic). The a in "aardvark" isn't a prefix at all.

    The more you talk about etymology, the less persuasive you actually become.
  • About This Word, “Atheist”
    "Rusty"? Why?David Mo

    I haven't used the "theist->agnostic->atheist" partition in years (I'd guess around 15 years, but I don't remember exactly), and I'm a creature of habit. There were some transition hiccups, but I don't remember them that well either.

    And if it reaches a deadlock, I would suggest no longer discussing it. No one can force anyone to change.Coben

    That's not the problem, really. I don't much like conflict. I've typed up replies I chose not to post pretty much since the beginning of this thread, because I was dissatisfied with them. A discussion I don't engage in can't reach a deadlock. It's more a matter of feeling like contributing but finding no opening. I'm aware it's really a personal problem of mine. But under such conditions letting it go also feels wrong. Disrespectful? Patronising? I don't know. Something in this direction.

    I replied to your post because I found it easier to open up the thread for me, but pretty much immediately after replying I felt it was maybe a bit impolite to talk about Frank Apisa rather than to him. I sometimes think I worry too much.

    This argument has just come up because some people in this forum are INSISTING that I...all other agnostics...and all babies and toddlers...

    ...must accept the descriptor ATHEIST, because some dictionaries describe it that way.
    Frank Apisa

    See, I find this terribly confusing. If I use the grid-based definition (a)theist/(a)gnostic, then of course you are an atheist under that definition. I'm aware you reject that definition, and that's fine with me. But you seem to be so vehemently against being called an atheist, that it's nearly impossible to even posit that definition. If that's the case, though, why make such a thread?

    The grid-based approach is a different descriptor attached to the same label. You're being labelled an atheist, not described as one the way you understand the term, and I'm fairly sure you understand that. So if, beyond rejecting the label, you reject the underlying descriptor - then you invalidate any opposing point of view from the get-go, and conversation is impossible.

    So:

    I do not know if gods exist or not;
    I see no reason to suspect gods CANNOT EXIST...that the existence of gods is impossible;
    I see no reason to suspect that gods MUST EXIST...that gods are needed to explain existence;
    I do not see enough unambiguous evidence upon which to base a meaningful guess in either direction...
    Frank Apisa

    Yes, this is an agnostic position, because it's about knowledge. I doubt anyone would disagree. However, the grid-based approach doesn't see agnosticism and atheism as mutually exclusive, so at this point people who use the grid-based approach don't have enough information to label you an atheist. You're definitely an agnostic, though.

    It's when you add:

    ...so I don't.Frank Apisa

    that we can start to make a guess. One reason I do remember for why I made the switch from the three-category to the four-category (grid-based) approach is that quite a few of the Roman Catholics around me also subscribe to the position that they don't know whether or not God exists. But they react differently to this: it's that lack of knowledge, they tell me, that gives meaning to their faith. Under the three-category model, they'd be theists, because they believe in God. The four-category (grid-based) approach accommodates these similarities within the categories themselves, though: agnostic atheists and agnostic theists have something in common.

    Of course, there's a trade-off: "atheist" is no longer a label for a positive belief. To get that back, you add subdivisions like "hard atheist". But there's no reason I couldn't do the equivalent under the three-category model, by subdividing theists. Which you choose will depend partly on what you're used to talking about more.

    So:

    The question ended up being: Which is the more sensible, more useful definition of the designator “atheist”…Frank Apisa

    I'm not that interested in the "more sensible" part, but the "more useful" part depends on the person and context. I personally made the switch from the three-category definition to the grid-based model simply because I like variable-based grids. You can simply expand them by adding another variable should one become relevant, for example. I like them. They fit the way I think, and so I expend less energy thinking. That's what makes them useful.

    Unfortunately, when they realized they were about to be blown out of the water in that argument…the EVERYONE people abandoned ship.Frank Apisa

    Is this a debate? If so, I'll abandon ship, too.
  • About This Word, “Atheist”
    I don't think so. Language is not a machine.Coben

    Language definitely isn't a machine. But if I use the definition of atheism that says "no belief in God," then having no belief in God is sufficient to be an atheist (aside: I don't think it's very useful to extend the term to include babies; "no belief in God" is incomplete - it's "capable of belief, but no belief in God"). So when I'm saying I'm an atheist under that definition, then I'm implying he's one, too, under that definition. I'm not insisting he use this definition. But if he's insisting that he's not an atheist period, I just don't know how to respond to that. Basically, I would have to grant him the right to use his definition, while he doesn't pay me the same courtesy. I can't call myself an atheist.

    When we're talking everyday pragmatics, how is this fair?
  • About This Word, “Atheist”
    I don't think anyone should or really can make him take that label.Coben

    I agree, but that's not the problem. If the term's going to be descriptive, it will have to apply to people according to the term's definitive traits. According to Frank Apisa's preferred definition, I'm an agnostic, but not an atheist. I'm fine with that. From around age 15 to age 35, I used that definition myself. I'm a little rusty with the term used like this, but I'm sure I can adapt. The point is, though, that I have to adapt and he doesn't. If we want to use the term as a descriptive label, we can't both use the terms as we'd naturally be inclined to. Someone has to give.

    Now, if we were talking about a particular topic, that wouldn't be a problem. Adapting is easy, because I have a context to tailor my non-native usage of the word to. The term is the topic, though. Refusing the label outright is getting in the way of the topic. A descriptive label may be more useful for some people than others, and that's worth exploring. But if it's a win-lose debate about which term is more "rational", I'm not interested. Language isn't a formal system like maths, anyway.
  • Forrester's Paradox / The Paradox of Gentle Murder
    1. It's obligatory that you not murder.
    2. (a) If you violate 1., it is obligatory that you choose a manner of execution that is gentle.
    2. (b) If you don't violate 1., it is impossible that you choose a manner of execution that is gentle.
    3. If you choose a manner of execution (of the act of murder) that is gentle, it is necessary that you commit the act of murder. (This follows from 2.(b))

    I think it's just a natural-language confusion. Under the above, "if you are obligated to murder gently, you are obligated to murder" is invalid. It ought to be: "If you are obligated to murder gently, it is necessary that you murder."

    Simply put, if, faced with choice A(a1, a2), you choose a2, and only a2 triggers choice B(b1, b2), then choosing either b1 or b2 implies that you have chosen a2. This isn't an obligation; it's a necessity.
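
    For what it's worth, the disputed step can be put in standard deontic notation (this formalisation is mine, not anything from the thread): read O as "it is obligatory that", □ as "it is necessary that", M as "you murder", and G as "you murder gently".

    ```latex
    % The paradox relies on the inheritance rule of standard deontic logic:
    % if G entails M, then an obligation to G yields an obligation to M.
    \[
      \frac{O(G) \qquad G \to M}{O(M)} \quad \text{(inheritance rule; the step rejected above)}
    \]
    % The reading proposed above keeps only the modal fact:
    % choosing G necessitates M, it does not obligate M.
    \[
      O(G),\; \Box(G \to M) \;\not\vdash\; O(M)
    \]
    ```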
  • An interesting objection to antinatalism I heard: The myth of inaction
    Exactly what he asked for when I presented the hypothetical actuallykhaled

    That's being "entitled to someone else's suffering", then, no? A cure for cancer is good only in the sense that it removes a particular source of suffering; its value is "relief". I've furthermore assumed that other people would be asked to do whatever they can to reduce the suffering of "my child" in this context. It's a morality of mutual relief, if you're not introducing something that makes it all worthwhile. There's a hidden variable here somewhere. It's not really about action/inaction. To an anti-natalist curing cancer must look like pointless busywork when you look at the big picture. In the particular situation - i.e. now that I'm already here - curing cancer can look worthwhile in comparison to other activities. But the "now that I'm here" is rather important to an anti-natalist, and I don't see what a consequentialist argument from inaction says about this.

    I don't actually know how important the now-that-I'm-here aspect is in this context. Thought experiment: You're an anti-natalist. You come across an unconscious man in a wintry street who'll freeze to death if you don't intervene. Obviously you can't ask for consent. Should you save his life? In what ways is this situation different from a non-existent, potential child? What difference does the now-that-he's-here aspect make? I have no answer to that question, but it's intuitive to non-antinatalist me that not giving birth isn't the moral equivalent of letting someone die.
  • An interesting objection to antinatalism I heard: The myth of inaction
    Could you elaborate. I just don't get what you're saying. Where did entitled to someone else's suffering come from?khaled

    Sorry for being unclear. That's what happens when I edit my post too much. Normally I just close the window, but this time I somehow posted it. I'm not sure I can do a much better job explaining myself, but I'll try.

    It's easiest, I think, to start from an example, so let's go with this:

    My interlocutor went so far as to say that if I knew my child would cure cancer and didn't have said child then I am a direct cause that cancer is still around and thus, have done something wrong.khaled

    There is, I think, a fundamental difference in world view between what this person said, and what an anti-natalist would say, and this difference remains unaddressed.

    Cancer is a form of suffering, but not the only one. Your interlocutor sees suffering as a problem to be solved, but an anti-natalist sees cancer as a symptom of a larger problem that cannot be solved. Anyone who would choose to live despite such suffering is making a hypothetical choice, and one that someone who would forgo being born under such conditions would not make (maybe; I'm not an anti-natalist, and I'm not an expert on anti-natalism either).

    So what your interlocutor and my imagined anti-natalist have in common is that they view cancer as a form of suffering. Cancer as a form of suffering has a different status in their respective world views, though. For your interlocutor, for example, the struggle against cancer might be a goal that gives meaning to their life. But for an anti-natalist it might be part of the package of suffering that comes with cancer: a tedious necessity, something to do. And it's also a sisyphean task, not because you can't cure cancer, but because even if you cure cancer, there are plenty of other forms of suffering to take its place.

    From this point of view, an anti-natalist could accept that he's partly responsible for the continued existence of cancer ("direct cause" is a stretch, but I don't want to address this here) without missing a beat. It's not an argument against anti-natalism. While you're around, you might as well cure cancer. But all you accomplish is shifting the balance of suffering around a little. Most suffering isn't anything as extraordinary as cancer - suffering is a banal fact of existence, and your interlocutor might look a little like Don Quijote. On the other hand, my imagined anti-natalist would look like a defeatist to your interlocutor. Someone who gives up way too soon, dignifies his laziness as a sort of philosophical suffering, and so on. There is no common ground on which they can have an argument.

    I'm hoping that if we focus on the concept of responsibility, we can create a common ground. Responsibility is always responsibility to someone (someone else or yourself). It's a way of talking about demands, negotiating consensus, and so on. For example, an anti-natalist might have to commit to the proposition that their primary responsibility is to their child, and that's something you can talk about. This opens up questions about how to abstract (for whom is "getting rid of cancer" good, both in particular and in general?). It becomes a discussion about who makes what demands of whom.

    So for the sake of argument I (roleplaying an anti-natalist) know that my child would cure cancer. What else do I know? Let's say I know that my child's attitude towards life would be such that he wouldn't have chosen to be born if such a choice were possible. What then? Are you asking me to put the cure for cancer over my child? Are you asking my child to suffer through a life he doesn't want just so he can cure cancer?

    So someone's suffering from cancer. My non-existent child would have cured cancer. So I share a part of the responsibility to that person for their suffering of cancer. But so do the parents who gave birth to that person. Your interlocutor doesn't address that latter part at all, and in consequence there's no way to talk about the balance of values involved. Conceding responsibility turns an anti-natalist into a villain with little recourse to appeal. It's a judgment, not an argument.

    (Note that I'm having as much of an issue with "hypothetical consent before birth" as I do with your interlocutor's "direct cause". I'm not really taking sides here, even though I have to admit that my sympathies tend more towards the anti-natalist position.)

    I hope I'm making a little more sense in this post. It's not an easy topic for me to discuss as I'm not confident that I represent consent-based anti-natalism correctly in the first place, and so I keep second guessing myself, which makes it hard to keep my thoughts straight.
  • An interesting objection to antinatalism I heard: The myth of inaction
    Antinatalism, at least most versions I have seen, rely on the assumption that not having children is a net neutral act. As in it cannot harm or benefit anyone. But then someone made the case that there is no such thing as "inaction". By choosing to not have children, I become a causal factor in harming people my child would have helped so one cannot say that by not having children I am actually not doing anything wrong. While this does imply that there are situations where people would be wrong not to have children (which I find ridiculous) it does pose an interesting question in my opinion about what "inaction" exactly is.khaled

    Doesn't anti-natalism focus on the responsibility of a parent to a child? An unborn child is obviously not capable of consent, so you're responsible for any harm that comes to your child by the act of making said harm possible.

    I'm not an anti-natalist myself, but I think the argument doesn't quite work, as it's about your child's responsibility to others, and I'm fairly sure that under anti-natalist tenets this would amount to a "chain of suffering", or a morality of mutual relief: you should suffer so as to reduce someone else's suffering, and in turn you're entitled to someone else's suffering to reduce yours. You could just cut out all the suffering at the root and simply not be born. I don't see the argument working. At best it amounts to a stalemate between two unexpressed "life is/isn't worth living" points of view. If life isn't worth living, then any pleasure is a temporary stop-gap; if life's worth living, then suffering is an opportunity for growth. Two people seeing the same world in very different terms would have a different view on action/inaction, too.

    If there's responsibility, it's always responsibility to someone, and if there's no-one, responsibility can't trigger. The argument from inaction doesn't change that, and it sounds like people should suffer so they can ease each other's suffering.
  • Probability is an illusion
    Your comments are basically about practical limitations and these can be safely ignored because, as actual experimentation shows, even a standard-issue die/coin behaves probabilistically.TheMadFool

    On the one hand, you say that practical limitations can be safely ignored, and on the other hand you wish to appeal to actual experimentation. You have to choose one. Practical limitations may not be important to the law of large numbers when it comes to an ideal die, but they're certainly vitally important to actual experimentation. That's a theoretical issue, by the way: the universe we live in is only a very small sample compared to the infinite number of throws, and what any sample we throw in the real world converges to is the actual distribution of the variable, and not the ideal distribution (though the two can and often will overlap).

    More importantly, though, since you're talking about determinism, you're actually interested in practical limitations and how they relate to probability. It's me who says practical limitations are unimportant to the law of large numbers, because it's an entirely mathematical concept (and thus entirely logical). Not even a universe in which nothing but sixes are thrown would have anything of interest to say about the law of large numbers.

    I'd say the core problem is that without a clearly defined number of elements in a set (N), you have no sense of scale. How do you answer the question whether all the die throws in the universe are a "large number" when you're talking about a totality of infinite tries? If you plot out tries (real or imagined, doesn't matter) you'll see that the curve doesn't linearly approach the expected value but goes up and down and stabilises around it. If all the tries in the universe come up 6, this is certainly unlikely (1/6^N; N = number of dice thrown in the universe), but in the context of an ideal die thrown an infinite number of times, this is just a tiny local divergence.
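
    To illustrate that up-and-down stabilisation, here's a minimal simulation sketch (the sample sizes and function name are mine, purely for illustration):

    ```python
    import random

    def running_means(num_throws: int, seed: int = 0) -> list[float]:
        """Running average of fair-die throws; it wobbles around 3.5
        rather than approaching it in a straight line."""
        rng = random.Random(seed)
        total = 0
        means = []
        for n in range(1, num_throws + 1):
            total += rng.randint(1, 6)  # one ideal die throw
            means.append(total / n)
        return means

    means = running_means(100_000)
    for n in (10, 100, 1_000, 10_000, 100_000):
        print(n, round(means[n - 1], 3))
    ```

    The early averages scatter widely; the later ones hover near 3.5, without the path there ever being linear.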

    That ‘law’ states that the average of outcomes will converge towards 3.5, not towards 1/6 times the number of trials (that wouldn’t make sense).leo

    The two of you work with different x's. Your x is the outcome of a die throw {1,2,3,4,5,6}. His x is the number of odd die-throws in a sample of size T. He's using the probability of throwing an odd number as the expected value. Explaining the particulars here is beyond me, as I've been out of the loop for over a decade, but he's basically using an indicator function for x (where the value = 1 for {1,3,5} and 0 for {2,4,6}).

    As far as I can tell, what he's doing here is fine.
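
    If it helps, here's a rough sketch of the two readings (my own illustration, not either poster's actual maths):

    ```python
    import random

    rng = random.Random(42)
    throws = [rng.randint(1, 6) for _ in range(100_000)]

    # One x: the face value itself; the sample mean converges towards 3.5.
    print(sum(throws) / len(throws))

    # The other x: an indicator, 1 for an odd throw and 0 otherwise;
    # the sample mean converges towards the probability of an odd throw, 1/2.
    odd = [1 if t % 2 else 0 for t in throws]
    print(sum(odd) / len(odd))
    ```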
  • Probability is an illusion
    My latest post seems to have come out more technical than I meant it to. I went through a lot of drafts, discarded a lot, and ended up with this. But there's a point in there somewhere:

    A. The usual way we throw the die - randomly - without knowing the initial state. The outcomes in this case would have a relative frequency that can be calculated in terms of the ratio between desired outcomes and total number of possible outcomes. It doesn't get more probabilistic than this does it?

    B. If we have complete information about the die then we can deliberately select the initial states to produce outcomes that look exactly like A above with perfectly matching relative frequencies.
    TheMadFool

    The scenarios A and B in my previous post was to explain that deterministic systems can behave probabilistically and I think it accomplished its purpose.TheMadFool

    It's clear to me that you think scenarios A and B explain why deterministic systems "behave probabilistically", but as leo pointed out "behaving probabilistically" isn't well defined, and in any case the maths works the same in both A and B.

    You use terms like "the initial state", and "complete information about the die", but those terms aren't well defined. "The initial state" is the initial state of a probabilistic system, but that's pure math and not the real world. We use math to make statements about the real world. The philosophy here is: "How does mathematics relate to the real world?"

    The mathematical system of the probability of a fair die has a single variable: the outcome of a die throw. There is no initial state of the system, you just produce random results time and again. The real world always falls short of this perfect system. You understand this, which is why you're comparing ideal dice to real dice. "Initial states" aren't initial states of ideal dice, but of real dice. (I understand you correctly so far, no?)

    Now to describe a real die you need to expand the original system to include other variables. That is, you expand the original ideal system into a new ideal system, but one with more variables taken into account. This ideal system will have an "initial state", but it's - again - an ideal system, and if you look at the "initial state", you'll see that the variables that make up the initial state can be described, too. This is important, because you're arriving at the phrase "complete information about the die" and you go on to say that "we can deliberately select the initial states." But there are systematic theoretical assumptions included in this in such a way that which initial states we pick is not part of the system we use to describe the die throw. (But, then, is the information really "complete"? What do you mean by "complete"?)

    So now to go back to my original post:

    A variable has an event space, and that event space has a distribution.Dawnstorm

    Take a look at a die. A die has six sides, and there are numbers printed on every side, and it's those numbers we're interested in. This is what makes the event space:

    1, 2, 3, 4, 5, 6

    The distribution is just an assumption we make. We assume that every one of those outcomes is equally likely. This isn't an arbitrary assumption: it's a useful baseline to which we can compare any deviation. If a real die, for example, were most likely to throw a 5 due to some physical imbalance, then it's not a fair die. The distribution changes.

    In situations such as games of chance we want dice to behave as closely to a fair die as possible. We can approximate this even without knowing each die's distribution, for example by a simple rule: never throw the same die twice. The idea here is that we introduce a new random variable: which die to throw. Different dice are likely to have different biases, so individual biases won't have as much of an effect on the outcome. In effect, we'd be using many different real dice to simulate an ideal one.

    And now we can make the assumption that biases cancel each other out, i.e. there are equally many dice biased towards 1 as towards 2, etc. This too is an ideal assumption with its own distribution, and maybe there's an even more complicated system which evens out the real/ideal difference for this one, too. For puny human brains this gets harder and harder every step up. But the more deterministic a system is, the easier it gets to create such descriptive systems. And with complete knowledge of the entire universe, you can calculate every probability very precisely: you don't need to rely on assumptions, and the distinction between ideal and real dice disappears.
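
    A quick sketch of that mixing idea (the bias model and all the numbers here are my own toy choices):

    ```python
    import random
    from collections import Counter

    rng = random.Random(1)

    def biased_weights() -> list[float]:
        """One die's bias: randomly perturbed face weights, renormalised.
        The perturbations are symmetric, so across many dice the biases
        cancel out on average."""
        w = [1 + rng.uniform(-0.5, 0.5) for _ in range(6)]
        total = sum(w)
        return [x / total for x in w]

    dice = [biased_weights() for _ in range(1_000)]

    counts = Counter()
    for _ in range(600_000):
        die = rng.choice(dice)  # a die at random each throw ("never the same die twice", roughly)
        counts[rng.choices(range(1, 7), weights=die)[0]] += 1

    print({face: round(c / 600_000, 3) for face, c in sorted(counts.items())})
    # Each face comes out near 1/6, although every individual die is biased.
    ```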

    Under perfect knowledge of a deterministic system, probability amounts to the frequentist description of a system of limited variables. An incomplete frequentist description of a deterministic system will always include probabilities, because of this. If, however, you follow the chain of causality for a single throw of a die, what you have isn't a frequentist description, and probability doesn't apply. They're just different perspectives: how the throw of a die relates to all the other events thus categorised, and how it came about. There's no contradiction.
  • Probability is an illusion
    There is no confusion at all. A die is deterministic and it behaves probabilistically. This probably needs further clarification.

    A die is a deterministic system in that each initial state has one and only one outcome but if the initial states are random then the outcomes will be random.
    TheMadFool

    A variable has an event space, and that event space has a distribution. How you pick a value for the variable determines whether the variable is independent or dependent. An independent variable can be a random variable, and a dependent variable can depend on one or more random variables.

    How we retrieve the values for the variable in an experiment (i.e. if it's a random variable or not) has no influence on the distribution of the event space of the variable, but it can introduce a bias into our results.

    That the same variable with the same distribution can have its values computed or chosen at random in different mathematical contexts is no mystery. It's a question of methodology.
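
    To make this concrete, here's a minimal toy sketch (entirely my own construction, not TheMadFool's scenario): a deterministic mapping from initial state to outcome, whose outcomes distribute uniformly whether the initial states are drawn at random or selected deliberately.

    ```python
    import random
    from collections import Counter

    def die_outcome(initial_state: int) -> int:
        """Toy deterministic 'die': each initial state yields exactly one face.
        (A stand-in for the physics; a real mapping would be vastly richer.)"""
        return initial_state % 6 + 1

    rng = random.Random(7)

    # Random initial states: the outcomes look random, roughly uniform.
    random_sample = Counter(die_outcome(rng.randrange(10**9)) for _ in range(60_000))

    # Deliberately selected initial states: the same kind of frequencies.
    chosen_sample = Counter(die_outcome(s) for s in range(60_000))

    print(sorted(random_sample.items()))
    print(sorted(chosen_sample.items()))
    # The mapping never changes; only how we pick its inputs does.
    ```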
  • Understanding suicide.
    I really don't know what you mean by "facing suicide". Usually (in my case), there's a lot of anxiety when those thoughts appear.Wallows

    Not so much facing suicide, as facing the suicidal thoughts and the emotions that come with them - that anxiety, for example. What that means for you in praxis I don't really know. I'm not even saying that medication is a bad idea. Just make sure not to enter into an unhealthy co-dependent relationship with the pharma industry, maybe? I don't know.

    I'll probably be addressing what "facing suicidal thoughts" meant for me with some of your other questions.

    That's pretty dark, man.Wallows

    Not really darker than the underlying suffering, though. If it works it works, and if it doesn't it doesn't. There's probably no solution that works for everyone. Not even chocolate.

    What do you mean by "psychological disincentive"?Wallows

    When you think of doing something, some aspects draw you towards the action (incentives), and some push you away from it (disincentives). I call them psychological, because unlike real-life policy (such as, say, taxes), these (dis)incentives are just part of how you react to the world. They're basically your bundle of values.

    Please elaborate. I seem to be encompassed by fear lately.Wallows

    The difference between fear of dying and fear of death is actually a pretty good opportunity to demonstrate psychological incentives and disincentives:

    So I have these unpleasant emotions: anxiety, disgust with myself and parts of the world, exhaustion... I don't want to feel them. The way I imagine death is this: no feelings at all. Those are gone, too. That's an incentive.

    Now, logically I'd also get rid of good feelings. But back then that didn't function as a psychological disincentive. Rather than something I wanted to keep, that felt like an acceptable price to pay.

    However, to get to the desired state of death I have to die, and dying is messy. I can't help but think of it as pain. The least painful method is probably overdosing on barbiturates of some sort, but - apart from being unreliable - I was imagining messing up and feeling really queasy or maybe having convulsions. None of this was based on research. I just had this association of dying with pain (or at least with an otherwise unpleasant process, like queasiness).

    So basically the state of death worked as an incentive, while the process of dying worked as a disincentive. It's not a cost/benefit calculation. Nothing that rational; it's a felt attraction existing alongside a felt repulsion.

    To this day I'm not afraid of death. If I look forward a millennium, I realise I'll no longer be around. That doesn't affect me in any way, really. If I knew I had a fatal, incurable illness, I'd adapt pretty quickly to the new deadline. However, the illness itself? The process of dying? It sort of depends on the particulars, but in general this sounds like a rather unpleasant stretch of life. (Note that I don't have a shred of belief in any afterlife. Things might be different, if I thought death was just life v2.0.)

    Did time or your age help you see the whole issue as some childish desire or fantasy?Wallows

    Not really, no. You see, I always, even back then, thought I was being childish. It didn't help. If anything it just added a layer to my self-loathing. If anything, I'm less judgemental about my younger self now than I was back then.

    Remember how I said near the start of this post that I'd address the question of what "facing suicidal thoughts" meant for me with another question? Well, it fits here. As I said, I was pretty hard on myself for having suicidal thoughts. Why can't I deal with life? Other people can live just fine, and I can't? What's with all those petty inner tantrums? Those anxieties of mine are so stupid! And so on.

    Facing my suicidal thoughts for me meant suspending that sort of judgement. It wasn't easy, but it was easier than to - for example - just stop being anxious. So instead of berating myself, I just thought I'd try indulging myself. With varying results. On bad days, that would lead to inner hysterics that were even harder to bear. But on good days?

    I have the mind of a storyteller. I dramatise everything. That's just how I work. But not all stories are realistic. On good days, allowing myself all those petty, nonsensical, negative feelings turned into a sort of game. If I'm going to be ridiculous, I'm going to be really ridiculous. That's a hard-to-explain process. The way I'm writing about this now sounds a lot more deliberate than it was. It was a sort of emotional escalation. The self-judgemental part of me didn't go away, but it sort of transformed from judge to fiction audience. In a sense the process gradually estranged me from my suffering, until it felt like some absurd spectacle. It's a way to non-jeeringly laugh at myself, by ramping up the drama and making it less and less believable.

    It's not something I tried to do. I think the bad days that ended in hysteria would have put me off that methodology, if it wasn't something that... just happened. And I'm saying all of this now, looking back, so a lot of it will look neater in memory than it actually was while living through it. But that's roughly how I remember it playing out. I've been trying to think of an illustrative example, but I can't seem to get it right anymore. Maybe I should be thankful for that.
  • Understanding suicide.
    I think the best way to avoid suicidal thoughts is to first take some antidepressant, and engage in therapy or some constructive endeavor if one has enough motivation to do so.Wallows

    Should you avoid suicidal thoughts in the first place? Wouldn't it be better to face them? What if someone uses suicidal thoughts for some sort of catharsis, like roleplaying, rather than as premeditation for an act? The role suicidal thoughts play in the genesis of a suicide is interesting and not necessarily as straightforward as "I have suicidal thoughts, therefore I want to die."

    Some suicidal thoughts never lead to an actual suicide. But even suicidal thoughts that are not connected to an intention to kill oneself can lay the groundwork for a future suicide - as you familiarise yourself with the thought patterns. An example would be: "having a favourite hypothetical method" --> "being comfortable with the method, thus removing one psychological disincentive."

    I was a suicidal teen. I'm now nearly fifty and don't consider myself suicidal anymore, but I do still have the thought habits. I can tell a difference in the quality; I'm not serious. (They're more over-the-top, exaggerated; a bit like I'm parodying my younger self.) Btw, I don't have a hypothetical favourite method. All methods suck. I think that's one of the major reasons I'm still alive. Too afraid of the system shock that comes with dying (painful methods), and of waking up after an unsuccessful attempt and having to deal with the fallout (unreliable methods). As a formerly suicidal person I can tell you that fear of dying and fear of death are not the same thing. I have the former but not the latter.

    Talking about my non-serious suicidal thoughts is difficult, because of the taboo that surrounds the topic. I can be pretty casual about it, and people often don't know how to react to that. I usually have to explain that, no, I don't intend to kill myself, and, no, I don't intend to make fun of the topic (even though it sometimes sounds like it). I've just learned to live through my suicidal phase, and now suicidal thoughts are some sort of cathartic tool (and that sometimes includes black humour).

    As a result, talk about suicide entirely in terms of prevention feels isolating. It did back when I was suicidal (it felt like people were more interested in preventing a suicide than in trying to understand), and it does now (because of the disconnect). When it comes to fiction I react best to stuff that depicts emotional difficulty without taking sides (e.g. the film 'night, Mother with Sissy Spacek and Anne Bancroft), or with absurd comedy set-ups (e.g. the suicide arc in the anime Welcome to the NHK). I react worst to shows that idealise a single solution.

    In terms of this thread, I don't think it's helpful to seek a single solution to the problem. I mean, suicides range from the guy who walks in on his family to demonstratively shoot himself, to the guy who kills himself and leaves behind a binder explaining himself, a lot of articles about dealing with loss, as well as a list of therapists and help-lines. Suicide is really just a single puzzle piece in a person's life and you won't understand that single suicide without understanding how it fits in. You can abstract, but that would involve multiple non-exclusive categories, I think.

    Basically, you can't understand a person's suicide without understanding that person's life. A life can have problems. Suicide doesn't solve those problems, but it does end them (and also prevents a solution, though that's moot by then). Focussing on the suicide ("you shouldn't kill yourself, because...") can come across as privileging the topic over the underlying problems (as in "It's fine if you suffer, as long as you don't inconvenience me with your corpse").

    Not all suicides are problem-centred, though. My own phase was more akin to what Pfhorrest describes in his post above. Problems, here, are more nuisances - life's a struggle and there's no reward. Depression is actually welcome, because it's more comfortable than the anxiety of what sort of contradictory demands will come your way next. It's no big deal, really, you can push through that as you always have. But you become increasingly exhausted. People notice this, so they try to be nice to you, and through this process the things you enjoy turn into obligations, too, and eventually you just forget how to want things, even though you're an expert in how to not want things. Eventually you just feel empty. That's fine during a depression, since you don't feel any sort of vigour anyway. During bouts of depression it's easy to dismiss life. You're not going to kill yourself; it's not worth the bother. But as it recedes? Or if you feel it coming? That's when there's an inner tension that's nearly unbearable; it's a sort of unspecified can't-do-anything-but-have-to anxiety. During that phase you're not likely to make any preparations, though. Half-hearted attempts would be the most likely (though that was never my style). You prepare while you're fairly calm and even cheerful. In my case it ended with research, since I never found a method I liked. (I also wondered whether I really was suicidal, or if that was just my inner drama queen. Now that I'm definitely not suicidal, I think I was.)

    Basically, I didn't want to kill myself because of a specific problem, but because I was just gradually losing my grip on life.

    Suicide can be mitigated by becoming more aware of other people or thoughts.Wallows

    This definitely helped during bouts of what I call the brooding spiral. Re-focusing helped by itself, and as a bonus I tended to find out that I was asking way more of myself than nearly anyone else (though that was a lesson that usually didn't stick).
  • Probability is an illusion
    Probability, in my opinion, has to be objective or real. By that I mean it is a property of nature just as mass or volume. So, when I say the probability of an atom of Plutonium to decay is 30% then this isn't because I lack information the acquisition of which will cause me to know exactly which atom will decay or not. Rather, radioactivity is objectively/really probabilistic.TheMadFool

    I don't know whether I agree or disagree. I'm not sure what - in terms of the real world - it would mean for "probability to be real". Probability is maths, and like all maths it's applied to the real world, and so the question is whether it's useful or not rather than whether it's real or not.

    A operates with a very "small" probability system, and B with a very large one. A can expand to B, and B can collapse to A. When A expands, the probability of throwing a particular number keeps shifting until it either drops to zero or hits 1. That's just conditional probability. A's probability table would have to exhaust all possibilities.
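
    A toy sketch of that expansion (the 36-microstate model is mine, purely illustrative): conditioning on ever more knowledge moves the probability around until, at full knowledge, it's 0 or 1.

    ```python
    from fractions import Fraction

    # 36 microstates, each deterministically mapped to one face.
    def outcome(state: int) -> int:
        return state % 6 + 1

    microstates = set(range(36))

    def p_six(known: set) -> Fraction:
        """P(face = 6), given only that the true state lies in `known`."""
        return Fraction(sum(1 for s in known if outcome(s) == 6), len(known))

    print(p_six(microstates))                        # 1/6: A's small system
    print(p_six({s for s in microstates if s % 2}))  # 1/3: partial knowledge
    print(p_six({5}))                                # 1:   B's full knowledge
    print(p_six({4}))                                # 0:   B's full knowledge
    ```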

    What if the universe doesn't have an initial state, just a string of causality that breaks at some point in the past, because stuff like frequency stops working? You could only approximately describe this with a mathematical system, right? Assuming multiple possible initial states would work, but only if we can describe all those states and their relations such that they are mutually exclusive.

    So, yeah, what does it mean for probability to be real?
  • Probability is an illusion
    Good point. Anything's possible in a game of chance. However, the issue is of predictability. Person B, given he knows the initial state of the system (person A and the dice) is able to predict every outcome; implying that the system is deterministic. However, the system behaves as if that (deterministic character) isn't the case.TheMadFool

    I'm trying to figure out what you think a "probabilistic system" should look like. "The initial state of the system" is different for A and B. For A, it's simply a game of dice. For B, it's the current state of the universe. For A probability only allows six outcomes. B could know that A will die of a heart attack before he ever gets to throw the die (and his hand cramps, so the die doesn't even drop). In my view you're comparing apples and oranges. A asks "What are the odds?" and B asks "What will happen?"

    B uses the chain of causality to compute the outcome. A uses probability to compute the odds. Take the following example:

    A bag contains only red balls. You draw one of them in the hopes of it being red.

    A will use probability theory and know immediately that given that he'll successfully draw a ball it will be red (because there's only one option).

    B will have to go through multiple computations to figure out which ball A will draw and then check its colour. B will know, through this process, whether A will successfully draw a ball, if so which one, and by implication its colour.

    In this limited case, A and B will come to the same conclusion. Why? Because the probability to draw a red ball from a bag that only contains red balls is 100 %. B has a lot more information that pertains to the situation, though, including whether A will draw a ball at all.

    I'm not sure I understood you correctly, though. I'm right in assuming that B follows the chain of causality (taking into account all data he has) and doesn't encounter a truly random process (which would contradict determinism)?

    Of course, given perfect knowledge in a deterministic system, the question "What are the odds?" is superfluous, because it's always 100 %. But A has very limited knowledge.

    A and B have different perspectives: A's tends to be more efficient (but he'll have to contend with risk), and B's tends to be more accurate (but he'd probably die of old age before he finishes the computations).
  • Probability is an illusion
    This result is in agreement with the theoretical probability calculated (4/6 = 2/3 = 66.66%). In other words the system (person A and the dice) behaves like a probabilistic system as if the system is truly non-determinsitic/probabilistic.TheMadFool

    And if A threw a hundred sixes in a row it wouldn't be behaving like a probabilistic system?