• About This Word, “Atheist”
    I don't think so. Language is not a machine.Coben

    Language definitely isn't a machine. But if I use the definition of atheism that says "no belief in God," then having no belief in God is sufficient to be an atheist (aside: I don't think it's very useful to extend the term to include babies; "no belief in God" is incomplete - it's "capable of belief, but no belief in God"). So when I say I'm an atheist under that definition, I'm implying he's one, too, under that definition. I'm not insisting he use this definition. But if he's insisting that he's not an atheist period, I just don't know how to respond to that. Basically, I would have to grant him the right to use his definition, while he doesn't pay me the same courtesy. I can't call myself an atheist.

    When we're talking everyday pragmatics, how is this fair?
  • About This Word, “Atheist”
    I don't think anyone should or really can make him take that label.Coben

    I agree, but that's not the problem. If the term's going to be descriptive, it will have to apply to people according to the term's defining traits. According to Frank Apisa's preferred definition, I'm an agnostic, but not an atheist. I'm fine with that. From around age 15 to age 35, I used that definition myself. I'm a little rusty with the term used like this, but I'm sure I can adapt. The point is, though, that I have to adapt and he doesn't. If we want to use the term as a descriptive label, we can't both use it as we'd naturally be inclined to. Someone has to give.

    Now, if we were talking about a particular topic, that wouldn't be a problem. Adapting is easy, because I have a context to tailor my non-native usage of the word to. The term is the topic, though. Refusing the label outright is getting in the way of the topic. A descriptive label may be more useful for some people than others, and that's worth exploring. But if it's a win-lose debate about which term is more "rational", I'm not interested. Language isn't a formal system like maths, anyway.
  • Forrester's Paradox / The Paradox of Gentle Murder
    1. It's obligatory that you not murder.
    2. (a) If you violate 1., it is obligatory that you choose a manner of execution that is gentle.
    2. (b) If you don't violate 1., it is impossible that you choose a manner of execution that is gentle.
    3. If you choose a manner of execution (of the act of murder) that is gentle, it is necessary that you commit the act of murder. (This follows from 2.(b))

    I think it's just a natural-language confusion. Under the above, "if you are obligated to murder gently, you are obligated to murder" is invalid. It ought to be: "If you are obligated to murder gently, it is necessary that you murder."

    Simply put, if, faced with choice A(a1, a2), you choose a2, and only a2 triggers choice B(b1, b2), then choosing either b1 or b2 implies that you have chosen a2. This isn't an obligation; it's a necessity.
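
    For reference, and from memory, the standard formalisation of the paradox runs roughly like this (it numbers and phrases the premises a little differently than I did above), with the step I'm objecting to at the end:

    \begin{align*}
    &1.\; O(\neg m) && \text{it's obligatory that you not murder}\\
    &2.\; m \to O(g) && \text{if you murder, you ought to murder gently}\\
    &3.\; g \to m && \text{murdering gently entails murdering}\\
    &4.\; m && \text{you do murder}\\
    &5.\; O(g) && \text{from 2 and 4}\\
    &6.\; O(g) \to O(m) && \text{from 3, via the inheritance rule: } \vdash g \to m \;\Rightarrow\; \vdash O(g) \to O(m)\\
    &7.\; O(m) && \text{from 5 and 6, contradicting 1}
    \end{align*}

    The move I'm rejecting is step 6: given 3, all that follows about someone obligated to murder gently is that carrying out the gentle murder necessitates murdering - not that murdering becomes obligatory.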
  • An interesting objection to antinatalism I heard: The myth of inaction
    Exactly what he asked for when I presented the hypothetical actuallykhaled

    That's being "entitled to someone else's suffering", then, no? A cure for cancer is good only in the sense that it removes a particular source of suffering; it's value is "reflief". I've furhtermore assumed that other people would be asked to do whatever they can to reduce the suffering of "my child" in this context. It's a morality of mutual relief, if you're not introducing something that makes it all worthwhile. There's a hidden variable here somewhere. It's not really about action/inaction. To an anti-natalist curing cancer must look like pointless busywork when you look at the big picture. In the particular situation - i.e. now that I'm already here - curing cancer can look like worthwhile in comparison to other activities. But the "now that I'm here" is rather important to an anti-natalist, and I don't see what a consequentialist argument from inaction says about this.

    I don't actually know how important the now-that-I'm-here aspect is in this context. Thought experiment: You're an anti-natalist. You come across an unconscious man in a wintry street who'll freeze to death if you don't intervene. Obviously you can't ask for consent. Should you save his life? In what ways is this situation different from a non-existent, potential child? What difference does the now-that-he's-here aspect make? I have no answer to that question, but it's intuitive to non-antinatalist me that not giving birth isn't the moral equivalent of letting someone die.
  • An interesting objection to antinatalism I heard: The myth of inaction
    Could you elaborate. I just don't get what you're saying. Where did entitled to someone else's suffering come from?khaled

    Sorry for being unclear. That's what happens when I edit my post too much. Normally I just close the window, but this time I somehow posted it. I'm not sure I can do a much better job explaining myself, but I'll try.

    It's easiest, I think, to start from an example, so let's go with this:

    My interlocutor went so far as to say that if I knew my child would cure cancer and didn't have said child then I am a direct cause that cancer is still around and thus, have done something wrong.khaled

    There is, I think, a fundamental difference in world view between what this person said, and what an anti-natalist would say, and this difference remains unaddressed.

    Cancer is a form of suffering, but not the only one. Your interlocutor sees suffering as a problem to be solved, but an anti-natalist sees cancer as a symptom of a larger problem that cannot be solved. Anyone who would choose to live despite such suffering is making a hypothetical choice, and a choice that someone who would forgo being born under such conditions would not make (maybe; I'm not an anti-natalist, and I'm not an expert on anti-natalism either).

    So what your interlocutor and my imagined anti-natalist have in common is that they view cancer as a form of suffering. Cancer as a form of suffering has a different status in their respective world views, though. For your interlocutor, for example, the struggle against cancer might be a goal that gives meaning to their life. But for an anti-natalist it might be part of the package of suffering that comes with cancer: a tedious necessity, something to do. And it's also a Sisyphean task, not because you can't cure cancer, but because even if you cure cancer, there are plenty of other forms of suffering to take its place.

    From this point of view, an anti-natalist could accept that he's partly responsible for the continued existence of cancer ("direct cause" is a stretch, but I don't want to address this here) without missing a beat. It's not an argument against anti-natalism. While you're around, you might as well cure cancer. But all you accomplish is to shift the balance of suffering around a little. Most suffering isn't anything as extraordinary as cancer - suffering is a banal fact of existence, and your interlocutor might look a little like Don Quijote. On the other hand, my imagined anti-natalist would look like a defeatist to your interlocutor. Someone who gives up way too soon, dignifies his laziness as a sort of philosophical suffering, and so on. There is no common ground on which they can have an argument.

    I'm hoping that if we focus on the concept of responsibility, we can create a common ground. Responsibility is always responsibility to someone (someone else or yourself). It's a way of talking about demands, negotiating consensus, and so on. For example, an anti-natalist might have to commit to the proposition that their primary responsibility is to their child, and that's something you can talk about. This opens up questions about how to abstract (for whom is "getting rid of cancer" good, both in particular and in general?). It becomes a discussion about who makes what demands of whom.

    So for the sake of argument I (roleplaying an anti-natalist) know that my child would cure cancer. What else do I know? Let's say I know that my child's attitude towards life would be such that he wouldn't have chosen to be born if such a choice were possible. What then? Are you asking me to put the cure for cancer over my child? Are you asking my child to suffer through a life he doesn't want just so he can cure cancer?

    So someone's suffering from cancer. My non-existent child would have cured cancer. So I share a part of the responsibility to that person for their suffering from cancer. But so do the parents who gave birth to that person. Your interlocutor doesn't address that latter part at all, and in consequence there's no way to talk about the balance of values involved. Conceding responsibility turns an anti-natalist into a villain with little recourse to appeal. It's a judgment, not an argument.

    (Note that I'm having as much of an issue with "hypothetical consent before birth" as I do with your interlocutor's "direct cause". I'm not really taking sides here, even though I have to admit that my sympathies tend more towards the anti-natalist position.)

    I hope I'm making a little more sense in this post. It's not an easy topic for me to discuss as I'm not confident that I represent consent-based anti-natalism correctly in the first place, and so I keep second guessing myself, which makes it hard to keep my thoughts straight.
  • An interesting objection to antinatalism I heard: The myth of inaction
    Antinatalism, at least most versions I have seen, rely on the assumption that not having children is a net neutral act. As in it cannot harm or benefit anyone. But then someone made the case that there is no such thing as "inaction". By choosing to not have children, I become a causal factor in harming people my child would have helped so one cannot say that by not having children I am actually not doing anything wrong. While this does imply that there are situations where people would be wrong not to have children (which I find ridiculous) it does pose an interesting question in my opinion about what "inaction" exactly is.khaled

    Doesn't anti-natalism focus on the responsibility of a parent to a child? An unborn child is obviously not capable of consent, so you're responsible for any harm that comes to your child by the act of making said harm possible.

    I'm not an anti-natalist myself, but I think the argument doesn't quite work, as it's about your child's responsibility to others, and I'm fairly sure that under anti-natalist tenets this would amount to a "chain of suffering", or a morality of mutual relief: you should suffer so as to reduce someone else's suffering, and in turn you're entitled to someone else's suffering to reduce yours. You could just cut out all the suffering at the root and simply not be born. I don't see the argument working. At best it amounts to a stalemate between two unexpressed "life is/isn't worth living" points of view. If life isn't worth living, then any pleasure is a temporary stop-gap; if life's worth living, then suffering is an opportunity for growth. Two people seeing the same world in very different terms would have a different view on action/inaction, too.

    If there's responsibility, it's always responsibility to someone, and if there's no-one, responsibility can't trigger. The argument from inaction doesn't change that, and it sounds like people should suffer so they can ease each other's suffering.
  • Probability is an illusion
    Your comments are basically about practical limitations and these can be safely ignored because, as actual experimentation shows, even a standard-issue die/coin behaves probabilistically.TheMadFool

    On the one hand, you say that practical limitations can be safely ignored, and on the other hand you wish to appeal to actual experimentation. You have to choose one. Practical limitations may not be important to the law of large numbers when it comes to an ideal die, but they're certainly vitally important to actual experimentation. That's a theoretical issue, by the way: the universe we live in is only a very small sample compared to the infinite number of throws, and what any sample we throw in the real world converges to is the actual distribution of the variable, and not the ideal distribution (though the two can and often will overlap).

    More importantly, though, since you're talking about determinism, you're actually interested in practical limitations and how they relate to probability. It's me who says practical limitations are unimportant to the law of large numbers, because it's an entirely mathematical concept (and thus entirely logical). Not even a universe in which nothing but sixes are thrown would have anything of interest to say about the law of large numbers.

    I'd say the core problem is that without a clearly defined number of elements in a set (N), you have no sense of scale. How do you answer the question whether all the die throws in the universe amount to a "large number" when you're talking about a totality of infinite tries? If you plot out tries (real or imagined, doesn't matter) you'll see that the curve doesn't linearly approach the expected value but goes up and down and stabilises around the value. If all the tries in the universe come up 6, this is certainly unlikely (1/6^N; N = number of dice thrown in the universe), but in the context of an ideal die thrown an infinite number of times, this would just be a tiny local divergence.
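
    If it helps, here's a quick simulation sketch of that wobbling (everything in it is invented for illustration; 3.5 is the expected value of an ideal fair die):

        import random

        random.seed(0)

        throws = []
        for n in (10, 100, 10_000, 1_000_000):
            while len(throws) < n:
                throws.append(random.randint(1, 6))           # one throw of an ideal fair die
            print(n, round(sum(throws) / len(throws), 4))     # running average: wobbles around 3.5

        # And the chance that every one of N throws comes up 6 is (1/6)**N,
        # already vanishingly small for modest N:
        print((1 / 6) ** 100)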

    That ‘law’ states that the average of outcomes will converge towards 3.5, not towards 1/6 times the number of trials (that wouldn’t make sense).leo

    The two of you work with different x's. Your x is the outcome of a die throw {1,2,3,4,5,6}. His x is the number of odd die-throws in a sample of size T. He's using the probability of throwing an odd number as the expected value. Explaining the particulars here is beyond me, as I've been out of the loop for over a decade, but he's basically using an indicator function for x (where the value = 1 for {1,3,5} and 0 for {2,4,6}).

    As far as I can tell, what he's doing here is fine.
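
    Just to illustrate the difference between the two x's (a throwaway sketch, nothing more):

        import random

        random.seed(1)
        sample = [random.randint(1, 6) for _ in range(100_000)]

        # leo's x: the outcome of a throw; its sample mean heads towards 3.5
        print(sum(sample) / len(sample))

        # the other x: an indicator that's 1 for an odd throw and 0 for an even one;
        # its sample mean heads towards P(odd) = 1/2
        indicator = [1 if outcome % 2 else 0 for outcome in sample]
        print(sum(indicator) / len(indicator))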
  • Probability is an illusion
    My latest post seems to have come out more technical than I meant it to. I went through a lot of drafts, discarded a lot, and ended up with this. But there's a point in there somewhere:

    A. The usual way we throw the die - randomly - without knowing the initial state. The outcomes in this case would have a relative frequency that can be calculated in terms of the ratio between desired outcomes and total number of possible outcomes. It doesn't get more probabilistic than this does it?

    B. If we have complete information about the die then we can deliberately select the initial states to produce outcomes that look exactly like A above with perfectly matching relative frequencies.
    TheMadFool

    The scenarios A and B in my previous post was to explain that deterministic systems can behave probabilistically and I think it accomplished its purpose.TheMadFool

    It's clear to me that you think scenarios A and B explain why deterministic systems "behave probabilistically", but as leo pointed out "behaving probabilistically" isn't well defined, and in any case the maths works the same in both A and B.

    You use terms like "the initial state" and "complete information about the die", but those terms aren't well defined. "The initial state" is the initial state of a probabilistic system, but that's pure math and not the real world. We use math to make statements about the real world. The philosophy here is: "How does mathematics relate to the real world?"

    The mathematical system of the probability of a fair die has a single variable: the outcome of a die throw. There is no initial state of the system; you just produce random results time and again. The real world always falls short of this perfect system. You understand this, which is why you're comparing ideal dice to real dice. "Initial states" aren't initial states of ideal dice, but of real dice. (I understand you correctly so far, no?)

    Now, to describe a real die you need to expand the original system to include other variables. That is, you expand the original ideal system into a new ideal system, but one with more variables taken into account. This ideal system will have an "initial state", but it's - again - an ideal system, and if you look at the "initial state", you'll see that the variables that make up the initial state can be described, too. This is important, because you're arriving at the phrase "complete information about the die" and you go on to say that "we can deliberately select the initial states." But there are theoretical assumptions built into this, such that which initial states we pick is not part of the system we use to describe the die throw. (But, then, is the information really "complete"? What do you mean by "complete"?)

    So now to go back to my original post:

    A variable has an event space, and that event space has a distribution.Dawnstorm

    Take a look at a die. A die has six sides, and there are numbers printed on every side, and it's those numbers we're interested in. This is what makes up the event space:

    1, 2, 3, 4, 5, 6

    The distribution is just an assumption we make. We assume that every one of those outcomes is equally likely. This isn't an arbitrary assumption: it's a useful baseline to which we can compare any deviation. If a real die, for example, were most likely to throw a 5 due to some physical imbalance, then it's not a fair die. The distribution changes.

    In situations such as games of chance we want dice to behave as closely to a fair die as possible. We can get closer to that even without knowing each die's distribution, for example by a simple rule: never throw the same die twice. The idea here is that we introduce a new random variable: which die to throw. Different dice are likely to have different biases, so individual biases won't have as much of an effect on the outcome. In effect, we'd be using many different real dice to simulate an ideal one.

    And now we can make the assumption that biases cancel each other out, i.e. there are equally many dice biased towards 1 as towards 2, etc. This too is an ideal assumption with its own distribution, and maybe there's an even more complicated system which evens out the real/ideal difference for this one, too. For puny human brains this gets harder and harder every step up. But the more deterministic a system is, the easier it gets to create such descriptive systems. And with complete knowledge of the entire universe, you can calculate every probability very precisely: you don't need to rely on assumptions, and the distinction between ideal and real dice disappears.
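
    A toy version of the idea (the biased dice are invented, and I'm picking a die at random for every throw rather than literally never reusing one, but the principle is the same):

        import random
        from collections import Counter

        random.seed(2)

        # Six invented biased dice: die k throws face k three times as often as any other face.
        # By symmetry there are "equally many dice biased towards 1 as towards 2", so the
        # individual biases cancel out once we add the new random variable: which die to throw.
        biased_dice = [[1, 2, 3, 4, 5, 6] + [face] * 2 for face in range(1, 7)]

        counts = Counter()
        n_throws = 60_000
        for _ in range(n_throws):
            die = random.choice(biased_dice)
            counts[random.choice(die)] += 1

        for face in range(1, 7):
            print(face, round(counts[face] / n_throws, 3))   # each ends up near 1/6, roughly 0.167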

    Under perfect knowledge of a deterministic system, probability amounts to the frequentist description of a system of limited variables. An incomplete frequentist description of a deterministic system will always include probabilities because of this. If, however, you follow the chain of causality for a single throw of a die, what you have isn't a frequentist description, and probability doesn't apply. They're just different perspectives: how the throw of a die relates to all the other events thus categorised, and how it came about. There's no contradiction.
  • Probability is an illusion
    There is no confusion at all. A die is deterministic and it behaves probabilistically. This probably needs further clarification.

    A die is a deterministic system in that each initial state has one and only one outcome but if the initial states are random then the outcomes will be random.
    TheMadFool

    A variable has an event space, and that event space has a distribution. How you pick a value for the variable determines whether the variable is independent or dependent. An independent variable can be a random variable, and a dependent variable can depend on one or more random variables.

    How we retrieve the values for the variable in an experiment (i.e. if it's a random variable or not) has no influence on the distribution of the event space of the variable, but it can introduce a bias into our results.

    That the same variable with the same distribution can have its values computed or chosen at random in different mathematical contexts is no mystery. It's a question of methodology.
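
    A little sketch of what I mean, with made-up retrieval procedures (the event space and its distribution stay the same throughout; only how we collect values changes):

        import itertools
        import random
        from collections import Counter

        random.seed(3)
        n = 6_000

        # Values retrieved at random (a random variable)...
        random_sample = [random.randint(1, 6) for _ in range(n)]

        # ...or deliberately computed/selected (here: just cycling through the faces),
        # standing in for picking the initial states of a deterministic die on purpose.
        deterministic_sample = [face for _, face in zip(range(n), itertools.cycle(range(1, 7)))]

        # A biased retrieval procedure: quietly drop about half of all sixes before recording.
        biased_sample = [x for x in random_sample if x != 6 or random.random() < 0.5]

        for label, sample in (("random", random_sample),
                              ("computed", deterministic_sample),
                              ("biased retrieval", biased_sample)):
            freqs = Counter(sample)
            print(label, {face: round(freqs[face] / len(sample), 2) for face in range(1, 7)})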
  • Understanding suicide.
    I really don't know what you mean by "facing suicide". Usually (in my case), there's a lot of anxiety when those thoughts appear.Wallows

    Not so much facing suicide, as facing the suicidal thoughts and the emotions that come with them - that anxiety, for example. What that means for you in praxis I don't really know. I'm not even saying that medication is a bad idea. Just make sure not to enter into an unhealthy co-dependent relationship with the pharma industry, maybe? I don't know.

    I'll probably be addressing what "facing suicidal thoughts" meant for me with some of your other questions.

    That's pretty dark, man.Wallows

    Not really darker than the underlying suffering, though. If it works it works, and if it doesn't it doesn't. There's probably no solution that works for everyone. Not even chocolate.

    What do you mean by "psychological disincentive"?Wallows

    When you think of doing something, some aspects draw you towards the action (incentives), and some push you away from it (disincentives). I call them psychological because, unlike real-life policy (such as, say, taxes), these (dis)incentives are just part of how you react to the world. They're basically your bundle of values.

    Please elaborate. I seem to be encompassed by fear lately.Wallows

    The difference between fear of dying and fear of death is actually a pretty good opportunity to demonstrate psychological incentives and disincentives:

    So I have these unpleasant emotions: anxiety, disgust with myself and parts of the world, exhaustion... I don't want to feel them. The way I imagine death is this: no feelings at all. Those are gone, too. That's an incentive.

    Now, logically I'd also get rid of good feelings. But back then that didn't function as a psychological disincentive. Rather than something I wanted to keep, that felt like an acceptable price to pay.

    However, to get to the desired state of death I have to die, and dying is messy. I can't help but think of it as pain. The least painful method is probably overdosing on barbiturates of some sort, but - apart from that being unreliable - I was imagining messing up and feeling really queasy or maybe having convulsions. None of this was based on research. I just had this association of dying with pain (or with undergoing an otherwise unpleasant process such as queasiness).

    So basically the state of death worked as an incentive, while the process of dying worked as a disincentive. It's not a cost/benefit calculation. Nothing that rational; it's a felt attraction existing alongside a felt repulsion.

    To this day I'm not afraid of death. If I look forward a millennium, I realise I'll no longer be around. That doesn't affect me in any way, really. If I knew I had a fatal, incurable illness, I'd adapt pretty quickly to the new deadline. However, the illness itself? The process of dying? It sort of depends on the particulars, but in general this sounds like a rather unpleasant stretch of life. (Note that I don't have a shred of belief in any afterlife. Things might be different if I thought death was just life v2.0.)

    Did time or your age help you see the whole issue as some childish desire or fantasy?Wallows

    Not really, no. You see, I always, even back then, thought I was being childish. It didn't help. If anything it just added a layer to my self-loathing. If anything, I'm less judgemental about my younger self now than I was back then.

    Remember how I said near the start of this post that I'd address the question of what "facing suicidal thoughts" meant for me with another question? Well, it fits here. As I said, I was pretty hard on myself for having suicidal thoughts. Why can't I deal with life? Other people can live just fine, and I can't? What's with all those petty inner tantrums? Those anxieties of mine are so stupid! And so on.

    Facing my suicidal thoughts for me meant suspending that sort of judgement. It wasn't easy, but it was easier than to - for example - just stop being anxious. So instead of berating myself, I just thought I'd try indulging myself. With varying results. On bad days, that would lead to inner hysterics that were even harder to bear. But on good days?

    I have the mind of a story teller. I dramatise everything. That's just how I work. But not all stories are realistic. On good days, allowing myself all those petty, nonsensical, negative feelings turned into a sort of game. If I'm going to be ridiculous, I'm going to be really ridiculous. That's a hard-to-explain process. The way I'm writing about this now sounds a lot more deliberate than I was. It was a sort of emotional escalation. The self-judgemental part of me didn't go away, but it sort of transformed from judge to fiction audience. In a sense the process gradually estranged me from my suffering, until it felt like some absurd spectacle. It's a way to non-jeeringly laugh at myself, by ramping up the drama and making it less and less believable.

    It's not something I tried to do. I think the bad days that ended in hysteria would have put me off that methodology, if it wasn't something that... just happened. And I'm saying all of this now, looking back, so a lot of it will look neater in memory than it actually was while living through it. But that's roughly how I remember it playing out. I've been trying to think of an illustrative example, but I can't seem to get it right anymore. Maybe I should be thankful for that.
  • Understanding suicide.
    I think the best way to avoid suicidal thoughts is to first take some antidepressant, and engage in therapy or some constructive endeavor if one has enough motivation to do so.Wallows

    Should you avoid suicidal thoughts in the first place? Wouldn't it be better to face them? What if someone uses suicidal thoughts for some sort of catharsis, like roleplaying, rather than as premeditation for an act? The role suicidal thoughts play in the genesis of a suicide is interesting and not necessarily as straightforward as "I have suicidal thoughts, therefore I want to die."

    Some suicidal thoughts never lead to an actual suicide. But even suicidal thoughts that are not connected to an intention to kill oneself can lay the groundwork for a future suicide - as you familiarise yourself with the thought patterns. An example would be: "having a favourite hypothetical method" --> "being comfortable with the method, thus removing one psychological disincentive."

    I was a suicidal teen. I'm now nearly fifty and don't consider myself suicidal anymore, but I do still have the thought habits. I can tell a difference in the quality; I'm not serious. (They're more over-the-top, exaggerated; a bit like I'm parodying my younger self.) Btw, I don't have a hypothetical favourite method. All methods suck. I think that's one of the major reasons I'm still alive. Too afraid of the system shock that comes with dying (painful methods), and of waking up after an unsuccessful attempt and having to deal with the fallout (unreliable methods). As a formerly suicidal person I can tell you that fear of dying and fear of death are not the same thing. I have the former but not the latter.

    Talking about my non-serious suicidal thoughts is difficult, because of the taboo that surrounds the topic. I can be pretty casual about it, and people often don't know how to react to that. I usually have to explain that, no, I don't intend to kill myself, and, no, I don't intend to make fun of the topic (even though it sometimes sounds like it). I've just learned to live through my suicidal phase, and now suicidal thoughts are some sort of cathartic tool (and that sometimes includes black humour).

    As a result, talk about suicide entirely in terms of prevention feels isolating. It did back when I was suicidal (it felt like people were more interested in preventing a suicide than in trying to understand), and it does now (because of the disconnect). When it comes to fiction I react best to stuff that depicts emotional difficulty without taking sides (e.g. the film 'night, Mother with Sissy Spacek and Anne Bancroft), or with absurd comedy set-ups (e.g. the suicide arc in the anime Welcome to the NHK). I react worst to shows that idealise a single solution.

    In terms of this thread, I don't think it's helpful to seek a single solution to the problem. I mean, suicides range from the guy who walks in on his family to demonstratively shoot himself, to the guy who kills himself and leaves behind a binder explaining himself, a lot of articles about dealing with loss, as well as a list of therapists and help-lines. Suicide is really just a single puzzle piece in a person's life and you won't understand that single suicide without understanding how it fits in. You can abstract, but that would involve multiple non-exclusive categories, I think.

    Basically, you can't understand a person's suicide without understanding that person's life. A life can have problems. Suicide doesn't solve those problems, but it does end them (and also prevents solutions, though that's moot by then). Focussing on the suicide ("you shouldn't kill yourself, because...") can come across as privileging the topic over the underlying problems (as in "It's fine if you suffer, as long as you don't inconvenience me with your corpse"). Not all suicides are problem-centred, though. My own phase was more akin to what Pfhorrest describes in his post above. Problems, here, are more nuisances - life's a struggle and there's no reward. Depression is actually welcome, because it's more comfortable than the anxiety of what sort of contradictory demands will come your way next. It's no big deal, really, you can push through that as you always have. But you become increasingly exhausted. People notice this, so they try to be nice to you, and through this process the things you enjoy turn into obligations, too, and eventually you just forget how to want things, even though you're an expert in how to not want things. Eventually you just feel empty. That's fine during a depression, since you don't feel any sort of vigour anyway. During bouts of depression it's easy to dismiss life. You're not going to kill yourself; it's not worth the bother. But as it recedes? Or if you feel it coming? That's when there's an inner tension that's nearly unbearable; it's a sort of unspecified can't-do-anything-but-have-to anxiety. During that phase you're not likely to make any preparations, though. Half-hearted attempts would be the most likely (though that was never my style). You prepare while you're fairly calm and even cheerful. In my case it ended with research, since I never found a method I liked. (I also wondered whether I really was suicidal, or if that was just my inner drama queen. Now that I'm definitely not suicidal, I think I was.)

    Basically, I didn't want to kill myself because of a specific problem, but because I was just gradually losing my grip on life.

    Suicide can be mitigated by becoming more aware of other people or thoughts.Wallows

    This definitely helped during bouts of what I call the brooding spiral. Re-focusing helped by itself, and as a bonus I tended to find out that I was asking way more of myself than nearly anyone else (though that was a lesson that usually didn't stick).
  • Probability is an illusion
    Probability, in my opinion, has to be objective or real. By that I mean it is a property of nature just as mass or volume. So, when I say the probability of an atom of Plutonium to decay is 30% then this isn't because I lack information the acquisition of which will cause me to know exactly which atom will decay or not. Rather, radioactivity is objectively/really probabilistic.TheMadFool

    I don't know whether I agree or disagree. I'm not sure what - in terms of the real world - it would mean for "probability to be real". Probability is maths, and like all maths it's applied to the real world, and so the question is whether it's useful or not rather than whether it's real or not.

    A operates with a very "small" probability system, and B with a very large one. A can expand to B, and B can conflate to A. When A expands, the likelihood of throwing a particular number changes until it either drops to zero or hits 1. That's just conditional probability. A's probability table would have to exhaust all probabilities.
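
    In symbols, roughly (treating S as B's fully specified initial state):

    \[
    P(X = 6) \;=\; \sum_{s} P(X = 6 \mid S = s)\, P(S = s),
    \qquad P(X = 6 \mid S = s) \in \{0, 1\} \text{ for a deterministic die.}
    \]

    A's 1/6 is what's left when you only know the weights P(S = s) and not which s actually obtains; condition on the actual s and the probability collapses to 0 or 1.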

    What if the universe doesn't have an initial state, just a string of causality that breaks at some point in the past, because stuff like frequency stops working? You could only approximately describe this with a mathematical system, right? Assuming multiple possible initial states would work, but only if we can describe all those states and their relations such that they are mutually exclusive.

    So, yeah, what does it mean for probability to be real?
  • Probability is an illusion
    Good point. Anything's possible in a game of chance. However, the issue is of predictability. Person B, given he knows the initial state of the system (person A and the dice) is able to predict every outcome; implying that the system is deterministic. However, the system behaves as if that (deterministic character) isn't the case.TheMadFool

    I'm trying to figure out what you think a "probabilistic system" should look like. "The initial state of the system" is different for A and B. For A, it's simply a game of dice. For B, it's the current state of the universe. For A probability only allows six outcomes. B could know that A will die of a heart attack before he ever gets to throw the die (and his hand cramps, so the die doesn't even drop). In my view you're comparing apples and oranges. A asks "What are the odds?" and B asks "What will happen?"

    B uses the chain of causality to compute the outcome. A uses probability to compute the odds. Take the following example:

    A bag contains only red balls. You draw one of them in the hopes of it being red.

    A will use probability theory and know immediately that given that he'll successfully draw a ball it will be red (because there's only one option).

    B will have to go through multiple computations to figure out which ball A will draw and then check its colour. B will know, through this process, if A will successfully draw a ball, if so which one, and by implication its colour.

    In this limited case, A and B will come to the same conclusion. Why? Because the probability to draw a red ball from a bag that only contains red balls is 100 %. B has a lot more information that pertains to the situation, though, including whether A will draw a ball at all.

    I'm not sure I understood you correctly, though. Am I right in assuming that B follows the chain of causality (taking into account all data he has) and doesn't encounter a truly random process (which would contradict determinism)?

    Of course, given perfect knowledge in a deterministic system, the question "What are the odds?" is superfluous, because it's always 100 %. But A has very limited knowledge.

    A and B have different perspectives: A's tends to be more efficient (but he'll have to contend with risk), and B's tends to be more accurate (but he'd probably die of old age before he finishes the computations).
  • Probability is an illusion
    This result is in agreement with the theoretical probability calculated (4/6 = 2/3 = 66.66%). In other words the system (person A and the dice) behaves like a probabilistic system as if the system is truly non-determinsitic/probabilistic.TheMadFool

    And if A threw a hundred sixes in a row, it wouldn't be behaving like a probabilistic system?
  • Collective Subjectivity


    Thanks. I never really know how well I make my points, so having feedback helps.

    The problem with my post was that it... wasn't sensitive to the flow of the conversation and rewound the entire thing to a much earlier stage. I didn't notice I was doing that when I was typing. I think the key problem I was having, what caused my confusion, is that I took "crowd subjectivity" instinctively as a synonym for "collective subjectivity", when it's not. In my post, a vending machine could serve as a stand-in for a collective. But a vending machine is obviously not a crowd. Only when I read Galuchat's post did I realise that.

    I guess my question would be, then, what's a crowd to begin with (people using the subway vs. people attending a rock concert - I feel there's a difference in output here), and how does it relate to "collective subjectivity"? The prototype? An example?
  • Collective Subjectivity
    The shop...Galuchat

    A crowd...Galuchat

    Re-reading the thread, I feel I replied to something nobody said. Well, that's embarrassing.
  • How much philosophical education do you have?
    "Some incidental college classes" would have been my first choice, even though it didn't occur to me that "university" and "college" could be synonyms. Thanks for the clarification. I voted now.
  • How much philosophical education do you have?
    I will say that the results so far surprise me some. I was expecting mostly autodidacts, then students, then decreasing numbers of the increasingly higher degrees, and while there are mostly autodidacts and degrees in descending order as expected, I'm surprised that there are no students or associate's degrees.Pfhorrest

    Hm, I don't post much, but I might have voted, voting being a low-effort activity. But I couldn't, because what formal education I have doesn't easily fit into the poll.

    First, the subdivision of school/university isn't easily translatable. I'm Austrian; I went through elementary school, some sort of middle school, and then some sort of commercial college. After that I went to university, where I earned a "Magister" (which is probably somewhat comparable to a Master, but in reality might be somewhere between a Bachelor and a Master; I'm not at all sure).

    The next problem I have is how to map "philosophy" onto my education. Philosophy wasn't part of the elementary school education. "Philosophy" was part of the syllabus in middle school only in the sense that it was integrated into "German" as part of German/Austrian literary history. It could have been part of my education had I stayed on at that school for 4 more years (roughly a highschool equivalent - and I would have had to choose either a humanities or a nat-sci branch), but I changed to a commercial college, where philosophy wasn't part of the syllabus much (though you don't get through a commercial college without hearing about "the invisible hand" and stuff like that).

    However, philosophy was a huge part of my sociology studies at university. Social philosophy (utopias, anarchy, etc.), philosophy of science (even if you didn't take the specifically targeted courses, which I did, you'd hear about Popper, Kuhn, Feyerabend, etc.), and depending on the theories you end up interested in, you'll need to familiarise yourself with certain philosophers, though secondary literature usually suffices. (Marx, Husserl, Derrida...)

    I'd say "some incidental university classes" would maybe fit what I went through? I definitely don't have a degree in philisophy, though my univerity degree has included the most philosophy, formally. But it's not easily comparable to either a Bachelor or a Master (though it's definitely not a doctorate). And some of my philosophical knowledge is audtodidact (e.g. whatever little I know about Schopenhauer, Wittgenstein, Sartre...).

    As it is, I finished my degree over 20 years ago and have never done anything with it - I'm both out of the loop and unpractised, and I'm not confident at all. I can read logical notation but sometimes need a table to remind myself what some of the less frequent signs mean, and it's slooooowwwwww going in any case. An autodidact with adequate passion will know more than I do.

    So what should I vote? Autodidact? Some incidental college classes? I chose not to vote at all.
  • Collective Subjectivity
    But what I'm having trouble discerning is precisely the implications of subjectivity when brought to bear on the phenomenon of crowds. That crowds have different capacities for action than individuals is, I think, a truism. What the snailshell ought to do, I'd imagine, is provide a novel & useful way of understanding crowds. We turn on the 'subjectivity' filter from our snailshell-cockpit and look out over the crowd and see patterns we wouldn't have, had we not turned that filter on. Or, if we're in the crowd, our understanding of subjectivity ought to give us some openness to possibilities of the crowd that others, without that understanding, might otherwise miss.csalisbury

    I've got a university degree in sociology (but am not doing anything with it and am out of touch with the mode of thinking, too), so I have little trouble with "collective subjectivity". I don't remember anyone actually using the terms in just this way, but the topic is rather central to doing sociology. Early sociology was taking off from positivism, with Durkheim trying to explain suicide in terms of suicide rates, choosing the topic because it had been seen as a very personal topic and thus a topic for psychologists. Basically: sociology is positivist, and it's not about subjective experience, but it can still provide valuable insight into personal topics.

    The need to distinguish sociology (as the younger discipline) from psychology remains. But at the same time getting rid of subjectivity altogether didn't appeal to everyone. So after Durkheim's positivist sociology, you get Weber's interpretative sociology. But Weber worked with ideal types: you don't need to reference each person as an individual: you just posit ideal types and see how close you get to what actually happens.

    Take a transaction: You can't buy anything if nobody sells anything. Buying and selling are two actions that are intimately tied together. The meaning of the transaction translates, subjectively, into buying for one participant, and selling to the other participant. But it's really a single transaction, in which a "good" changes "owners". Once you're describing transactions like that, though, you're practically forced to separate the actions tied to such subjective positions from the actions tied to the people who fill the roles. Why? Because the more a society's structure differentiates, the more likely it becomes that at least one of the participants is a collective (even if represented by an individual).

    Compare:

    Private person buys from private person
    Private person buys from family shop
    Private person buys from corporate shop via shop assistant
    Private person buys from vending machine

    And so on (I didn't talk about the internet, about brokers, etc.)

    You can play the same game for the other position (or "subjectivity" in terms of this thread) in the transaction; just think in terms of "sells to" rather than "buys from".

    So, imagine you walk into a shop. You're taking a sandwich to the counter, but find you're one cent short. What happens?

    You may be torn between asking to be granted a 1-cent reduction, paying the difference later (if you're "known to the shop"), or apologising and not buying the sandwich. Meanwhile, the shop assistant, as a representative of the shop, may not be able to grant you a reduction, but may do so as a person - entering into a responsibility relationship with the shop.

    There's something here that needs a name, and I have no problem with "subjectivity", because it's actually about "taking the perspective of X", even if X is a set of abstract rules (either codified or understood).

    Note that a vending machine will not be able to respond to your being one cent short. It'll simply wait for the final cent until you abort the transaction (or an internal clock says time's up and the machine aborts). In a way, you can think of a vending machine as an inherently stubborn shop assistant (because it has no consciousness and isn't capable of flexibility).

    The biggest problem with using the term "subjectivity" for this sort of thing is that, if at any time you find you want to refer to a consciousness' outlook, too, the term "subjectivity" is no longer easily available: you'll either have to find a way to integrate a typology (e.g. personal vs. generalised subjectivity - which could be hard, or might not work as seamlessly as you'd hope), or you'll have to find another term (which could become an entry barrier for other people, when it comes to adopting the terminology).
  • Disambiguating the concept of gender
    What utility does the concept have? Are you trying to highlight body feelings in a discourse where performativity and social construction reigns?fdrake

    This is intricately tied in with "being able to pass". The less you look like the gender you feel like, the more often you will have to justify your feelings. Even well-meaning people might treat you like a rare specimen. So you might have an operation, or you might only go for hormone treatment, but you can do little about bone structure. Now, that might not be a thing that bothers you, but the incongruity between how your looks are intuitively parsed and how you feel inside leads to an increased need to justify yourself, especially when there are things you could do but don't want to (I've heard about peer pressure to take voice lessons, for example). That is, during the transition phase there might be a conflict between being at peace with yourself, and being at peace with the community you live in (and that can include the trans community, who are trying to help).

    So, the regular pressures to behave according to your gender can be exacerbated when you're trans, because - unlike for cis-gendered people - there's a need to legitimise your gender. So a transwoman may need to show an effort to be more "feminine" to prove that she's not faking it. You can't prove feelings easily, so all that's left is behaviour.

    If we were to accept that (a) trans people exist, and that (b) it's not all and not primarily about outward behaviour, we would adapt our expectation and lessen the burden of proof on daily life.

    And now switch perspectives. You're a woman, you're not that interested in conventionally feminine things, but you live in an environment where people keep expecting this. The constant need to explain yourself would be tedious, too. Then you see a transwoman take voice lessons. Maybe she doesn't quite pull it off, yet? This behaviour has as a side-effect the re-inforcement of the annoying gender expectation you have to correct again and again and again.

    So at that point, if we would accept that it's primarily about internal body-image (to be at peace with yourself), and we'd just get used to a trans status, then some of the behaviour might fall by the wayside, and behaviour would be more... instinctive?

    A trans woman isn't a cis woman, and they know that, or there'd be no point in using the word. But that's sort of the big default concept. If we were to accept that a trans woman is not a cis woman, it wouldn't be a surprise for a transwoman to retain some pre-transition elements, if we just took the category for what it is. Otherwise there's a constant need to prove yourself, and the only real option in daily life reinforces gendered stereotypes. And in turn people think that's what it's about. There's a social push and pull here that maybe could be lessened by simply accepting the category with all its variations.

    (I'm talking mostly about trans women here because they're far more visible online than trans men.)
  • Did I know it was a picture of him?
    If to know is to hold a justified true belief, then what is the justification here? I know it is a picture of him because I recognise it as such? But that is to say just that I know it is a picture of him because I know it is a picture of him...Banno

    I don't think recognising the person in a picture is necessary for me to know that this is a picture of N (for example, N is the author of a book, I don't know what he looks like, but I see what I recognise as an "author photo"), nor do I think that me recognising N in a picture necessarily means that I know it's a picture of N (for example, if I know that the picture is a picture of an event X and I know that N was no longer alive at the time of event X, I have sufficient reason to doubt my recognition, and yet the recognition could be compelling enough to spook me). So, no, I don't think recognising N makes that justification circular.

    It does point out the source of a possible error, though, and if you specify "How do you know this is a picture of N?" as "How do you know you're not mistaking the person in the picture?" then that would indeed be circular. Basically, every justification for a knowledge claim itself involves knowledge that you can question, and I don't think "I know this is a picture of N" and "I recognise N in this picture" are on the same level of abstraction. The latter is more concrete.

    The wording in the quote, though, is interesting: "suddenly I had to think of him", "suddenly, a picture of him floated before me..." The language leaves open (and even suggests to me) the possibility of an illusion. In that case, since there's no objective picture, isn't the act of recognition constitutive? Is it even recognition?
  • Critical thinking and Creativity: Reading and Writing
    I've always thought there's a great deal of overlap between thought experiments in philosophy and short stories. Every take on the trolley problem, for example, is a character waiting to happen. The biggest difference is that short stories are allowed, maybe even encouraged, to spin out of control.

    I find one of the most important skills in both thought experiments and story writing is not to automatically dismiss that which seems silly. If something seems silly, seize it, double down on it, until it's normalised. It's only one approach, or maybe even only one part of many potential approaches, but it can work. I mean, nearly everything seems silly. Imagine woodpeckers don't exist, and someone approaches you with the concept:

    I have this idea for a bird. It eats things that live in trees, but it's not patient enough to wait for them to come out, see, so it bangs its beak against the bark again and again and again, and very fast, too, and... What? No, it's not prone to concussions. So, anyway, that's how it makes holes in trees, and... Wait, where are you going?
  • The subject in 'It is raining.'
    The "it" in "it is raining" cannot syntactically refer to the weather in the trivial way the "it" does in "it is sunny" because the syntax differs.Baden

    In the exchange Herg provided (What's the weather doing?/It's raining.) it can. People may consider it awkward, but "it is raining," as an analogue to "the weather is raining," as a reply to "what is the weather doing?" is plausible (but not necessary; it's ultimately an empirical question - I do agree with Terrapin Station that it's all in the head).

    The conversation says nothing about dummy it, though, other than that in the case of a plausible antecedent for "it" a sentence might be ambiguous between dummy it and anaphoric it.
  • The subject in 'It is raining.'
    There are two possible readings of your "B: It's raining.", as follows:
    1. 'It' refers to the bumble bee. In this case, since a bumble bee can't rain, the speaker is uttering nonsense.
    2. (much more likely in real life) 'It' refers to the weather, and B is not answering A at all.

    So semantics matters. You can't simply assume that in 'it's raining', 'it' refers to the subject of the most recent sentence uttered. As Terrapin Station has said, 'it' is indexical, and in any sentence about the weather, suich as 'it is raining' or 'it is sunny', 'it' refers to the weather.
    Herg

    I could say the same thing about your example. Maybe B didn't hear what A was saying, and is just commenting on the weather, the connection being a coincidence.

    Your example proves nothing, because you're basing the proof on the same imputed connection that I did in mine. But if the connection is there, you have anaphoric it and not dummy it. It's not the same situation.

    A: What's the weather doing?
    B: It's raining.

    Assumption 1: B responds to A. Anaphoric it.
    Assumption 2: B ignores A, and is randomly commenting on the weather. Dummy it.

    Two different situations. It's just more obvious with the bumble bee example.

    You can err on any utterance; but that's a question for pragmatics or conversation analysis rather than either syntax or semantics.

    Yeah, context matters. But it matters on more than one level, and you have to be careful not to mix them up.
  • The subject in 'It is raining.'
    A: What's the weather doing?
    B: It's raining.
    Herg

    A: What's the bumble bee doing?
    B: It's raining.

    So "it" refers to the bumble bee.

    The conversation makes no sense, but the syntactic connection is sound. In your conversation "it" refers to the weather; in mine to the bumble bee. But it's a question of syntax, not semantics.

    Does it matter that your conversation makes sense and mine doesn't, for determining reference?
  • The subject in 'It is raining.'
    Again, I'm guessing the context. It's context-dependent.Terrapin Station

    We're talking past each other.
  • The subject in 'It is raining.'
    As I said, ""What I'd normally take the subject to be in lieu of other information"Terrapin Station

    If you have enough information to parse an expression without actual context, it's not indexical, though.
  • The subject in 'It is raining.'
    What way? As an indexical. That's what we're talking about.Terrapin Station

    But if "It" in "It's raining," were indexical, then you couldn't be arguing that "it" refers to the weather or anything, because you couldn't tell what it was referring to until you had a context.

    If I say, "He's a carpenter," then you know that someone's a carpenter, but you don't know who, if you lack context. How does "It's raining," remotely behave like that? That's what I don't understand.
  • The subject in 'It is raining.'
    It does if you think about it that way.Terrapin Station

    What way? I don't understand.
  • The subject in 'It is raining.'
    "It" is indexical because the meaning depends on the context. "It" doesn't have a "fixed" meaning like "cat," say. Like all indexicals, the reference of the term can be completely different in different contexts, they function more like variables.Terrapin Station

    I know that. But you don't need any context to parse "It is raining," correctly: "it" doesn't behave like the usual indexical "it" in this sentence.
  • The subject in 'It is raining.'
    "It" from above, 3. used in the normal subject position in statements about time, distance, or weather.
    "it's half past five" or 5. used to emphasize a following part of a sentence.
    Bitter Crank

    Yes, in those cases I say it's referentially empty and only fills a syntactic function. (Also in 2., 4., and 6., for what it's worth).
  • The subject in 'It is raining.'
    Again, on my view, re semantics, terms mean, terms refer to whatever individuals consider them to mean/refer to. In other words, meaning is subjective. Contra Putnam, it is "just in the head."Terrapin Station

    I agree with this. But meaning in praxis, i.e. when you use "it" in "it is raining," is not quite the same as the meaning you assign in analysis. The latter can be adequate to the former or not. In other words, agreeing on what "It is raining," means is a lot easier than agreeing on the proper analysis of the component "it".

    So maybe you refer to something when you say "it" in "It is raining," and I don't; but this difference (should it exist) causes precious few problems for successful communication should either of us say that sentence.

    Beyond that, I'm not sure why you say that pronouns are indexical, if you think it's all in the head. I'll come out and say it: when I say "it" in "It is raining," I have no referent in mind. None whatsoever. When I say to you, "It's black with pink spots," you probably have no idea what I'm referring to. "It" is indexical, and you're not privy to the context (disclosure: there is no context - I made up a random sentence). When I say to you "It is raining," you probably have a good idea what I'm talking about, because all the information you need is in "is raining". Here "it" is not indexical; it's referentially empty and only fills a function. Please explain the difference in opaqueness of the sentences, if "it" is indexical in both sentences.

    If "it" were indexical in "It is raining," it would have a different meaning whenever you use the sentence, and I'd have to parse "it" first before I can understand the sentence, like in "It is black with pink spots." In fact, the general indexicality of pronouns is fairly good argument against the fact "it" in "it is raining," or "it is five o'clock", or similar sentences is referential. If it were, we couldn't fully understand the sentences until we figure out what "it" means (because the meaning of "it" would depend on the context of speaking).
  • The subject in 'It is raining.'
    Anyway, if we were avoiding semantics and ONLY talking about grammar per se, then obviously the subject of "It is raining" is "It."Terrapin Station

    Yes.

    As soon as you ask "What does 'it' refer to" you're doing semantics.Terrapin Station

    Yes.

    Semantically, "It" is "the meteorological conditions outside."Terrapin Station

    That's the tricky part. It is not a given that "it" is referential in the first place. One possible answer to "What does 'it' refer to?" is: nothing - and for most (but not all) linguists, that's the answer.

    One question we can ask about subjects (as arguments of verbs) is what the participant role of the subject is in relation to the verb. Is it an agent as in "I go to school," where going is an action the speaker undertakes? Is it an experiencer, as in "I'm dying," where the speaker is experiencing death?

    What is the relation between the verb and its primary argument?

    My take on this topic is that any attempt at answering these questions is post hoc; the meaning is emergent rather than referential. "It" is referentially empty and has no semantic function until you enter the meta level and ask what sort of function it might have.

    I also don't see any reason to ask these questions. Syntactic relations are enough. I do realise that it's not a clear-cut issue. Take a potential exchange:

    A: "It's raining."
    B: "No, it's not; it's snowing."

    There are three "its" in this exchange, and if I speak carelessly, I'd say that all three its refer to the same thing. Except it's a dummy it and refers to nothing, so how can it have the same reference? This is a problem, so at the very least your position is valid, if not even right.

    Consider this sentence:

    "It's true that it's raining."

    Two its, both dummy its, but clearly not "referring" to the same thing in the way the three its in the previous example do.

    This is a situation where I see problems on either side, but given my priorities, I find the problems with a generalised referent to be more severe.

    To summarise, I think the meaning of "it" arises out of the interaction of grammar with the semantics of the verb and is thus vague and general. It's not referential, but it has some sort of substance, such that you can differentiate between different sorts of dummy-its. What that semantic substance is like is a problem I'm not sure how to address, but it's not severe enough for me to abandon the dummy-it interpretation.

    Does this make sense?

    (To make matters worse, we shouldn't confuse the subject-predicate structure of philosophical propositions with the subject-predicate structure of a sentence. It's harder than it should be.)
  • The subject in 'It is raining.'
    "Fuck you." makes perfect sense, but it lacks a subject.Bitter Crank

    Actually, that's an interesting case. "Fuck" in "Fuck you," looks a lot like a verb in the imperative, where people usually posit an understood subject, "you". However, if that were the case, we'd expect "Fuck yourself," as in "Buy yourself a drink."

    I still think it's a normal sentence whose verb is in the imperative mood. I'm not sure what to do about the "you", though. It looks like an object, but if it were I'd expect a reflexive pronoun.

    When it comes to "It's raining," I prefer the "dummy subject" interpretation: "It" is all syntax and no reference. The verb carries all the referential meaning in the text.
  • Empty names
    ? Not that I agree with the idea of "empty names" in the first place (as I stated earlier, I think the whole notion of there being a problem stems from misconceived theories of reference), but names for fictional characters are often given as an example. Why would that be a category error then?Terrapin Station

    I think it would be more accurate to ask whether referring to names that reference fictional entities as "empty" constitutes a category error. And, as I said, it may be wrong to say that, but it's not a category error. Even if what you mean by "empty name" is a name that designate nothing and further assume that, by definition, all names designate unique objects (leaving aside paradoxes about referring to non-being), it's still not a category error. It would be like asking "are all primes are divisible by 2?" The statement is wrong analytically, but it's not a category error because "being divisible by 2" is still a property that belongs to numbers and so is within the same category as primes. Similarly, talking about empty names is wrong - and wrong analytically - on certain reasonable assumptions about how names work, but it is still in the category of a linguistic claim so not a category error.Mentalusion

    You're both skipping ahead of what I'm actually saying. The piece about "category error" was part of a bigger and more complex point I was trying to make, and it was about type/token, not "empty names". I wish I remembered why the type/token distinction came up. I wasn't part of that conversation until it was nearly done, and none of the original participants ever engaged me on that. What I'm saying is about names in general, not "empty names" in particular.

    The sample sentence I gave was:

    "The Harry Potter from Rowling's book is an empty name." (I should have said "books".)

    The category error is this: "Harry Potter's" name is an empty name, but Harry Potter himself is a character, not a name.

    This is so obvious that it's normally not worth saying. I think I may not have made my point very well, if you think I mean to say that "Harry Potter's name is an empty name." You probably haven't picked up why I think it's worth saying in this context.

    A name that's not assigned yet is still a name. It follows that names must have meanings as names themselves, too. If I type a sample sentence, like "Joe likes to sing in the shower," then you recognise "Joe" as a name. But the person behind the name is even less real than Rowling's Potter, since I just typed a random sample sentence without any referent in mind. The sentence gains its meaning from the fact that we all know how naming works and how we use names. Just using the name, even outside a fictional context, conjures up the expectation that there's a person (however hypothetical that person's existence). What's more, we recognise the name as a name other real or fictional people have held.

    "Joe", as a name, is one name in a list of names we might choose for our children, and this is the meaning we inwoke when we attach articles to the name.

    On the other hand, this very meaning implies that there are persons who are referents for those names: when used for an individual (without an article), the name actually starts functioning as a name; in a way, it becomes active.

    "A Joe" in "A Joe has eaten your cake," and "Joe" in "Joe has eaten your cake," work differently with respect to the type/token distinction: it exists in the former case, but not in the latter.

    A thought experiment: A group prepares pseudonyms for participants at a meeting who wish to remain anonymous. Those names are assigned at random. Fewer people than expected show up. Some names have not been assigned. In what ways did the meaning of the unassigned names differ from that of the assigned ones during the meeting?

    For me, there's a disjunct. If I talk about the names themselves, they don't really differ. They're all different from each other, and they were all potentially assignable to people (some were assigned, others were not). But they're all names.

    When we look at the transcript of a meeting, only the assigned names show up, and when they do, they refer to the person in question. This is what they're supposed to do. But the connection is unique. Suppose they re-use the names for the next meeting, with different people involved and names again assigned at random. The name, considered as a name, now has the property "provided twice, assigned once" or "provided twice, assigned twice", but that's something we know about the name. The relation between person and name, though, resets. The facts about the name itself are irrelevant, except when we look at the person as a token of the type "has been assigned the name".

    Or do situationally assigned (or chosen) names differ from names you have all your life? (That's been an issue in this thread.) The question of "real names".

    Basically, if you insist on physical objects as the referents of names (as I understood the concept, and as the Wikipedia link seems to suggest), how do you conceptualise the difference between a name that's been assigned to a fictional person, a name that is neither assigned nor used (see my thought experiment), and a name that is never assigned but used anyway (e.g. in a sample sentence)?

    Out of those three situations, the name of a fictional character seems the least empty. However, a name that isn't used doesn't actually work like a name. And invoking the name in a sample sentence can invoke the idea of a person, even though the name has never been assigned.

    I'm not sure how to deal with this, but my hunch is that a name is something a person "has", not something a person "is". Referring to an individual entity is the function of a name, but unlike regular words, a name confers no meaning on the entity it's assigned to.

    That's different from regular words, where a word confers meaning: a [word] is something that you are, not something that you have.

    The problem with this is that a lot of it depends on how a society organises the institution of naming. Telling names aren't impossible, but in general names need to be meaningless in themselves so that they can refer to individual entities continuously (impervious to change). Basically, to be an idiot you have to behave like an idiot, and if you stop behaving like an idiot, you stop being an idiot. Similarly, to use a name you need to have that right (however that's organised), and if you lose that right, you no longer have that name.

    In the case of words, the assignation of the sign to the thing is extrinsic to the meaning behind the sign. In the case of a name, the assignation of the sign to the thing is intrinsic to the meaning behind the sign. Beyond that, any real life behaviour of the object creates connotations, not denotations.

    I think my conclusion would be: all names are empty when considered as names; no names are empty when used as names. Or something like that. I'm not sure.
  • Empty names
    It was for that reason the scenario didn't seem to me to get at the issue of "empty" names as well as other examples.Mentalusion

    I used the scenario in response to a question why the holder of a name isn't the token of a type (or so I understood the question). It wasn't meant to get at the issue of empty names.

    That said - and this is a parenthetical issue - I still don't think calling references to fictional entities "empty names" constitutes a category error. It's not the correct use of the concept of a name to be sure, but it's not a category error. It's just wrong. Not everything that's wrong is a category error.Mentalusion

    Of course referring to fictional entities as "empty names" is a category error. A fictional entity is not of the category "name", therefore attributing "empty name" to it is a category error. What am I missing here?

    I'm also not quite sure how you see the relation between what a person intends to say and what that person actually says. Grammar is something you learn as you go; it's something you can get wrong. At the same time, if enough people get the same thing wrong for long enough, the grammar changes. The category error is on the language level, not on the concept level.
  • Empty names
    Generally I don't have an issue with your claims about how syntax can operate with proper names, and even think the type/token distinction could be useful for explaining the non-definite use of the proper name vs. the definite. I took my claim about the "lexical entity 'john smith'" to be basically consistent with that. However, my concern is exactly with what the actual context of the situation is here and the intentional state of the postal carrier. The package carrier not standing at the door wanting to give the package to anyone who happens to be named John Smith so he can happily walk off feeling like he did his job competently. Rather, he wants to give the package to the John Smith to whom the person who sent it addressed it to, he just doesn't know who that is. In other words, s/he isn't just looking for "a" John Smith, he's looking for "the" John Smith the package is addressed to. So, I just don't see that the syntactic distinctions you bring up - while legitimate in and of themselves - apply to this particular situation here since, in fact, given the context, it does not seem to me that either of the names are empty in the example given.Mentalusion

    But whether or not a lexical entity "John Smith" is different from a proper name "John Smith" is highly relevant to the type/token distinction. (I've had some minor linguistic education, but I know most about syntax and less about semantics, so there's that to bear in mind when reading my posts.)

    Yes, there's an ambiguity with the names. But to talk about the ambiguity you need a word that encompasses both names.

    With respect to empty names, ambiguity matters, too. "Harry Potter" is not itself an empty name. You need to know who it refers to (i.e. a fictional character) to know whether it is empty.

    We have three cases, here:

    1. "John Smith is a common name." -- Referent of "John Smith": a name

    2. "This parcel is for John Smith." -- Referent of "John Smith": a specific person.

    3. "This parcel is for a John Smith." -- Referent of "John Smith": a group of persons defined by holding the name "John Smith"; Referent of "a John Smith": a specific (but not specified) person.

    "Harry Potter is an empty name," uses "Harry Potter" in the first meaning, but there's an implicit assumption as to the identy:

    The sentence "The Harry Potter from Rowling's book is an empty name," is a category error. The Harry Potter from Rowling's books is a person, not a name. You'd have to say "Harry Potter is an empty name when it refers to Rowling's character." When I'd be arguing that "Harry Potter" is not an empty name because my neighbour is called that, I'd have misunderstood the concept.

    Now, it is actually possible to create a concept of "empty names" such that an "empty name" is only an "empty name" if there are no real entities with that name. "Harry Potter (= 1) is an empty name because there is no Harry Potter (= 3)" is a different concept from "Harry Potter (= 1) is an empty name because Harry Potter (= 2) does not exist", and it's useful, if at all, in different contexts.

    I think it's an important distinction, because it's easy to slip, and there may be contexts in which it's not clear what's being talked about, or in which the distinction is meaningless.

    In my scenario, there is no empty name. But if someone played a prank and there is no "John Smith" at that address, then one of the names would be empty. (Or formulated for people who don't like the homonym theory: The name would only be empty if it referred to the non-existent recipient of the prank parcel.)

    [For what it's worth, this thread is the first time I've ever heard of "empty names". My intuition was that it's about names that really don't refer to anyone. Maybe an author has made up a name but is undecided whether he'll ever use it and certainly has no character in mind. Such a name would exist, but it'd be "unused" and have no reference - i.e. the name can't be traced to any person, fictional or real.]
  • Empty names
    I'm not sure the example gets to the difference between type/token and proper names. It seems to me that both speakers there are using proper names. the only possible type/token implication is that one could see the lexical entity 'john smith' as a type for the two token names "John Smith [1]" and "John Smith [2]" given the name is a homonym. I think the more nature description though would just be say there's any ambiguity in the name: they just sound alike but in fact reference two different things, like a river 'bank' vs. a financial 'bank'.Mentalusion

    When two or more people have the same name, there's ambiguity. You're right about that. But the hint, here, is in how language treats words syntactically:

    A proper name doesn't take articles; semantically, it doesn't need to, because a proper name is definite by itself. Normally, ambiguities are resolved pragmatically rather than through syntax: "Joe" is far from a unique name, but if you say "I'm talking to Joe," people usually know who you mean through context. (It's, of course, possible to miss parts of the context and create an ambiguity that your conversation partner doesn't automatically resolve.)

    If you resolve the ambiguity syntactically, by adding articles (either indefinite or definite), you make a shift from a proper name to a regular noun: "a John Smith" does not have the same meaning as "John Smith", even though the same person can be the referent for both (more precisely, "a referent" in the former case and "the referent" in the latter). In the case of "a John Smith", he's part of a class (all people named "John Smith" are "a John Smith" - being named like that becomes the meaning of a type, and having that name makes you a token); in the case of "John Smith", he's uniquely named (and it doesn't matter that other people have the same name).

    What's philosophically difficult here, I think, is the precise relation between semantics, pragmatics and syntax (and theoretically morphology - but not in this case).
  • Empty names
    I'm basically asking you why aren't proper names also referred to as tokens for things? Is this an issue?Posty McPostface

    Consider the following exchange:

    A: Am I speaking with John Smith?
    B: Yes. How can I help you?
    A: Please sign this receipt for...
    B: Oh no, you want my uncle.
    A: No, I want John Smith. That's you, right?
    B: I'm also a John Smith, but the John Smith who sold....

    In this exchange, A uses "John Smith" exclusively as a proper name, but B, in the last line of the exchange, uses "John Smith" as a type/token word with the meaning "people named John Smith".

    Words generally have meanings, and then you check whether an object qualifies for those meanings.

    Proper names don't work like that. There's a 1:1 relationship of reference between the name and a single object. The object can change completely; what matters is that it retains the name, and that's a matter of social convention, not meaning. You don't have to fulfil any sort of semantic criteria to qualify for any proper name attached to you; the continuity of the relationship between the name and the thing itself is what matters, and it's also what's invoked when you say the name.

    It seems clear to me that types are the descriptive content of tokens. So why not include tokens under the moniker of proper names, which would designate that descriptive content?

    Because there's no descriptive content in a proper name. "Harry Potter" describes nothing - it's just the name assigned to a fictional character. I assume there are Harry Potters in real life, and they don't have to be anything like the fictional character. I could call my favourite coffee cup "Harry Potter" if I wanted to. The act of assigning a name is all that matters for proper names. That you henceforth associate the proper name with the person/thing in question and expect certain features of the person/thing to remain constant has little to do with the name itself.
  • Empty names
    Yes, but haven't I already proven that we are posting anonymously with my silly nickname?Posty McPostface

    No, you haven't. You're posting under an alias, which is different.

    All posts written by the user "Posty McPostface" are attributed to that user. No other user is called "Posty McPostface". If ALL users changed their name to "Posty McPostface", then we'd all be posting practically anonymously (of course, we'd also have to choose the same avatars, or have the board disallow avatars).