• What is faith
    Your posts are well-informed and thought-provoking, thank you.Wayfarer

    Thanks; confidence isn't my strong suit. For every post I finish, there are probably two I don't, and for every three posts I finish and post, there's probably one I finish but don't post. That might motivate me to post more... or not. Time will tell. But this cheered me up.

    I agree with this on some things. I don't think this is always true though. Just for example, health is at least part of the human good and living a good life. I think that part is obvious. What promotes good health is often not that obvious, and we rely on the medical sciences, neuroscience, biology, etc. to inform our opinions here. Isaac Newton's consumption of mercury to boost his health is probably a fine example; it wasn't obvious what an absolutely terrible idea this was, even to a genius like Newton. Other examples, like the existence of externalities in economics, or the pernicious effects of price floors and price ceilings abound. Having basic access to food is part of the human good and early price ceiling schemes, e.g. during the French Revolution, led directly to massive food scarcity, having the opposite of the intended effect.Count Timothy von Icarus

    Yeah, I agree. I think I was focussed on baby stomping here.

    Yes, there is context dependence.Count Timothy von Icarus

    Actually, after some thinking, I think I was "barking up the wrong tree".

    You were asking how one can be "wrong when making a judgement about something which has no truth value, where there is no fact in play?" (Last post I was replying to.) And that's a good question.

    A question of my own: would an emotivist agree that you could derive a fact about value from a fact about emotion? For example, if I said "boo to baby stomping," would it be a fact that "Dawnstorm feels negatively about baby stomping"? If so, there's plenty to be wrong about when you consider the path from internalised attitudes to acquired social values as instantiated in a specific situation and actualised in the decision-making process: you can be wrong about the item in question (e.g. the car), about the social value attached (e.g. I thought cars were supposed to be faster), about my attitude (e.g. I thought I wanted a fast car, but I really just want to outdo my neighbour), about my projection (e.g. I thought going really fast with a car would be fun, but it's scary), and so on. And then you can be wrong about how any of that inter-relates (e.g. I knew going really fast would be scary but I thought I'd get a kick out of being scared).

    Even apparently simple things are pretty complex if you drill down.

    As for this:

    Indeed, the focus on acts is also part of the problem. People are primarily good or free, not acts. Just as there is never motion with nothing (no thing) moving, human acts are parasitic for their existence on men. Hence, while it is sometimes useful to speak of the freedom or goodness of acts, desires, appetites, etc., I think it is better to speak of men, lives, and societies.Count Timothy von Icarus

    I'm a relativist, so yeah, I agree pretty much. Who in this thread is likely going to disagree that "baby stomping is bad"? The force of the rhetoric derives in part from the extremity of the act. The variance in reaction is fairly low. What underlies this? An absolute moral principle? An anthropological constant (we're a social species)? A social contract of some sort? And off we go into abstract land.

    But this type of rhetoric is also a good example of how morals proliferate. The target here is not the protection of babies: it's a meta-ethical stance, with the problem being that some people want there to be a right and a wrong, a good and a bad, etc. more than others. Part of this thread is ritualistic: we affirm our stances and solve little. That's not all there is, but it's certainly there. We're topicalising a well-known divide and portraying our stances. Little will change. We reiterate the moral landscape.

    This, I think, is what it would mean for "people" to matter: we stop talking and take a long, hard look at us right now. But then I would think this; I'm a relativist after all.

    In the end, I'm fine talking acts.
  • What is faith
    Do you think one has to adopt a position like eliminative materialism or epiphenomenalism in order to be doing proper objective science? Or is it allowable for consciousness and intentionality (and thus value judgements) to be part of an explanation of natural phenomena, without these being presumed to be fully reducible to "mindless mechanism?"Count Timothy von Icarus

    I actually think that eliminative materialism is of very limited use in the social sciences. Take sociology: the discipline was established by Emile Durkheim with an eye to Comtean positivism. The methodology was pretty much all about statistics (e.g. the suicide rate). And the intent was to prove that social facts exist, so as to establish the discipline in academia. Later, we have Max Weber introduce the concept of "verstehen" (via a methodology of "ideal types"). This put the knowledgeable agent on the table and would set off the interpretative branch of sociology: sociologists were very much aware that to understand action is to use their own intuition. Alfred Schütz would update Weber's approach with Husserl's phenomenology, and that is where I directed most of my attention. However, I was always aware of an unfortunate split of macro and micro sociology; either big systems (developed mostly in America by Merton and Parsons; also setting off from Weber, but in a different direction) or situational interaction. So I eventually stumbled on Anthony Giddens' theory of structuration, which attempted to unify the strands by rooting both in spacetime via input from geography. I really liked that.

    All that to say: I'm very firmly on the side of intentionality here. I'd say ignoring this isn't an option in the social sciences at the very least, though it might be useful elsewhere (not an expert).

    IDK, if I am reading this correctly, then it seems like the presupposition that "real facts don't include value" is doing the heavy lifting here. It seems like you're saying that an explanation from the medical sciences (involving value) is "fudging over the (real) facts" and is not "real science" precisely because "real facts cannot involve values in this way." Do I have that right?Count Timothy von Icarus

    It's probably due to the way I put things, but, no, I don't actually even care much about what "real science" is supposed to be. What facts need above all is a modicum of precision, and that's something that words like "bad" almost never allow. What I'm saying is that the scientific facts tell you nothing that your fussy-wussy intuition doesn't also tell you, so there's little point in appealing to the facts. It doesn't really matter how much damage a boot at a certain velocity can do. You can appeal to facts, but you gain nothing by appealing to science here.

    And medicine isn't only science; it's also applied technology. Biology itself, for example, is more about basic research. In its application it has to feed into stuff like medicine, farming, breeding... even outdated stuff like, say, phrenology. So when you present "stomping babies is bad for them" as a fact here, it's ambiguous between the precise effect on the body, the ethical environment of treatment, and so on. But if you were to resolve those ambiguities it gets harder to see the point.

    I'm not really 100% sure what I mean myself. Maybe I was saying that science is a red herring here?

    I'd just point out that sometimes it is extremely obvious that natural selection has been shaped by intentionality and goals, the most obvious cases being domestication, dog breeding, etc.— unless we want to somehow say that this is not "real natural selection" (but then what is it, supernatural selection?) This seems problematic for accounts that want to exclude consciousness from biology, unless there is an appeal to something like epiphenomenalism (which has its own plausibility issues). But I digress. I think it proves quite difficult to allow for goal-directedness and not to allow for values related to the completion or failure to complete goals.Count Timothy von Icarus

    You're addressing something here that's always been bothering me. I certainly think breeding should fall under natural selection, but I see it as problematic to incorporate it easily. For example, what little experience I have with evolutionary psychology didn't impress me too much. Douglas Adams's puddle analogy comes to mind here.

    How can one be wrong when making a judgement about something which has no truth value, where there is no fact in play? For instance, how can one "buy a bad car," if cars are never really good or bad? One can certainly say "boohoo to my past purchasing decisions," but you cannot have been wrong about a goodness that doesn't exist.Count Timothy von Icarus

    I'm honestly quite confused right now. A car that doesn't move is a bad car, but if we didn't want the car to be a car then it could be something else, which it always is - beyond the judging. I think what I'm going for is inconsistency-despite-continuity or something? If I ever figure this out and have the time (not likely today or tomorrow - depending on your timezone maybe even the day after tomorrow) I'll be back - unless someone else says it better (which has pre-empted quite a lot of posts from me).
  • What is faith
    Is medicine not a science? What about botany, zoology, or biology more generally, which have notions of health, harm, goal-directedness, function, etc. that all involve value? What about all the social sciences? Psychology, economics, criminology, political science, etc.? These often deal with values rather explicitly.Count Timothy von Icarus

    I'm a sociologist by formal training, though I never went down that path professionally and it's now a few decades in the past. I'm quite familiar with the value discussion, and the funny thing is that my personal position on this topic is that value-free science is an unreachable ideal that may nevertheless have some function when you strive for it, though you have to stay vigilant and not pat yourself on the back for being all-out unbiased (you're not). Writing this post was a little weird in that respect; I was trying to put on an emotivist hat while wondering to what degree I am one. As I said before, I'm not that familiar with emotivism.

    That's a huge topic, though, and not all of it is relevant here. The scientist should be disinterested (i.e. not take sides when different factions want different outcomes). The question of positivism (e.g. Popper vs. Habermas for the social sciences). And so on. What I was going for:

    "Stomping babies is bad for them," may be a medical fact based on the medical ethos. The associated scientific facts need to be phrased quite differently: the not yet hardened skull is more vulnerable to boots, for example. (I'm hardly an expert.) But more than that, the appeal to "facts of science" looks like an appeal to either authority or objectivity. It's also unneccessary: common-sense "knowledge" will tell you everything relevant here, and science doesn't contradict it. And the value judgement "bad" in "stomping babies is bad for them," is used to fudge over the actual facts - and this works partly because of the ethos inherent in medicine.

    In sociology, for example, you come across studies that are arranged in a way that they just echo common-sense knowledge without adding anything of value. I wish I had examples, but it's too long ago. People get away with this because, as one of my professors said, more papers are written than read these days. A lot of market research works that way. I've worked in market research, and my impression is that a lot of the clients demand suggestive phrasing (or presence in group discussions, etc.) - methodology that will render the results useless. I suspect at least some know and don't care; they're after legitimisation rather than knowledge.

    In most contexts, saying that "stomping babies is bad for them" is a scientific fact is perfectly fine. Here, I think it fudges the topic (if the topic is emotivism). When you're doing science, you need to be very clear about the facts: define your terms, provide the data, etc. When you do that for the above sentence you'll likely lose the point you're trying to make.

    Whereas, IMO, if we go in the direction of "science says the universe is meaningless and valueless" we have left science for the realm of (often quite dogmatic) philosophy, and at any rate "emotivism must be true because 'science says' goodness doesn't exist," seems to be a pretty hard case to make, no?Count Timothy von Icarus

    Eh, first there's methodological naturalism, which is useful or not, depending on the science. As for "goodness", the most central related sociological concepts would be norms and sanctions, no? (I might be forgetting something obvious.) Rival theories often won't accept each other's set-ups - and that's part of the conversation. It's, IMO, necessary, as sociological knowledge always involves man-made meanings. You can't expect the one true way, here. And that's why how we look at social facts must always be carefully contextualised.

    So for example, I might say "Stomping babies is bad," is a fact within the institution of medicine, as practised in predictable roles. So if I were interested in how the "goodness/badness of baby stomping" plays out here, I could look into that. But what I pay attention to and how I approach the matter requires some theoretical background, and that could include emotivist influence if I were so inclined. I've always felt that, in the social sciences, you shouldn't suppress your bias - you should lay it open.

    Of course, here I'm talking about a "social fact" (a topic for sociology, not medicine). The medical facts are about young bodies and what stomping does to them. The social facts are about... what? How we treat wounded bodies? How we react emotionally to the source of the wound, when we allow ourselves that luxury (not that often during an operation, I'd say)? Values frame other values; it's complex.

    So, then, what's the purpose of "stomping babies is bad"? To prevent as many babies from being hurt? To legitimise your anger? To explain your sadness? As I said above: it's a bundle. And it's a bundle you can tie up in very many different ways.

    This doesn't seem like emotivism anymore though. In this case, moral statements wouldn't just be expressions of emotion or sentiment ("boo-hoo" or "hoorah.")Count Timothy von Icarus

    Yeah, I've taken off my emotivist hat here. This is where I'd need to read up on emotivism more. What I'd want to know is how they deal with these topics:

    The emotivist thesis is that there is nothing else, no facts, to moral statements, just expressions of sentiments.Count Timothy von Icarus

    That's their way of expressing themselves. But just like, as you say, other people recognise the role of emotion, I would imagine emotivists have their own way to handle things like habits, norms, legitimisation rhetoric, etc. I expect them to unravel the threads starting with emotion, here, of course, so that everything follows from there. (Stevenson's first/second pattern analysis seems to hint at that, from my limited skimming of wikipedia so far.) When you come from sociology and develop an affinity for the hermeneutic approaches, you're kind of used to navigating and translating between theories. No two sociologists see eye to eye, in my experience. There was a running joke at the institute I studied at: Two sociologists, four opinions.

    Smoking is an interesting case because neither I, nor any of the people I know who have quit, particularly miss it (maybe some social elements of it), but perhaps some people really do enjoy it immensely.Count Timothy von Icarus

    Yeah, I've never smoked, but I've encountered plenty of smokers who knew smoking was bad for them but smoked anyway. They weren't the ones who thought they should quit, either. What role does "bad" play here, I wonder? I almost addressed this, but decided against it. My post was long and unfocussed enough as it is.

    Well, let me just start by asking, can people ever be wrong about their own choices? Or are we always infallible as to our own choices as respects what is best for us, and if we later regret our choices they are only bad choices for some "future us" but not bad choices for the "us" when we decided to make them?Count Timothy von Icarus

    I have no real way to approach this question. My intuitive response is a joke: "Can anyone ever be right about their own choices? Being wrong is easy." I don't mean it like that, but this is underscored by my intuition. I think I might think of "right" as "provisionally unproblematic" or something? Not really saying much here; just trying to uncover my bias here - unsystematically.

    If we can never be wrong about what is good for us, I don't think there can be any value in philosophy or introspection. Whatever we choose is right because we currently desire to choose it (so long as we always do only what we want).Count Timothy von Icarus

    Does emotivism say whatever we choose is right? Surely they're aware of conflicting emotions? In some ways, "right" seems like a magic spell to quiet that inner war. We want decision making to be easier than it is.
  • What is faith
    So on this view, "it is a fact of medical science that stomping babies is bad for them," being a value statement, would amount to "boohoo for baby stomping," but could not relate a fact or be based upon a fact.Count Timothy von Icarus

    Again, I know little about emotivism. I read up a little since my last post, mostly because I found it hard to believe that emotivism has no theories to deal with stuff like that. I came across, for instance, Stevenson's first/second pattern analysis, but I haven't thought this through enough to be comfortable talking about this. Instead, I'll just go with my intuition: what if Dawnstorm were an emotivist?

    First, that stomping babies is bad for them is not a scientific fact; it's probably a medical one. Science is, to some degree at least, supposed to be as value-neutral as possible, but it does need its cues as to what to do. Medicine, as a social institution, is meant to heal people, so that sets a context that sort of defines good and bad; as in: health is good. This is taken from a greater social context: you ought to act in such a way that you stay healthy. And so on. So, yes, I do think it's true that "stomping babies is bad for them" is a fact of medical science, if you follow the traces of social values.

    If I were an emotivist, I'd likely intuitively see the anchoring of social values during the primary socialisation as the fundamental process here. Beyond socialisation, though, there's a visceral, non-social reaction to being stomped, which I, as a baby, would express through crying (at least as long as I still can). It's not just one emotion; it's a bundle. And that bundle develops. It would not be the case that there's a simple one-to-one relation between any particular emotion and any one particular action. By the time we're able to act it's already far too complex for that.

    Nevertheless, emotions don't just motivate moral statements. I don't even trust moral statements to do justice to the underlying bundle of emotions that tag you into your larger social context. And I do think that there are social facts that transcend any such time-space bound emotional bundle. As an emotivist, I'd first have to be a methodological individualist: I'd have to de-emphasise the social context in favour of emotional growth. I'd need a theory of how rationality ties into this. And so on. What would remain the same, though: moral statements are surface phenomena, secondary to lived social praxis. And values are the basic motivational structure of an agent - emotions.

    One thing I'm not sure about is this: I don't think values/emotion is a one-way road. "Value <--> Emotion" rather than "Emotion --> Value". That is because both your visceral emotions and your social-belonging derived emotions are constrained by facts: about (a) social realities, (b) biological bodies, and (c) biographic actions you've taken (even if by mistake, such as "stepping on someone's foot").

    Still, I'm not sure I couldn't make it work. At the very least, I don't find that your arguments dissuade me from emotivism.

    For example:

    But you seem to be suggesting that, so long as they are being reasonable, they cannot be wrong? If 10 years later they tell me "I wish I had listened to you," is it not fair to say that I was correct in this case?Count Timothy von Icarus

    What I consider the facts here is simply that present me (I'm taking the role of the smoker) and past+present-you currently agree. But past-me probably saw this coming. What we'd need here is a theory that crosses the temporal divide: past me didn't have to suffer health problems, and present me didn't have to live a life without smoking. The different temporal positions give an obvious bias towards the present situation (which is emotionally more acute). One could say that present me has eaten well, but doesn't want to pay, for example. How do you mediate (theoretically) between the two positions? Yeah, you correctly predicted the present situation, but so did past me. It's even possible that, back then, I took this into account. Does our current agreement constitute a moral fact?

    Here I imagine some random passerby overhearing the exchange and taking this as an impetus to quit smoking. A future self made more vivid by a current example provided at an opportune moment in the biography. A story to be told to friends who notice he finally quit. This is the reality we live in. We're not unique - we identify with others, we imagine future outcomes one way or another, then what we didn't imagine happens and we imagine our past selves through this lens. But emotions are sort of fundamental to all these imaginings.

    So we make decisions to take the bad with the good (because pure good is rare), and then when it's time to "pay up" we wish we didn't have to pay. Not such a rare occurrence, and there are plenty of recipes to deal with this: askesis, the middle way... You're not going to invent the wheel. You can also just come to terms: I made my bed, so to speak.

    But what's the overall theory here - when priorities change? What's the temporal aspect of morality? I don't see this as a problem for the emotivist; but I feel you have to address this if you want to say that you are "right" in this situation.
  • What is faith
    To use the unpleasant example brought up earlier in this thread, that "being stomped" is bad for infants would appear to be about as obvious of a truth of medical science as there is. We might suppose that people have negative emotions as respects "baby stomping" in virtue of this fact.

    However, the dedicated emotivist often ends up resorting to claims like: "being stomped isn't actually bad for babies," and defending this claim (which I think most would judge to be obviously false) by appealing to the notion that all value judgements are just statements of emotion. But that's obviously question begging.
    Count Timothy von Icarus

    I'm not that familiar with emotivism, and I'm unsure if I'm one or not, but I wouldn't rule it out. So: do emotivists really make claims like that? That being stomped isn't actually bad for babies?

    It seems to me that there are plenty of other ways to deal with this that actually include emotions, and not such vague "isn't bad for" language that seems ill-defined to begin with. Sounds like speaking nonsense when cornered by unintuitive rhetoric.

    For example: I could say that it's pretty safe to assume that babies have a negative emotional response towards being stomped. That even if you don't have a negative emotional response towards stomping babies others have a negative emotional response towards you stomping babies, and you in turn have a negative emotional response to that response... etc.

    Moral discussions seem to be... difficult. And the rhetoric surrounding them is often already steeped in an implicit morality that the rhetoric then obviously can't deny:

    Is murder bad? Well, yeah. It has to be. That's the way the word "murder" is used. Morals are more complex, though, and (as Banno said) an iterative process. If we think a particular act of killing wasn't bad, we're not going to say this particular murder isn't bad; we're going to argue that it isn't murder. Then there's also the question of whether all killing that's wrong should count as murder or not. Imagine a society where "honour killings" are regulated and accepted; you've grown up in that society, but you've come to think of them as wrong. You wouldn't necessarily think of the act as murder: but if all the "allies" you can find come from outside your society and they do consider all "honour killings" murder, you might find your stance changing. It's only partly a linguistic change: your own doubts find confirmation elsewhere, and you internalise that way of thinking to some degree. But there are also nuances that you have to live with every day that might make it impossible for you to fully internalise that model. You might find you have more sympathy for someone committing an honour killing than for other forms of murder. You now have to navigate a moral path between a native and a foreign moral system. And you might find that others near you have the same problem...

    What you can do is reject the concept of murder to begin with (if you can): that is, there's nothing wrong with killing under any circumstances, therefore the concept of murder makes no sense. If you can really, truly live this, you're likely a very rare specimen, and you'll likely have internalised the moral rule in a different way: as knowledge that you won't be able to convince others and that you have to be very careful who to reveal yourself to.

    All of that is part of the iterative process that comprises the morals of a society.

    I'd also like to draw attention to the fact that what was at issue here wasn't actually just "stomping babies", it was "stomping babies for fun". So what about stomping babies out of... oh... frustration? The single mother who needs to work and can't sleep... It's not that stomping babies is now no longer wrong; it's that the expected conversation is likely to go off on any number of possible tangents. So we're going with "for fun" - for maximum expected agreement.

    But what's the source of agreement here, if not an immediate emotional response? Most likely some form of disgust or anger? (Well, my immediate emotional response is more fatigue, but I don't react to the content as much anymore as I react to a rhetoric pattern I've grown tired of. I'm always tempted by troll replies such as: "Sure it's wrong. I mean it's fun, sure, but I restrain myself; I'm a person of character after all." My actual internal response is quite a bit more graphic and gross, with the sarcasm being more acidic. It's a personal hang-up.)

    It's not actually my intention to argue for emotivism here. I'm more a social relativist: morals are always institutionalised in a society, internalised to one degree or another, with the deepest roots going back to early infancy and thus not even available for discourse. You-don't-know-it-could-be-different structures of cognition (a non-moral example would be the case structure of a language: easy to use but hard to explain unless you've looked at alternatives - most languages are nominative-accusative languages, but some are - for example - ergative-absolutive: you don't need to know the difference to speak English, but it's there, underlying it in the praxis. I have no such example for morality, but my base intuition tells me that's how it works here, too - though maybe the divisions here are biological in addition to social?)

    So what's the moral make-up of person who thinks stomping babies is fun? What other things are fun? Does that person accept the social prohibition? What is the self-image here? These would be important questions, and ones you can't really ask out of a putative context.

    In terms of "right or wrong" in the abstract, you always need to set values, I think. For example: what values do you need to set for stomping babies to be wrong? This sort theorising is actually pretty common: we don't just learn that something is wrong, we also learn why that something is wrong, so we can make ad-hoc judgments in surprising (previously unimagined) situations. There's always some sort of legitimisation structure underlying our morality: some things are tacit, some things are explict, there's a hierarchy of relevance, etc. Some justifications are just more likely to work in praxis than others. But even what sort of moral justifications you can think of to begin with is part of your moral make-up. For example, "mine" and "yours" might be a primal impulse, but "property" is a social institution.

    As I said in my first paragraph: I'm not that familiar with emotivism. It seems to me that your moral make-up is going to influence your emotions as much as your emotions influence your moral make-up. But there was a time when all you had were emotions. Your moral make-up grew later - when you stopped crying whenever things were uncomfortable. Morality as an iterative process seems two-fold - psychological and social. You can't have one without the other. I think that maybe morality is tied in with the development of identity: a lived distinction me/others.

    Changing nappies is oddly relevant here: think of it from the other perspective - toilet training. We no longer need them, and we rarely think about that. No freedom-from-nappies day. We just learned what a toilet is, and not just or even primarily in theory. We use it daily without thinking. And we don't tend to talk about it much. We certainly would bring it up if we encountered someone who, let's say, bucks the trend. There's a partitioning of time and space here that's second nature now but wasn't always. That's what morality is to me - basically toilet training (but more complex).
  • What is faith
    There's also the matter of scale. I have a reasonable expectation that my plane won’t crash (although perhaps this expectation has diminished in the U.S. under Trump?). In contrast, using faith to justify the belief that the world was created by a magic sky wizard -the literalist's deity- operates on an entirely different level. How can these two phenomena be meaningfully compared? It’s not merely that faith is a poor analogy for reasonable expectation; it's also about the magnitude of the claim being justified. The assertion that we can know the will and actions of a world-creating entity is significantly different from an empirically grounded confidence that air travel is safe. Perhaps the scale of the claim says something about why faith is a necessary concept for some.Tom Storm

    I tend to think of it very much like that, but I find there's a lot to talk about here. The line I've bolded is a good starting point:

    It's not entirely a poor analogy. I agree that "scale" is a problem here, but you can correct for that in pursuit of communication, maybe. Let me try to explain (none of this is fully thought through):

    The first major difference is this: When I get on the plane I do this knowing full well that planes can crash. I do not need re-assurance that this plane won't crash. I usually don't even think of the possibility, which I'm very much aware of and furthermore wouldn't deny in the moment, either. I have no fear of flying, and I'd be perfectly fine watching, say, an "Airplane" movie on a plane (which might not be appropriate to air on a flight). There's some underlying principle at work; part of it personality, part of it experience (if I'd survived a prior crash, psychology otherwise being equal, I might have a different mindset).

    At the core of this, there's just lived habit. Something I don't need to talk about. I'm far more worried about getting airsick (something I've experienced multiple times) than about a crash. Why? Experience, I think. It's more imaginable than an airplane crash. Reason, I think, is secondary here, and it works because we tend to experience rare things less often (though one might be unlucky).

    Now, if my psychology were different, and I were prone to worrying about crashes, I might be inclined to use calming rituals to get over it, find people talking about crashes in the lobby unsettling, etc. I might over time develop a state of knowing-despite-not-knowing, an epistemic buffer between primal worrying and operating on a daily basis - a lifestyle that includes stuff like habituated selective attention and certain modes of integrating new knowledge into this practical cognitive flow.

    Now this is limited in two ways: it's limited to a single topic (aviation), and it's personal (the rituals are for me only). And in this way we have little comparison here.

    Now I'm fairly sure I have certain more comprehensive thought-rituals like that; thought rituals that don't only comprise the topic of aviation. But it's not accessible to "discursive consciousness": it's vague and intuitive and there beneath all the daily flow of surface thoughts. And I think this sort of thing is necessary. And I wouldn't be surprised if believers pick up on that and call that "faith".

    A computing metaphor would be the OS of your consciousness. Everyone has their own personal operating system. And to that effect I actually don't think the analogy is a bad one. Everyone has their own sense of how the world works, as expressed in their daily functioning. But so far it's entirely personal.

    The question is how your personal OS tags into your social surroundings; are you faithful, deluded, seeking? What's the relationship between how your mind works, how your daily surroundings work socially, how others see you, how you see yourself?

    I'm a homegrown atheist with Roman-Catholic parents, who went to school in Austria, had compulsory (opt-out system) religious education in school, which was the only time our class was divided between Catholics and Protestants (with the odd student who didn't attend either class, either because they opted out - parental consent needed - or because their religion [or denomination in very rare cases] wasn't covered locally). Now Austria is and has been my entire life a very secular country: that means the overarching daily life does not involve religion; religion is segmented off into its own meaningful province. There's a weak default assumption that church membership implies belief, but surprises are no big thing and change little. That is: I had to explain that I'm an atheist on occasion, and that's always been fine. The funny thing is that it's always also been fine in religious contexts; I've been to confession when it was part of religious education (optional; I could have refused without parental consent) and told the priest flat out that I was just there to bide my time, and we chatted. Similarly, I underwent "confirmation" - a Catholic sacrament establishing your faith - while being open about my atheism in the preparation group. It was no big deal.

    Why am I talking about this at length here? Well, because I've grown up in that sort of environment, I'm actually more familiar with the "God-language" as presented by the local Roman-Catholic church than I am with any language that might express my inner compass better. This is a mismatch between what I can think and understand (myself) and what I can express (the other).

    The second thing of note here is that the lack of hostility towards me-as-atheist has left me with a neutral view of faith. The secularity of the society around me also ensured there's enough commonality in what counts as rational between those within a denomination and those without. (The late seventies/early eighties were also big on the ecumene - so there was already a widespread different-paths-same-goal mentality around, which meant different approaches to the rationality of it all to begin with.) I'm fairly sure that, since I didn't distance myself, a lot of the Christians (see above - confession, confirmation) might have thought of my "atheism" as a path that leads to God through doubt. Nobody was ever impatient or pushy. People from back then I considered truly "faithful" (in the religious sense) had a calm and... serenity? to them I could never match. I'm the slightly nervous, always ironic type.

    When the ideology that surrounds you is pretty familiar in its verbiage, but is decidedly not what you intuit and makes no sense, but what you intuit has little to no verbiage attached to it (in your mind), it's easy to externalise faith as what others do. However, there really is something that is missing - and that's a layer of social integration, a sense of ontological security - I'm living in an absurd world that nobody can explain to me. But at the same time, I have my thought habits I'm fine with. A gambler's mindset perhaps? That would work if I could see win-conditions...

    Basically, to the extent that me being an "unbeliever" is relevant, I'm living in a world with an ideology I don't share; I'm living without ideological validation, and without the language to explain myself. Because of my above experiences, I've never sought out sub-groups to integrate into - so now I have a lived substratum of praxis, very little ideology I feel like defending (some relativism, but not really fervently), and a generally ironic attitude towards the world.

    To some degree there might well be something like proto-faith here, who knows? The problem is I don't attach to abstracts very well. I have my favourites (relativism, phenomenology - for example), but it's more like a best-of list than something I deeply identify with. If what I have is faith-like, then it lacks an obvious target.

    If faith is "faith in..." then my "in" remains a question mark. I certainly don't have faith in airplanes; I know they can crash. And they can certainly crash with me in them. If they do? Bad luck. So what do I have faith in here, then? Luck, since I just typed "bad luck"? Maybe. But, see, there's no elaborate ideological system built around that. You just learn to live in a world where bad things can happen - one way or another.

    Take marital faith as an example: you having "faith" in your spouse implies having internalised the rules of living around you. You know what marriage is, what you should do, what you should want, what you can expect. It's not just the person; it's also the social role - the lived praxis of being married. You can make your own rules, but they'll start out as deviations from a learned default. And the faith in your spouse has both a target and a form in a lived-through social institution which you perpetuate and modify just by living "normally". And when things go wrong, you work it out.

    But your faith has a target, and it has meaning in a specific institutional context; one you participate in. Similarly, having faith in God is meaningful in a particular social context. How much you internalised is an issue here. But with no such institution to appeal to, what is the meaning of my putative faith? Where do I get it from?

    I have the personal level, same as more or less everyone, sure. But beyond that? If we're both bottles that contain liquid, I just kinda stood out in the rain, while others might have filled under the tap, or even with a funnel... Not sure how much of this makes sense; I guess I see the biggest issues being social. How much social control and legitimisation, from where, from how many sources? How many in-group/out-group conflicts do you encounter?

    So is the analogy a good one or a bad one? I feel like you can tilt it this way and that; an attempt to build a bridge, or an attempt to solidify positions (e.g. religion vs science - not a popular conflict around me). So what is faith?

    It's an interesting question, but in my daily life it's really just a word I don't use often (I did in this thread, for obvious reasons). And that means when talking on the topic I have little at stake, but it's also never homeground. So do I have faith in... something? Maybe. Then what follows from that?
  • "Underlying Reality" for Husserl
    Once again I want to raise the question of infants and psychological development.J

    I tried to respond to this when you rephrased yourself in response to me, but I didn't know how. This is fiendishly difficult to get a hold of. I'll try to get a few things in here; not sure how relevant they'll be, though.

    None of this standard phenomenological/Kantian picture can be said to obtain until a certain developmental point has been reached.J

    I want to emphasise, here, that Husserl didn't hold with the Kantian idea of a "thing in itself". The world we live in is constituted in the mind. It's probably possible to give a phenomenological account of the psychological development of a child, but this wouldn't be addressing the "underlying reality".

    During the reading of this thread, it struck me that Husserl's ego seemed very much like Descartes' cogito, so I googled what Husserl had to say on Descartes, and I found a text called "Cartesian Meditations"; apparently only published in French during Husserl's lifetime. The text I skimmed over was in German (my mother tongue).

    Basically, both Descartes and Husserl start out with radical doubt; but Husserl notes that the phenomena present themselves in a particular way no matter whether there's an underlying reality or not: the world we live in is always and forever constituted in our consciousness. This is the starting point, and it gets ever more complex from here on out. (One problem, for example, is the other: we construe them as having a consciousness of their own, much like we do.)

    What you're interested in just seems outside of the scope of phenomenological analysis, so we'd need some other frame of reference. I'm not sure what could apply, given that - to me - the singular strength of phenomenology (as it occurred in sociological theory) has always been that you don't really need to make up your mind about the underlying reality before drawing conclusions.
  • "Underlying Reality" for Husserl
    1) Is this theory meant to be a psychological description of how infants begin to constitute objects? If so, how do infants replace the shared-lifeworld aspect that seems so necessary to the description? Some equivalent of a Chomskian universal grammar?J

    This is a difficult question for me. Back then I didn't quite understand some of this stuff, and now I don't have enough memory about it. It's made even more difficult, given that the institutionalisation of psychology and academic differentiation has progressed quite a bit since Husserl's time, so even if I knew whether he talked about psychology (I vaguely think he did; or that at least someone said he did...), we'd probably have to dive into the history of what the term would have meant in academic circles back then and how that impacts now.

    In any case, I don't think infants "replace" anything under the theory. Rather, this is an ongoing process and not stable. So for example, when a new-born child cries... is it already in a shared-lifeworld-to-be? Something to be developed from that moment on? I mean, usually a crying baby is going to be comforted by physical contact quite soon. It's not like you develop something you replace later; it's that you just develop, and left to your own devices you just... die.

    It's an interesting question, though.

    2) Does Husserl mean that what we encounter in the lifeworld must be as he describes, or only that it may be, for all we know? A similar question can be posed about Kantian noumena: Do we know that noumena do not resemble phenomena at all, or is it merely the case that we can't know either way?J

    Again, take everything here with a grain of salt, since I'm out of the loop, but as far as I remember Husserl's phenomenology didn't have much to do with the thing-in-itself. As far as I remember, where Kant speaks of noumena, Husserl speaks of noesis, which results in "noemata", which are part of the stream of consciousness rather than part of the world.

    I think Husserl might have said that resemblance is a relation between phenomena, and to ask if a phenomenon "resembles" some putative thing-in-itself is a category error. Also, the "shared life-world" in (1) is a phenomenon, something that emerges from the process of noesis. At the thing-in-itself level, if we posit something like that, the world isn't yet differentiated into perceiver-perceived to begin with.

    Personally, I think "flow" is a metaphor something we can't grasp without metaphor, and as such it might not be the only applicable metaphor. You can easily imagine that objects exist as we see them, too, but that, too, would be a mataphor, and imagining an undifferentiated flow instead has the advantage of being different - so it's harder to forget that it's a metaphor for something otherwise ungraspable.

    Or differently speaking: there's one tree and there's another, and they're both alike in some ways and different in others, so we can tell them apart and also categorise them together. You cannot make the same sort of comparison to something you can't experience. You can make any number of working assumptions - ideal forms, a material level of existence, an undifferentiated flow... The differences will relate to how you view the world: what's intuitive here is different for different people. Phenomenology needn't make any working assumption, because it starts "later". But that "starting later" has to be conceptualised to ground the ideas - say with the concept of "noesis".

    Basically, I think question (2) lies outside the scope of phenomenology, and I'm not sure in what framework to treat the question. (I have a hunch I'd have to solve "the hard problem" to even begin having an approach.)
  • "Underlying Reality" for Husserl
    We also want to understand how this flow is not an entity, nor is it composed of entities.J

    I'm not an expert on Husserl; my perspective on this topic comes from sociology - in particular from Helmut Plessner, Alfred Schütz, and Berger/Luckmann, all of whom were heavily leaning on Husserl. I've never read Husserl (except in the form of quotes); but what this reminded me of is Husserl's distinction between "Leib" and "Körper" (two German words for "body"), which Plessner discussed as "being a body" (Leib) and "having a body" (Körper). As far as I remember, this is a lived duality: you attend to one or the other and relate to that relation thus creating a tension field (from which, among other things, the subject-object distinction emerges).

    While refreshing my memory, I stumbled on a pretty interesting article about this, which I'll save here for myself (and I hope it's interesting for the topic at hand):

    Being a Body and Having a Body. The Twofold Temporality of Embodied Intentionality - WEHRLE, Maren

    I find the body to be the best topic to think about this, because it's basically the locus of our perspective: the structure of our bodies (eyes facing forward, ears on the side of our heads...) determines our perspective within a broader world we're part of. But the borders between body/not body emerge through some sort of worldly process. The very concept of "entities" wouldn't make sense if that difference didn't emerge.

    At what moment does the air in your lungs become part of you? This feels like a pretty silly and inconsequential question, but if we assume "entities", we'd need to answer that, or at least figure out in what way we can't. If we don't normally even ask this sort of question, it's because our daily praxis doesn't recognise this as thematic. It's also the reason why losing a finger is somewhat more relevant than losing a hair, and so on. There's a tension field here that never pins things down enough to make full sense, but remains within a certain perimeter so that we can just experience the equivalent of a tip-of-the-tongue experience.

    Or to sum it up: if we think of the flow, we are tempted to think of it in terms of "object/subject" - but we're actually engulfed - we're part of it. Metaphorically, we're maybe ripples that fall in on themselves and disappear - but it's all water. Trying to answer this question feels to me like a cartoon coyote running on air until he looks down... I can still reason, but nothing underpins it anymore. At some point reason stops being meaningful to me.
  • What does Quine mean by Inscrutability of Reference
    Artifacts are (for the most part) not self-organizing. A bikini isn't.Count Timothy von Icarus

    I'd like to think about this for a while. When you were talking about tigers a few pages back, you suggested there were two things that were important:

    Divisibility/Unity and Self-Organisation.

    Now you're saying that a bikini isn't self-organizing. I find this obvious at first glance, but it becomes less obvious when I look at divisibility: A bikini is already divided to begin with, in a physical sense, and is only a whole against a social background. Other clothes follow this pattern: shoes, socks, gloves... the bikini stands out by not being symmetric. So we sell panties and bras separately, but we sell bikinis as a unit?

    I'm thinking it might be useful to think in terms of system-integration, here, too: while we may be self-organising in terms of being an organism, we're not self-organising in terms of society, so we're not necessarily self-organising in the subsystem that includes bikinis. But that we're self-organising as organisms is part of the way society self-organises. So a bikini is only a bikini within the context of a self-organising system (such as society) that also includes us.

    How does this lead us back to Quine's inscrutability of reference? If society self-organises, and we're agentially involved with this, but also "self-centered", and if what a bikini is emerges from that self-organising process, then what we, each of us, think that a bikini is does not necessarily exhaust what a bikini is on the higher system level, so that no two people in concrete situations will ever topicalise the totality of it, and the difference in attention/meaning attribution is one of the mechanisms that give rise to inner-system dynamics.

    In other words, reference needs to be inscrutable on the organism level, as organisms aren't made to operate on higher organisational levels.

    I'm not going to defend any of this. This was mostly an exercise in brainstorming. I'm playing around, if you will. But that's not meant as a sign of disrespect; it's how I best think through abstract topics that don't really come intuitively to me. It's a way not to reject them outright.

    While I'm just putting stuff out there:

    It would be more in line with popular trends in physics to say something like: "the universal fields are in flux cat-wise."Count Timothy von Icarus

    While I was googling terms in order to better understand this thread, I came across Karen Barad's agential realism. Sounds like a variation on this, maybe? Basically, if I understood this correctly, the relata in a relation don't pre-exist, but emerge from an "intra-action" of... not sure what.
  • What does Quine mean by Inscrutability of Reference
    I've never heard of permissivists before this thread, and I'm not yet done with reading Count Timothy von Icarus' link about "objects", but it turns out that if I had to choose between those options, that's what I would gravitate towards, though I'm not sure how far I'd go. Here's a thought experiment I had along those lines some years ago:

    Imagine a butterfly on a flower. Draw an imaginary globe such that part of the butterfly (and the flower and whatever else) is inside the globe and part outside. Watch the butterfly fly away. Now figure out some maths that allows you to run an exact simulation of the universe, except that you have to follow one rule: what's inside the globe will stay together. You can break time and space if you have to. If this is possible you have a universe that's materially the same as the one we live in, except it's also entirely incomprehensible. That does sound pretty permissivist, doesn't it? The fun thing is this actually helps me make sense of the world (intuitively; I can't explain how).

    ***

    As for those semantic triangles:

    My thoughts tend towards the idea that what we have in our minds is "knowledge about the world", which is a web of concepts, a world view. Seeing, for example, a tree activates the tree node. Hearing the word "tree" (or reading it) also activates the tree node. But hearing the word "tree" also activates the word-node, while seeing a tree only activates the tree node. And when we add new information, there are two possibilities: we construct word-first concepts or we construct experience-first concepts. All concepts are ultimately experience-first concepts ("ultimately" here simply meaning that at one time there were no words). If you're born into a language community, though, when faced with a "new" experience, you'll likely at first try to express it with the words at hand. Maybe there's a so far ill-understood word that applies? If not, we can always ask a what's-that-called question. If presented with a "new" word, we can ask a what-is-a-[word] question, to which the answer will either consist of other words or experiences (a picture, a demonstration etc.).
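
    To make that picture a bit more concrete, here's a tiny toy sketch in Python (entirely my own illustration, with made-up node names; not a claim about any actual cognitive model) of the idea that seeing a tree activates only the concept node, while hearing or reading the word "tree" activates the word node as well:

    from collections import defaultdict

    class ConceptWeb:
        """A toy 'web of concepts': nodes linked by association."""
        def __init__(self):
            self.links = defaultdict(set)

        def link(self, a, b):
            # associations run both ways
            self.links[a].add(b)
            self.links[b].add(a)

        def activate(self, node):
            # activating a node also activates its immediate associations
            return {node} | self.links[node]

    web = ConceptWeb()
    web.link('word:"tree"', "concept:TREE")         # the sign body
    web.link("percept:tree-shape", "concept:TREE")  # the experience

    print(web.activate("percept:tree-shape"))   # percept plus concept node
    print(web.activate('word:"tree"'))          # word node plus concept node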

    I've always thought of the Saussurean model as zeroing in on the thought/sign-body (signified-signifier) angle and ignoring the object, not because it's not important, but because it's not part of the discipline of linguistics. Under the structuralist model, we look at the nodes. Take away one node, and we have to distribute the content among the neighbouring ones. It's a valid if limited way of looking at language.

    Now, if you want to bring back the object into the discussion under the Saussurean model, the only real opportunity is to treat the object as an alternate sign (which it isn't if you expand linguistics to include pragmatics - Austin, Grice et al.). And that's how you basically get Derrida. There's no ultimate signifier, it's all differance, and we cling to our binary oppositions so we don't get carried away by the current. (I think Derrida still has something to say, but I'm losing that sense with people like Lacan...).

    Quine's inscrutable reference as illustrated by gavagai is fairly intuitive to me. Makes sense. People don't need to know what they're talking about; they need to get things done. So:

    Others, perhaps you and I and maybe Dawnstorm, think that there may be multiple ways to divvy up stuff, each of them capable of being coherent if not complete.Banno

    Yes, that's pretty much me, too.
  • What does Quine mean by Inscrutability of Reference
    Well, "cultural" would tend to imply a diffuse, collective project, right? But surely a man stranded on a desert island can come to recognize new species of flora and fauna there, and abstract their properties from concrete particulars, or even come to name them, all in isolation.Count Timothy von Icarus

    Sure. New words crop up all the time. Someone (or a group of people) would have named the computer mouse "mouse", for example. I maybe wouldn't call it a "project", though. It's less directed, more just a process of people living together - an iterative process to be precise. When you use a word you both reaffirm it and change it ever so slightly.

    I'll pick out a line from your response to Apustimelogist, because it struck me as interesting:

    However, words generally try to focus on the actual, not the potential.Count Timothy von Icarus

    I have a hard time formulating my thoughts. On the one hand, it's clear that we can only name what's there (or what we thought of, see science fiction/fantasy for example, but that doesn't impact your point I think). But on the other hand, a word needs to be general enough to accommodate the unforeseen, or we'd have far more neologisms than we actually have. That is: a certain openness must be baked into language for it to be useful.

    Take a look at Apustimelogist's latest post about evolution. That's basically the old paradox: if you remove a grain of sand from a heap and keep going, when does the heap stop being a heap? In other words, when do you need a new word? Chesterton, in your quote, doesn't seem to like considering grains of sand in a heap, if that makes sense.
  • What does Quine mean by Inscrutability of Reference
    are cats and dogs best thought of as "cultural abstractions?" Or are they just abstractions of a certain type of organism.Count Timothy von Icarus

    What work does the "just" do in this sentence? They're cultural abstractions of a certain type of organism. If you're living in a society, its culture (or subculture) will influence how you abstract. (It will also influence how those organisms will act, which is another, more indirect, source of influence on how you abstract.) They're not abstractions of institutional facts, but the abstractions themselves are institutional facts. (That's something that's often left unacknowledged in current discourse on gender, for example - where we're talking about organisms.)
  • What does Quine mean by Inscrutability of Reference
    The emphasis on "sign" is problematic, in that it supposes that the main purpose, or fundamental element, in language is the noun.Banno

    Syntax as pattern, semantics as what we do with the pattern?Banno

    Hmm... I'm more familiar with linguistics than philosophy, but I'd say both syntax and semantics are patterns and how we use them. I think the actual real-life interpretation can't be completed until we add the third level of analysis: pragmatics. That's the huge contribution of mid-twentieth-century language philosophy: notably Austin, Searle and Grice.

    I'll get to my take on Quine through all this; I've never read him, only about him, so there's that.

    In linguistics, syntax and semantics are different ways words relate to other words. In syntax, we look at how words work together to make a sentence, regardless of what they say. Meanwhile, semantics is about what the words used typically mean ("lexical semantics"): which words can you replace in this or that slot? Note that it's not about sign bodies. Ambiguity can be either semantic or syntactic. The textbook example is:

    We saw her duck.

    Syntax:
    a) We [personal pronoun, first person plural] saw [verb, past tense, indicative] her [possessive pronoun] duck [noun].
    b) We [personal pronoun, first person plural] saw [verb, past tense, indicative] her [personal pronoun, accusative case] duck [verb, bare infinitive].
    c) We [personal pronoun, first person plural] saw [verb, present tense, indicative] her [possessive pronoun] duck [noun].

    Lexical Semantics:
    Two different words with the same sign body:
    "to see" vs. "to saw". And "duck (n.)" vs. "to duck (v.)"

    Syntax can change the meaning of a sentence, without touching lexical semantics:

    1. The cat sat on the mat.
    2. The cats sat on the mat.

    The suffix -s indicates plural. Thus "cat" evokes one cat, and "cats" evokes more than one cat. That's a difference in meaning, but it's not expressed through a different word choice ("lexical semantics"); it's expressed through syntax (the plural suffix "-s"). You'll probably see how this is one pivot point for different theories to conceptualise the study of meaning. (Not all theories go this route.)

    So I have one major problem with understanding this quote of yours:

    The emphasis on "sign" is problematic, in that it supposes that the main purpose, or fundamental element, in language is the noun.Banno

    It's a problem with the lexical semantics of the word "noun": I do not know what you're referring to. I suspect that it has something to do with "nouns are words for things" and reification, but I can't construct a coherent meaning.

    In lexical semantics we're basically creating a dictionary, a list of words we must know so we can use them. That is, they're all, at this stage, decontextualised. Reference is not a reference to things, on this level, but a reference to abstractions: content words refer, function words (such as "the") don't. [Again, this is controversial.]

    For example, if I modify 1. above to read:

    3. The dog sat on the mat.

    then I haven't changed the syntax at all, but I've certainly introduced a new word. Since I just mention the sentence as an example, and I don't actually say anything about animals and mats, I'm not referring to a real-life state of affairs. I am, though, referring to certain common cultural abstractions: "cat", "dog".

    If I were referring to a real life situation but couldn't quite remember who sat on the mat, I could say:

    4. The mammal sat on the mat.
    or
    5. The furry animal sat on the mat.

    Or any other combination.

    And if I quite clearly remembered the cat, I could say:

    6. The feline sat on the mat.

    4./6. are purely semantic changes (though "feline" is morphologically different from "cat", being derived from an adjective, but we're not talking morphology...), and 5. adds a syntactic change on top of the semantic ones (the extra adjective).

    When you want to know how people refer to things using words, you're not using that model, though it might be part of your methodology if you so choose. You also need to know what people do with words.
    For example, you'd know that "What circumstances do you have in mind?" is not among the expected reactions to "Could you open that window over there?" even though the inventory-level interpretation would allow for the response. Language occurs in context.

    So, on to the "gavagai" example:

    The anthropologist would have two problems here:

    1. the lexical level: Am I making the same abstractions as the native?
    and
    2. the situational level: When I'm pointing towards the rabbit, am I paying attention to the same thing that the native is paying attention to when he sees me pointing?

    And I think what Quine is trying to illustrate with that example is that we can't ever answer either of the two questions with certainty, because any clarification attempt runs into the same problem.

    Not sure where Quine goes from there, but I think that real-life interactions lead to satisfaction among the participants, and when everyone's satisfied (actually, when everyone assumes of each other that they're satisfied while being themselves satisfied) people make working assumptions about what the words mean, which they maintain until they have reason to modify them. And it's this sort of process, repeated over and over again by lots of people, that lets people approach, asymptotically, some sort of ideal abstraction: people are constantly bringing into being and modifying what they assume is already there - as a concerted effort. So, yes, I'm a constructivist on that matter.

    As for the triangles: I like the first one @Count Timothy von Icarus posted best, as the dotted line at the bottom makes sure to emphasise that the relation between signifier and signified is an imputed one. And I also like that the "thought" sits on top. I think the source is Ogden/Richards' The Meaning of Meaning, but I'd have to check to make sure [it doesn't say]. I like that, because I tend to think of thought as a process: not one thought, one clear-cut piece of mental content, but a stream of consciousness, classified and edited by analysis, so we can think about that.
  • What are the top 5 heavy metal albums of all time?
    It's just a convenient label that I made up, though I'm sure other people made it up before me. It's hard to be original. By "Hippe Rock" I just mean bands that sound like Jefferson Airplane, The Mamas and the Papas, Pentangle, etc. Perhaps Folk Rock or Psychedelic Rock might be a more appropriate terms.Arcane Sandwich

    Ah, gotcha. It's all fluid anyway. And I can't say I know Black Sabbath well enough to have much of a view on their sound. A lot would have come out of '60s psychedelia. That Deep Purple album that has April on it and the Hieronymus Bosch cover is deeply rooted in it. Led Zeppelin is basically blues and folk with more guitars, at least early on. And so on. This makes genre very hard to gauge.

    For example, some tracks on Bowie's The Man Who Sold the World feel not that much "softer" than the genre stuff at the time (try: Width of a Circle, Saviour Machine, She Shook Me Cold). A lot of it feels like evolved blues.

    It's interesting to see how things evolve, sometimes even on one record. Try Alice Cooper's Easy Action, where you get songs like "Shoe Salesman" along with "Still no Air" (which is closer to what he would become).

    I also never felt that much of a difference between Born to Be Wild and Smoke on the Water, for example. I can probably hear the differences when listening for them, but they don't matter in my reception.

    My favourite era in music would probably be around 1967 - 1973. A lot was going on, and little was settled.

    No one had mentioned Uriah Heep until yet, great band. Blue Öyster Cult was mentioned but we didn't dwell too much on it, I don't know why. Awesome band.Arcane Sandwich

    Hm, the thing is, Uriah Heep, to me, doesn't feel like an album band. I don't have stand-out albums for them. I generally love a few songs on every album, and usually there are no bad songs (Gypsy, Bird of Prey, July Morning, Return to Fantasy...). Maybe it's just hard to choose an album, because of that? Or maybe it's just me who feels that way. Heep came up mostly with "The Magician's Birthday", which surprised me, to be honest. (It's an album I've heard but don't own; I've looked at the track list and can only remember Sweet Lorraine.)

    With Blue Oyster Cult, I tend to gravitate towards albums: I like A Fire of Unknown Origin and Imaginos, for example. (There are plenty I don't know.)
  • What are the top 5 heavy metal albums of all time?
    I would say that both Coven and Sabbath sound more like Hippie Rock than what we usually think that metal sounds like. In that sense, I would say that Motörhead sounds more metal than both of those bands.Arcane Sandwich

    I'm not that knowledgeable about genre, but "hippie rock"? This is just a random comment triggered by this paragraph, but I just had to remember that "Motorhead" (the song) was first a Hawkwind song. I've heard that Lemmy's sound with Hawkwind came from playing the bass as if it were a guitar. Not sure where, and not sure if it's true. You hear a lot of things. It's just that... if Hawkwind isn't "hippie rock", I'm not sure what you're talking about.

    Out of curiosity, I've looked over a few top lists online to see if I even know enough metal albums. Turns out, I know mostly the classics/progenitors (Black Sabbath, Led Zeppelin, Uriah Heep, Blue Oyster Cult). I sort of feel like I can't make much of a contribution. If I like a metal album, it's probably not very metal - heh. (My favourite Black Sabbath album, for example, is Sabotage. That came up on maybe one list; it's mostly Paranoid, Black Sabbath or Master of Reality, and I don't know the latter two.)

    It's an interesting thread to read. When I have the time, I'll listen through some tracks. (Thanks for Afroman; he's brilliant.)
  • The Philosophy of Alignment, Using D&D as an Example
    Well, in terms of D&D, this is mostly a rule-of-thumb to be interpreted in any way practical by the Dungeon Master. In terms of genre history (F/SF), I'd say it's a marriage of Tolkien and Moorcock, but the two overarching morality systems are not that compatible. In terms of gameplay, I'd say the alignment system is primarily social - how you slot into society. So for example, to be a paladin you need to be lawful good; but I've always found this to be a requirement towards effort rather than personality: you should strive to be lawful good, no matter how difficult this would be for you psychologically. That makes a difference. But there is still a psychological component to this: it's not how other people expect you to behave, but how you think you should behave, and there's overlap here.

    I've got a degree in sociology, and what the D&D alignment system has most reminded me of is a typology developed by Emile Durkheim in his book Le Suicide. Durkheim was trying to establish sociology as an academic discipline at the time, and he was relying on Comtean positivism. He chose the topic of suicide because it's a very personal topic and at the time was considered the domain of psychology. If he could show that sociology has something to say about the topic, he could clearly demarcate what sociology, as a positivistic science, has to say about society.

    He examined suicide rates of different countries and came up with four types of suicide, three of which he considered relevant (and one of which he only named in a footnote). These are: egoistic/altruistic and fatalistic/anomic. These are two pairs, and the relevant dimensions are:

    - integration into society
    - regulation by society

    I've always thought that integration (egoism/altruism) corresponds nicely to good/evil, and that regulation (fatalism/anomie) corresponds neatly to law/chaos. It's certainly not the only way to look at it, but it just feels like it could fit.
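
    Just to make that mapping concrete, here's a toy sketch in Python. To be clear: this is purely my own illustration, not anything Durkheim or the D&D rules actually say, and the "low/mid/high" levels are invented for the example.

    # Toy sketch only: Durkheim's two dimensions (integration, regulation) read as
    # the two D&D alignment axes. The "low/mid/high" levels are my own invention.
    def alignment(integration: str, regulation: str) -> str:
        good_evil = {"low": "evil", "mid": "neutral", "high": "good"}[integration]
        law_chaos = {"low": "chaotic", "mid": "neutral", "high": "lawful"}[regulation]
        if good_evil == "neutral" and law_chaos == "neutral":
            return "true neutral"
        return f"{law_chaos} {good_evil}"

    print(alignment("high", "high"))  # lawful good
    print(alignment("low", "mid"))    # neutral evil
    print(alignment("mid", "low"))    # chaotic neutral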

    Note that under this view a lot of people would be labelled "evil" whom we don't particularly consider evil: it's more a measure of outsider status. And this might not be the intention. So for example, someone who hasn't internalised any of the local values, and on the surface keeps to the "laws of the land", would be lawful evil, even if he showed basic human traits such as compassion to people in need.

    Another problem is that alignment would be relative to any particular group: a member of the mafia could be lawful good when the reference group is the Family, but neutral evil when the reference group is wider society. Your social integration/regulation spreads out in concentric circles, so to speak, and different allegiances can create inner conflict. So it's not clear at all whether the social model is what we're after (that depends on what we need the model for, for starters).

    No conclusion really, just thoughts.
  • Yukio Mishima
    Sounds like "mono no aware" (loosely, the surprising poignancy of things), which includes an in-the-moment appreciation of beauty, sort of like being lucky to be there in the moment? An example would be the art of fixing bowls with an inlay of gold or silver to highlight where they were once broken (rather than hiding it as well as we can, over here in the West). I'm hardly an expert in Japanese culture, but I'm certainly not used to having this concept accompanied by such violent language. I'd guess it's an expression of passion?
  • How do you know the Earth is round?
    What's your preferred method of guaranteeing you're travelling the same direction?flannel jesus

    Try lots of times, and if you never find the edge of the world... (It's not practical anyway, since you need to fly. This means you'd need to get a pilot's licence, and then you'd need to trust your instruments...)
  • How do you know the Earth is round?
    Travel in one direction until you end up where you started. (Unless you think the flat Earth has Pacmanesque warp points?)
  • What are you listening to right now?
    I'm in this kind of mood:





  • p and "I think p"
    Moreover, if "I think" was required for self consciousness, it would be odd, right? Because some animals are definitely aware of themselves but don't have language.fdrake

    So what's the relationship between thought and language? I've often found myself confused on that topic when reading philosophy, as if there's a basic assumption of thought being inherently linguistic? I can't find a good approach to this thread because of this confusion.

    To my mind, there's this "stream of consciousness". When we think of "thought", I think we abstract from this stream of consciousness and structure and order it in some way. And language helps with this because sentences are artifacts that spring from this and have a fixed form, as opposed to the flow. But language (as in sentence-forming) isn't the only thing that's fixed in a way that helps us structure our stream of consciousness, as is apparent to me when I build a lego model or solve a sudoku (as no words accompany this kind of activity; my stream of consciousness is "silent").

    To me, language is an extra that may or may not accompany thought - this is my experience. But to "pin down" a thought I need language. Language fixes a thought with a sentence, and then you can think the sentence and think more complex thoughts, because you can store the bundle (by repetition) while bypassing the sentence. Maybe. So some thoughts we have may well be impossible without language, while still not being associated with a particular sentence.

    But with self consciousness being rather basic, I have no problem with ascribing non-linguistic thought to animals without language. But I'm unsure if we disagree about anything, or if I just have a broader intuitive concept of what makes a thought.

    I get even more of a headache when I remember that "p" in "I think p" likely stands for "proposition"... I've left that aside for now, deliberately. Because, when I make a post like this, I'm always insecure about my sentences representing my thoughts (or my thoughts being stable to begin with).
  • The case against suicide
    To make a case against it you'd have to engage with why living would be preferable when it's not a requirement to be alive.Darkneos

    I honestly don't understand what you're after, though. "Preferable"? So I consider suicide: (a) Do I prefer to continue living, or (b) do I prefer to die? That's a choice. "Requirement"? Someone or something requires me to live. Who? What? How does that impact the choice I'm about to make (as soon as I stop dithering)? Or would you like some convincing philosophical position that makes the choice moot?

    The two poles aren't equal, here. It's not a choice between two equally attainable options, where you can also just walk away. Vanilla or chocolate ice cream? Meh, I want strawberry. Maybe next time. If one wonders whether one wants to die or not, one is necessarily alive. You don't need to make a decision to go on living: that's the default state. When I was suicidal, I was constantly dithering until I was no longer suicidal. I never made a choice, so I still live. If I'd made the choice to go on living, that would, presumably, have changed the way I went on with my life.

    In real-life situations, rather than being between life and death, the choice is usually between taking different sorts of action: there are quite a few ways to go out, and there are quite a few ways to go on. A lot of the time, people may have decided to kill themselves, but they don't go through with it because they can't find a good method (success rate too low, too painful, leaves too much of a mess for others to clean up...). Some people might kill themselves because there's an easy method available (e.g. the gun in Daddy's locker), and because the way forward has no visible path. People don't pick between life and death in a cosmological slot machine. They decide to act: one way or another. (Or, as in my case, make no choice at all.) It's a rare philosophical suicide who chooses between life and death on some underlying requirement.

    That doesn't mean that there's no discussion here; it just means that, because over the course of my life I've read a lot about suicide for personal reasons, I tend to have my head filled with the practicals. So what could a requirement for life even be in principle? The way I see it, living things live and eventually die. Any choice occurs during that stretch of time. "To live" is thus not a choice. The child that wasn't born doesn't get to choose life. The child that does get born doesn't get a say either. So the requirement must somehow be ex post facto: it's a requirement for the living to continue living. And they do anyway: until they die. So it's not so much a requirement to continue living (which is automatic) as a proscription: don't take actions that shorten your life. But then we're not quite at suicide yet. See, that can apply to any risk-taking behaviour, too: don't smoke, don't be a fire fighter etc. So maybe it's "Don't set death as your goal"?

    But if it's about goal setting, what do I make of this line from your OP:

    Desire for pleasures only applies if you are alive, if you die there is no need for any of that. Same with love, friendship, food, money, etc.Darkneos

    Pleasure and Death are alternative goals you can set. As you say, they're mutually exclusive. What you're saying sounds to me like "Given that I'm dead, why should I set as a goal any of those things that can no longer matter to me?" But this makes no sort of sense to me: first, you can't set any goals once you're dead. Second, once you're dead that-which-matters-to-you is n/a. You're gone. It's a category error. It's not that things no longer matter to you; it's that mattering has ceased.

    This is a long and maybe pointless post, but I'm having trouble pinning down a perspective from which it makes sense to tackle your question. I hope you understand my troubles; otherwise we're bound to talk past each other.

    (Besides this, there's a secondary question I have: what if there's a requirement for life, but I don't like that requirement and kill myself anyway? But that's a different post.)
  • The case against suicide
    I'm going to question what a "reason for living" even is to begin with. I was suicidal from, roughly, 12 years of age to... maybe 16 years of age, and I was quite vulnerable to a relapse for at least a decade more, I'd say. It's hard to tell. I'm over 50 now, and memory isn't... reliable?

    I've never been looking for a "reason to live," though. What I was looking for was... determination. Either way: determination to get myself in order, or determination to end it. I think if I'd found determination, I wouldn't be here today. Being a wimp saved my life, for whatever that's worth. I grew out of the suicidal mindset, but the language stayed with me. I still think, every now and then, I should just end it. But I've lived through wanting to be dead, so when it comes up now (I don't say this out loud to anyone), I'm quite confident that I don't mean it. Wanting to die just feels different.

    Something I've often wondered, though, is this: what if I'd really found "determination"? What would I have done? Would I have killed myself, or would I have gotten my act together? It's possible, for example, that if I had been the sort of person who could reach the determination to kill myself, I might also have been a person who didn't want to kill himself. I'm quite content to never find out, because quite frankly I don't want to go through something like that ever again.

    I don't have a "reason to live", though. And I don't feel like I need one. I find that life is... naturally persistent. I've been living all my life, and I'll be living until one day I won't be living anymore, which is a stretch of time only available as an abstract projection - I may call it death, but since it's not part of my life it's not a state I'll ever have to contend with. Dying though... Dying is part of life, and a lot of the ways to go are unpleasant. Unless you die really quickly, or just drift off while asleep, you'll have to contend with dying. Dying is far more frightening than being dead, to be honest.

    So I just muddle through from day to day, enjoy what I can, and take on the rest as it comes. Life is value-neutral, though it acquires secondary value - as a perceived binary switch - through the balance of things enjoyable and not. You can switch it all off, but if you do you're dead, and the question of whether it was worth it or not won't apply anymore. While I'm here, I might as well make the best of it, no? Won't always succeed, but, well... that's life. Because I used to be suicidal, and because the language never really left me, though, I have to stay vigilant. You see, a good internal "life sucks" can be quite cathartic, but say it just once too often, and it becomes this... habit, and it takes over the way you think. That's quite frightening. From someone who's been through it: a life spent brooding about wanting to die is far more scary than death can ever be. It's a state of mind I don't ever want back.

    But at the same time, all this talk about "love", or "life is good"... it all feels hollow and unreal to this day. It's ineffective. At the same time, though, some of it is demonstrably true.

    As long as you think only of yourself, you will keep coming back to the same miserable thoughts again and again.unenlightened

    Oh, yes, have I ever been through this. Around ten(?) years ago, I remember saying that not much worked when I was in deep, but what ultimately helped me was "doing things and watching people". That's how I phrased it, and it got a laugh out of someone who I think might have been a suicidal teen. It's really simple. In theory, that is. Your worldview's quite a prison; tailored to keep you in.

    So if people ask for a reason to live, what is it they ask for? A surefire plan to go through life without suffering? A teleological end so that your life will have had meaning once it's gone? A pot of gold at the end of a rainbow you can chase even if you know it's not there?

    To me, looking for a reason to live sounds like a trap that keeps you brooding. Life is value-neutral. Without it, you have nothing - which is sometimes good and sometimes bad, and when it's gone, it's neither good nor bad, because value has gone out with it. (Er... yes, we have social effects that outlast us and cast tendrils back in time to influence what we do while we can still do things, but my post's too long as it is.)

    So, yeah, what helped to get back into the groove was "doing things and watching people", as a younger, wiser me has put it. Life won't necessarily get better, but the bad things get easier to bear, and the good things get easier to enjoy. The latter I found especially valuable.

    Not easy, though. Not easy at all. A song that gets it, but promises too much:

  • Critical thinking and Creativity: Reading and Writing
    I don't know that I'm caught up in a stigmatisation of telling. Or that I agree with a default of showing.Amity

    This I find difficult to talk about. First, I did quote what you literally said, but the "you" in the line was supposed to be the generalised you (like "one says"). It's so difficult, because the phrase means different things to different people, and it's not even always clear how the rhetoric relates to the praxis of writing.

    There's the rhetoric with its personal impact and its social impact, and then there's the personal meaning of the phrase, which is part of the personal impact but not all of it; there's what the writer actually does, which again relates to both the personal meaning and the personal impact of the phrase, but the show-don't-tell part of writing isn't a thing on its own. Whether you're "showing" or "telling" in a particular section of text depends on how you interpret those terms (part of the personal meaning of the phrase). This then means that the personal meaning of the phrase goes into the praxis potentially twice, once as a generative rule and once as a corrective rule. And it's not clear that the generative rule and corrective rule are the same, even though the phrase of origin is definitely the same.

    Then, beyond that, none of these rules are meant to be absolute. Nobody says that. They're meant to be rules of thumb. So figuring out whether an author who favours "show don't tell," in his discourse about writing also favours it while writing isn't easy - you first have to figure out what the line means to the writer, and then you have to figure out how many exceptions are too many.

    None of that would matter much. What really matters in the end is the text. But then there's the social level: no matter what the rule means to any specific writer, the phrasings are socially "out there". "Show, don't tell," is a phrase you can google. And the discussions around the phrase cover various predictable meanings. And the time spent on figuring out what this vague line means could be spent writing and developing an intuition for what to do (though some people find that hard to do without guidance and thus seek out rules...)

    Now once the phrase is out in the open, people who don't yet have an understanding of the line will encounter and hear it, and what they hear is a sentence of "do this, don't do that". So when they approach the problem of what to do when writing, they do so with that particular topic framed as one thing to do and one thing not to do. They'll eventually figure out that some things you do have to tell (i.e. the thing you should not do according to the phrasing), but by that time, showing is already the default. You're usually showing, but sometimes you have to tell. However, that's a judgement that doesn't fit all styles equally. To top it off, some people are natural showers (their "native style" tends towards that), but they might still worry they tell too much.

    So:

    I could have phrased it better and I could have expanded...
    I was trying to say that both were needed.
    Amity

    Yes, but there's a way to talk about this we all participate in. Me, too. I sort-of vaguely half reject the rule, but that's also participating in the lingo. When I reject the rule, it's no clearer what it is I reject than what it is that others like about it. One thing you should know about me is that I have a chip on my shoulder when it comes to these writing rules. That doesn't mean I disagree with everything proponents of the rule say, or that I think you can't write well while keeping that rule in mind, or... or... or... If I go online to talk about writing, I'll always go into rule-blaster mode - and it'll never quite come across how I want it to (if I even know how I want to come across). I've built up a lot of frustration that way, and that's why I've been bowing out of writing forums.

    A message-board experience in creative writing sounds like my idea of hell. I guess some forums are more helpful than others.Amity

    Oh, it was lots of fun. I'm more of a short story writer, but I did finish a very rough draft of a novel, which I doubt I'd have finished on my own. I had some excellent feedback.

    Hmm. I'll have to take your word for that.Amity

    Nah, don't take my word for it. That's precisely the kind of nonsense I catch myself saying when I go into rant mode. Here's again what I wrote: "it's just that people suddenly started putting the same few stock movements in place of the same few stock emotions." This is mostly based on a subjective impression by a biased mind, and it's now all around a decade ago, so on top of that it's a memory. First, I'd edit out the "suddenly". I'm fairly certain nothing about it was sudden. Second, the body-movement/stock-emotions part is more of an excerpt example (to be sure, I could probably find examples, but that doesn't say anything). See, when talking about why I don't like those rules, I find myself doing the same sort of thing I don't like about the rhetoric that surrounds them. I'm vague, I'm inaccurate, I make mistakes (I don't think I made one in this post, so no example for the time being), and so on. I demonstrably do know a lot about writing, but I'm hardly the only one, and on a message board I tend not to be as careful as I should be. I end up saying stuff that I find embarrassing (like that line, for example). So, no, don't take my word for it. Never take my word for it.

    So, a quick way to connect and evoke. A short-hand without the need for detailed explanations.
    Handy, especially when words are limited as in a micro/mini story.

    The repetitive use of 'Turned his/her head' isn't the same kind of short-cut. It's just unimaginative.
    Amity

    To be sure, I called "putting the same few body movements in place of the same few stock emotions" a cultural shorthand. That's what we're comparing to "turned his/her head". The head-turning I'd call just a bad habit. It's between me and my writing. But if you see many people make the same type of edits for a variety of texts, when different texts would need different approaches, that's a different problem. Do they edit their own texts the same way? Is this sort of behaviour triggered by the message-board environment? I don't know any of that, so I can't classify it on a personal level. I don't know if it's even a habit, and if so, whether it's a writing habit, an editing habit, a critiquing habit, all of it? Whatever it is on the personal level, it's a cultural shorthand on the social level.

    So the differences: Mine: personal level, about writing. "Theirs": social level, about critiquing (and sometimes editing). And in terms of judgement: Mine: flag for a re-write. "Theirs": Do it like this to improve your text.

    In the process of laying out this difference, I've noticed another aspect of my line above that's nonsense: "same body movements for the same stock emotions" is not only judgemental, it unintentionally judges the writing I meant to defend (the "stock emotions" are what occur in the original writing, as opposed to the "body movements", which occur in the edit). Really, it's good not to trust me too much.

    Goodness. That is quite an obstacle for anyone, never mind a creative writer. I can't imagine how difficult that must be. Having no inner eye means not being able to visualise. This is key to imagination and perhaps links to empathy?Amity

    It's not that bad. In fact, for most of my life, I never noticed that I didn't have an inner eye. I thought when people talked about that it was more of a metaphor than it actually seems to be. I can visualise to a minimal degree: if I close my eyes and concentrate very hard I can create a micro-second flash of an image. Research about aphantasia indicates that the "inner eye" can be trained. It's not particularly difficult to engage in creative writing with aphantasia. In descriptions, I tend to focus on a few key properties when writing; extended descriptions in fiction I read tend to bore me if they exceed my capacity for detail-retention. I sort of space out, then. If I want to see stunning scenery, for example, narrative is never going to cut it for me; I prefer the visual arts - where I actually have something to look at. I used to just put it down to taste - which it still might be, who knows?

    Why did you stop creative writing? Don't you miss it? Have you considered taking part in TPF's Literary Activity - either as a writer or reader, both?Amity

    I stopped creative writing when I got a job I didn't particularly like. I just felt too drained to actually write. I don't particularly miss it. I figure I'll pick it up again when I retire. I've still occasionally generated story ideas, for example. I just don't feel like actually writing. Even when I was writing, I usually didn't share what I was writing (and what I shared on writing forums was usually written specifically for community activities). I've considered taking part in the Literary Activity here, but I think participation would overwhelm me - too much time and energy (I mean, just look at the size of this post, and it's not even about a particular piece of writing). Also, one thing I've noticed is that I don't like reading fiction on a computer screen. No problem with academic articles, blog posts, forum posts etc. No problem with poetry. But fiction? For some reason it doesn't quite work for me, on a screen. Weird.

    I read that Chekhov is the culprit who inspired the concept of 'Show, don't tell'.
    "Don't tell me the moon is shining; show me the glint of light on broken glass."
    Amity

    I've heard that. It's such a beautiful way to put it. It's so very much like Chekhov that the reflecting glass is broken. I bought a book of his short stories which I really enjoyed. A lot of those rules go back to something authors said. I find those examples really interesting in that sort of context.

    Maybe "show don't tell" is more like "tell the effects, not the fact." For example, instead of saying it was a dark and stormy night you describe indistinct shadowy movements, the trees swaying, rain pounding on the conservatory roof, and a door being blown open.Jamal

    Yes, and all that description takes time, which means you'll get into the meat of the story later. Or maybe the style's more exposition-heavy (e.g. Marquez)? Can you lead in with that sentence, if you still give all those descriptions afterwards? On message boards, blogs, etc. these "rules" tend to mean sort-of-but-not-quite the same thing. Nearly everything you can dislike has a name:

    Exposition? Falls under "Show, don't tell," for sure, but more specifically "info-dump". Conveying information through unnatural dialogue? Google "As you know, Bob" (the Bob is optional, but if a name's there that's usually the one). And so on... Actually, the admonition against As-you-know-Bob dialogue is one of the few I've heard I find very hard to dismiss: it's very specific, and exceptions would have to be very deliberately crafted (I'm sure they exist).
  • Critical thinking and Creativity: Reading and Writing
    The 'show don't tell' - has a point but, of course, some telling is necessary.Amity

    "Show, don't tell," is one piece of advice that's... vague. The problem is, since you're in a medium that almost always works with text, the only way to really show something is to tell about something else, so the author might think they're showing, but the reader might be reading one abstraction-level down, and thus read it as telling. So what does that piece of advice mean, in the end?

    There's an intuitive space that's almost always telling, and one that's almost always showing, but there's a lot of overlap in the middle. Often, there's no clear difference between showing and telling. For example, a simple line like "He picked up the phone," omits a lot of details, and whether you get a showy or telly feeling from it depends on what's going on in the scene, and how important the event is. But that's not always in the text; it can be in the reader-side interpretation. (Of course, a single line occurs in a wider text, and it's that wider text that's showy or telly, and not the line alone. I'm just simplifying because it's easier to make a point.)

    The basic question here is (a) when do people interpret details that are in the text vs. (b) when do people imagine details that are not in the text. And what do you, as the author, want? This, too, ties in with point of view in some narrative contexts: for example, a telly line like "this made him very angry," might be a misinterpretation of unrevealed details by an unreliable narrator.

    The problem with standard rules is that they often guide attention in a rather limiting way. When you edit with a rule mindset, there's a danger that you lose the big picture. "Show, don't tell," in my message-board experience, discourages lines like "this made him very angry," and would render situations in which this would work as exceptions. And beginning writers "must know the rules before they can break them."

    So at the moment you say "The 'show don't tell' - has a point but, of course, some telling is necessary." you're already caught up in a rhetoric that stigmatizes telling and sets showing as the default, when what you really need is an understanding of how many details to use and when. It's not clear whether "show, don't tell," is helpful or harmful. That depends on (a) how you learn to interpret the line, and (b) what sort of style your intuitive voice tends towards.

    For example, when I was still writing, I noticed that my characters were "turning their heads" a lot when something caught their attention. All of them. When I wrote "turned his/her head", that was usually me putting in a short cut. It's a physical detail, a sort of behavior-icon for some recurring type of events. It's not only repetitive, it's also not taking into account the character's body language. So I have this private little rule that says "beware of swivel-head syndrome." So... should I peddle this rule? Should I just assume that many people share the same problem? Should I stigmatize head-turning?

    Not really, no. It's a problem I have. I can't just put it out there. However, "swivel-head syndrome" is, as an unintended consequence, encouraged by "show, don't tell," as a rule. It doesn't have to be, but that's been my message-board impression. Don't tell me something caught their attention, show them turn their heads. Out of the frying pan, into the fire. Again, it's not an inevitable consequence of the rule; it's just that people suddenly started putting the same few stock movements in place of the same few stock emotions. Like a cultural shorthand.

    So noticing this trend, I could abstract from my "swivel-head syndrome" personal rule, and say something like "Know the body language of your characters!" But if that caught on (I doubt it would; it doesn't tell you what to do), it would likely be distributed as a slogan, and its context would eventually be lost, and it would create its own set of problems.

    (Aside: One of the reasons "know the body language of your characters!" is useful for me is because I have aphantasia. I have no inner eye. I can't see my characters at all, and often don't even know what they look like until some setting interaction fixes a trait. So making up random body language in keeping with their personality helps me add some visual touches to stories. My swivel-head syndrome is a side-effect of getting lazy in that process. People with a vivid inner eye are almost certainly not going to profit from that rule, given that they probably just need to visually imagine their characters. It's very involved.)
  • Critical thinking and Creativity: Reading and Writing
    Care to say more?Amity

    Sure, but let me address the following first:

    A bit harsh, no? We can all be prick-ish and think we're right. Difficult to let go of own ideas/beliefs when challenged. But wonderful to be surprised by an 'Aha!' moment when reading or listening.Amity

    A bit harsh? Yes and no. It's not that I thought that's how I came across to others. Sometimes maybe, but they can counter; that's fine with me. My problem was that the more I got embroiled in arguments, the more I found myself saying things that... I didn't really mean. I did mean them to some extent, but the matters-of-fact here are... difficult, and the moment you put something into words, you can think of a few ways it could be wrong, and so on and so forth. It became exhausting to argue a position more vehemently than you mean to, while at the same time feeling that if you let go, the rebound of the opposing position would smash you right into a wall. In the end, I figured what I have to say isn't all that important, since my core point that underlies all the creative writing stuff is that people have to find their own way. I just retreated.

    As for the "chaos" comment, that sort of follows from what I just said: people need to find their own way. Writing seminars can certainly be part of that, but I find that... a lot of the advice I've come to expect works against that. There are those well-meant slogans: "Show, don't tell", "Don't end a sentence with a preposition," "Avoid the passive voice, adverbs, etc.", "a protagonist needs a goal"...

    Taken all together, these sorts of rules converge on a style. More than once I saw authors put up their writing for criticism, get a few predictable remarks (e.g. there are too many adverbs), then edit the excerpt, post it again, and then get better responses. I once asked one of those writers which version they personally liked better; they said they liked the new one better, though they might just have been caught up in the high of the moment. The thing is this: I almost always liked the original version better. The edited version might be smoother, but usually it lost voice. What remained is that uni-voice style. Some authors naturally fit into that style (I think David Mitchell of Cloud Atlas is a good example), so it's not that the style can't produce a good voice. I'm not against that style. The thing is, though, that in direct comparison there's something to an author's original voice that gets lost when it's edited down to an industry standard.

    Another anecdote: When defending adverbs, my favourite example comes from the final paragraph of James Joyce's short story "The Dead". It's just a beautiful use of adverbs, and it contains sentence structures that would not be possible without adverbs. One time, though, I quoted what I thought was an ingenious use of adverbs in Salman Rushdie's Midnight's Children. The reply I got was that this was unreadable. Fine. I'm not going to argue against someone's taste. I loved the section, someone else did not. I moved on. Years later, though, I read an interview with Rushdie where he basically said something along the lines of having used too many adverbs in his earlier fiction. I wasn't sure that included Midnight's Children, though it's likely. The book is full of adverbs. Thing is, I really liked the style, and here the author himself aligns himself with the... prevailing trend.

    To be sure, it's not a loss. Rushdie's later books are still fun to read, and the older books won't go away. But it's sort of exasperating. It's like there's a set of industry standards slowly forming... taste. It's like these writing rules are slowly becoming true through... taste formation?

    And now go back up to the introductory paragraph: I don't actually think any of this is true. Real life is more complex, and I think I'm being melodramatic. One other thing that changed, for example, is that with the rise of Amazon, it's become harder to find the books I'd like to read in bookshops, and I don't buy stuff online. So I'm sort of out of touch. For all those reasons, I don't really want to be believed.

    But at the same time, these anecdotes really happened. I've seen rough but interesting texts polished into a semblance of professionalism, but losing that initial spark in the process. More often than I ever wanted to, I've seen texts being polished until they're utterly dull. If I were a slush-pile reader, I probably wouldn't have accepted the original versions, but I'd have remembered them. The edited versions I'd have passed over without a second thought. So now, when I read a potentially interesting book that's ultimately not very memorable, I wonder if that happened here, too; if somewhere hidden in this version is an interesting original that's been edited out. See, it's entirely possible, likely even, that other people (including the author) really love the result. It's possible that that's just how they write, and that I just don't get it. That would be too bad for me, but all in all it would be all right. However, if there's really an original version out there which - for all its flaws - I'd have liked better, then that'd make me a little sad.
  • Critical thinking and Creativity: Reading and Writing
    I agree that there is overlap. However, I don't think that short stories are 'encouraged to spin out of control'. TPF's Literary Activity ( previously Short Story Competition) is a case in point. The latest: https://thephilosophyforum.com/discussion/15585/literary-activity-dec-2024/p1Amity

    Oh my, this was six years ago? I don't remember this post at all, and I had to go back to read this thread for a while to see why I was saying what I was saying and what I could have meant by it. So it was about creativity in academic writing?

    I'm quite fond of chaos in literature, and I find that - since I came online in the early 2000s - a certain brand of "creative writing" seminar style has taken over writing forums, so that I grew bored of them and abandoned them. I was also a bit of a know-it-all and a prick back then; I didn't much like my way of communicating any more...

    Basically, I think my main point was that in writing short stories you're allowed to let your mind wander even if it doesn't go along with your original impetus of writing the story. The same process ruins a philosophical thought experiment.

    It amuses me to see that, back then, I'd only written 50 posts!Amity

    Interestingly, I still only have 244 posts (including this one). I'm not the most prolific poster, it seems.
  • The Cogito
    Why assume that the thinking thing , and all its activities, is the most important and most characteristic part of being a subject?J

    One of the reasons I tend to stay out of cogito-ergo-sum threads is that I never read Descartes and am only passingly familiar with his work. I find the topic interesting, though, and your line is a good lead-in for a problem I've always had with responses to the cogito.

    You see, I came across this line in my childhood. I already knew enough Latin to parse the line, so I must have been between twelve and fourteen, not quite a teen yet. And for a long while this has been (a) intuitive, (b) banal, and (c) rhetorically witty. I quite liked the line. Only later did I learn that my intuition may not have aligned with Descartes', and it certainly didn't align with a lot of other people's.

    So on to your quote: according to my intuition, the thinking thing's importance is contextual. While I doubt (a form of cogitating), I can't doubt that I doubt. It's like a plug. A moment of certainty. As soon as I pull back only slightly, out comes the plug and life flows back in. Which is why "thinking thing" is a rhetorical stand-in. At the moment of "cogito-ergo-sum" you're certain of your existence, but nothing else. It's a holiday from doubting, but little else. Nothing can follow from it, since at that point no other interest can be cogitated about. You gained certainty at the expense of your worldview. No meaning is left. Cogito-ergo-sum is a dead end. You can pull back, but you can't take your certainty with you. But you, the radical doubter, have a place of rest. However the you-that-needs-such-rest only exists by virtue of its connection to a world full of doubt, and you take back that feeling and rationalise it. For example:

    The Cogito points to the indubitability of the disunity part.frank

    Pulled back too far, but if that's the way you make sense of it...

    What does this mean? Is it unwarranted to conclude that he is a thing that thinks? Isn't thinking essential to being human?Fooloso4

    Pulled back too far, but if that's the way you make sense of it...

    And so on. (There are many more examples in this thread, I just picked two from the page I'm currently on.)

    I did come across a take on Descartes that resonated with my intuition once, but I forget what it was (a vague memory of "you can be certain you have a toothache, but not that you have teeth"). I want to say it's Ortega y Gasset, but I really am not sure. In any case, the collapsing of a world-view into the cogito and the reconstruction of the world-view in daily praxis feels quite compatible with phenomenology as I understand it (which is not as far as some others on this board - I'm not a well-read philosopher).

    Not sure I made much sense here, given that I'm not sure how compatible I've ever been with Descartes or his reception, but that's where I've always been.
  • Making My Points With The World
    I doubt anyone deliberately aims for their points to be misunderstood.Tom Storm

    I sometimes wonder how important it is to *have* a point. I remember, back at university, in a literature class, I was trying to explain my point. The lecturer seemed really interested, and then said, I never even thought of that, what an interesting point. He then checked back to see if he had understood me correctly. It turned out he hadn't. I was making a different point, but because that other point I could have made but didn't was so much more fascinating, I forgot what it was I was trying to say... Fun times.

    Again, at university, I would often try to make a point, then talk myself into a corner and find myself unable to continue, because everything I'd been saying up to that point seemed like nonsense, which I then admitted to. It usually didn't go down well with others, who thought I was saying something interesting and were hoping I'd get to the end...

    I sometimes wonder if people who think they have a point just use repetition and rhetoric to fool themselves. Row, row, row your boat, gently down the stream...

    This post, btw, doesn't really have a point; more something like a strange attractor.
  • Moral Subjectism Is Internally Inconsistent
    This seems to be the crux of your argument, and I am not following this distinction you are making.

    All I meant, was that the truth-value of something is completely independent of any stance taken on it.
    Bob Ross

    I believe I may have confused myself here, or missed something. Let me go through this step by step with "Torturing babies is wrong."

    1. "Torturing babies is wrong," is propositional. It has "truthity"; i.e. it is either true or false. (Do I understand your use of "truthity" correctly here?)

    2. I can believe the proposition to be false or true.

    3. That I believe the proposition to be true (or false) is distinct from the proposition's being true (or false).

    4. Because of (3), I can evaluate the dependency structure of the belief that a proposition is true and the truth of a proposition. This leads me to two questions:

    5. a) Is that I believe torturing babies is wrong dependent on torturing babies being wrong?
    5. b) Is torturing babies being wrong dependent on me believing that torturing babies is wrong?

    To me, your syllogism seems to show 5.a), not 5.b):

    P1: A stance taken on the truthity of something, is independent of the truthity of that something.Bob Ross

    Applied to the current example: I can believe that torturing babies is not wrong, even if it is.

    P2: A belief is a (cognitive) stance taken on the truthity of a proposition.

    Yes.

    C1: Therefore, a belief cannot make a proposition true or false.

    How do you arrive at that conclusion? You have shown that the belief is independent of the truth(ity). You have not shown that the truthity is independent of the belief. Now this clearly leads me into a muddle:

    Given that truth is dependent on belief (but not the other way round), I'd get a truth table like the following:

    "Killing babies is wrong." - Belief (B), Truth (T), allowed under the dependency structure?

    B independent of T:

    B: Yes  T: Yes  (allowed)
    B: No   T: Yes  (allowed)
    B: Yes  T: No   (allowed)
    B: No   T: No   (allowed)

    T dependent on B:

    B: Yes  T: Yes  (allowed)
    B: No   T: Yes  (not allowed)
    B: Yes  T: No   (not allowed)
    B: No   T: No   (allowed)

    Logically, that belief is independent of truth does not necessitate that truth be independent of belief. It's possible for belief to be the independent variable, and truth the dependent one. This leads us into a contradiction (the rows marked "not allowed" above): I can be wrong about a truth I'm setting by believing in it.
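
    Purely as an illustration of that table (my own toy encoding, nothing more), if we read "T dependent on B" as "the truth value simply tracks the belief", a quick sketch shows exactly which rows drop out:

    # Toy sketch of the table above. "Dependent" is read, for illustration only, as:
    # the truth value simply tracks the belief. Under that reading the two
    # "wrong belief" rows become impossible, which is the oddity flagged above.
    from itertools import product

    def allowed(belief: bool, truth: bool, truth_depends_on_belief: bool) -> bool:
        return truth == belief if truth_depends_on_belief else True

    for b, t in product([True, False], repeat=2):
        print(f"B: {b!s:5} T: {t!s:5} | independent: {allowed(b, t, False)!s:5} "
              f"| dependent: {allowed(b, t, True)}")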

    I think that this problem might come out clearer if we investigate the social aspect of morals. Beliefs set morals in aggregate via a complicated process; thus any single belief is both a hypothesis and a bid.

    Moral truth is iterative through belief. (What the role of a proposition is in all this, I don't know, but it would have to play some role.)
  • Moral Subjectism Is Internally Inconsistent
    P1 is not the claim that beliefs cannot make something true or false (which would beg the question): it is an uncontroversial claim that the stance taken on something is distinct from that something.Bob Ross

    I'm not that familiar with moral subjectivism, but with this you've given me something to react to. Let me try to make sense of this.

    Here you use the word "distinct", but in your opening post you used the word "independent". The two words are significantly different:

    If A is distinct from B, then B is distinct from A. But if A is independent from B, it does not follow that B is independent from A.

    So a subjectivist might agree that the stance on the truthity of something is independent of the truthity of that something, with little ill effect. If moral beliefs make moral statements true, what they'd need to argue is that "the truthity of something is independent of the stance on the truthity of that something." Your P1 doesn't seem to address that at all.

    They're both still "distinct", though.

    ***

    As I said, I'm not very knowledgeable about what moral subjectivists are saying. But what's missing in this thread, I feel, is an acknowledgment of the social aspect of moral statements.

    So "Torturing babies is wrong," is a moral statement with a truth value. I assume you believe this to be true. I certainly believe this to be true. It comes up a lot in discussions like this, precisely because a lot of people believe this is true, AND because they believe it's uncontroversial. There seems to be a desire to go from uncontroversial to "absolutely true" or "objectively true"?

    There's something odd going on with belief in social situations, as it's two-pronged: on the one hand, looking backwards, it's a hypothesis about what you can expect others to agree with; on the other, looking forward, it's a source of action - i.e. part of the future data sets from which future people will form their expectations. If suddenly a significant number of people were to pop up who genuinely believe (and express the belief) that "torturing babies is not wrong", then we'd be looking at moral change.

    In effect, every moral proposition is both a guess and a bid. And it's all in flux. (I'm really not sure how to work this into meta-ethics, though.)
  • I am deeply spiritual, but I struggle with religious faith
    A trial of the soul is a concept used by Kierkegaard. Precisely, K talks about 'Anfaestgalse' a Danish word which there is a big debate on what really means. I have the Spanish version, and it is translated as 'anxiety', but I found some English papers and the authors translated it as 'trial of the soul'. Approaching the main topic of this thread, I wonder if, after behaving badly or unethically, there would be a trial about my soul. I mean, is there a cause and effect? It is obvious that in the tangible or real world there are a lot of consequences. People stop trusting me and I lack confidence and I suffer from anxiety. But I want to dive deeper into this matter. Afterwards, is there a possibility that our spirit will experience a trial because of our actions? By the way, I am not referring to karma.javi2541997

    I've tried to look up what Kierkegaard said on the topic, but... it's impenetrable answers to impenetrable questions. I really need to go back a few steps if I'm even to hope to know what he's talking about. What I read felt like gibberish, I'm sorry to say. I'm not sure I have the time and inclination to dive that deep, though. (Note: I'm not saying that Kierkegaard is gibberish; I'm saying my current understanding of Kierkegaard is gibberish.)

    Also: Google doesn't know "anfaestgalse". My mother tongue is German, and the word doesn't sound very Germanic either. Some sort of typo? Anxiety, according to one source I found, would have been "angest", which makes sense as it's cognate with English (and German) "angst". I didn't find any reference to "trial of the soul" (after very superficial googling, mind you), but I did find "spiritual trial", which may or may not be an alternate translation; I didn't find the Danish word, though.

    In any case, I'm unsure how much solving the language puzzle would help me; no idea how similar Danish and German are, and how much my intuition might mislead me.

    Because sins, bad actions, unethical behaviour, lying, etc, Have to affect someone or something. Don't you think? I believe those affect the vitality of the spirit.javi2541997

    Well, yes, lying affects relationships. But I feel like I can analyse or think about this without any reference to the soul.

    Say you're freshly in love, and the person you're in love with cooks a dish for you that you hate (it's not the cooking but the main ingredient). You can't bring yourself to admit this and successfully pretend that it's delicious. The lie will set expectations for the rest of the relationship. Now you may have to eat food you hate, or admit to lying in addition to telling the uncomfortable truth. The more often you repeat the lie, the more involved this becomes. And there's a good chance that the truth will come out in a rather unpleasant situation, like in the middle of a fight.

    I can imagine that a situation like that can feel in a way that could be described as a "taint in a soul" or something like that, but for me this would just be a shortcut for something more complex - all there is is actions, expectations, relationships and things like this. I can't go from there to a trial of the "soul". There's nothing coherent enough that it could be tried. Or tainted. There's just the flow of my daily conduct and its outward connections into social situations, sometimes good, sometimes bad, often neither, always a muddle. I live, I sometimes fret about it, and then I live no more. That's about the whole of it for me.
  • I am deeply spiritual, but I struggle with religious faith
    No! I don't think so. There is nothing here which causes me infliction. It is completely otherwise. I think it is good to open myself to others in this thread.javi2541997

    I didn't mean only right here in this thread. More like: at this point in your life, you're worrying a lot about this topic, and from a non-spiritual perspective such as mine this looks like the extent of what spiritual trial might be. Judge, jury and defendant in one person, only the defendant isn't much interested in defense.

    It's this soul stuff I don't properly understand, though, so I'm likely wrong. So:

    Hmm... Didn't you ever feel anxiety for not acting accordingly to values and ethics?javi2541997

    To the extent that I have a conscience, sure. But there's no blight on my soul, nor a soul to begin with, in my world view. The worst anxieties I experience are about the future: when all the choices realistically open to me seem equally bad. After the fact, it's usually more a kind of shame. I sort of imagine that's the origin of Japanese seppuku: cutting yourself open from the soft tissue in the belly upwards; that's where I start to feel the shame in extreme cases. (Though knowing Japanese culture it's probably more a show of determination - cutting yourself there hurts a lot.) Anxiety is more chest-centred for me.

    Indeed. Why does this happen? Well, because when a person (like me) is used to acting in a mask constantly, it is not that difficult to keep acting in the same way. OK. I say sorry to the ones I lied to. But how do I know I will not lie again? This is where the problem arises. I don't want to cause that bad behaviour as part of my 'nature'. At the moment, the only solution to this issue is redeeming myself. To start, finally assuming that acting badly has its consequences and there will be a trial to my spirit after all.javi2541997

    It's about what you do from here on out, then, right? Or do I misunderstand?

    This does sound plausible: forgiving yourself too easily can lead to letting yourself go, which in turn makes all that self-examination seem more like a sort of gambit, or self-pity. You do need the motivation to better yourself, and forgiving yourself too easily can get in the way of this. There's no such problem when it comes to others (or, on second thought, there may be: forgive them too easily and you enable their bad habits maybe?)

    Not sure I understood you correctly, here. I'm not sure what difference a "soul" makes. I never had much use for the concept of "sin", for example. Shintoist kegare seems more useful: less judgemental, but also a bit of... too afraid of the world maybe?

    For me it's all just a muddle of what I think I should do (which I often don't know), what I think my most selfish aspects want to do (which I often don't quite know either), what I think I'm most likely to do (which is the easiest to predict), and how I think about all of that (not too well, since I tend towards pessimism - luckily my pessimism is tempered by my cynicism). I just sort of muddle through all that on a day-by-day basis until one day I'm gone.
  • I am deeply spiritual, but I struggle with religious faith
    Will there be a trial of the soul after all?javi2541997

    Aren't you inflicting one upon yourself right now?

    A question that occurred to me: Given the same act, do you find it easier to forgive it in others than in yourself?

    For context: I'm neither spiritual nor religious, so I probably can't fully understand what you're going through.
  • The Thomas Riker argument for body-soul dualism
    TE=thought experiment.AmadeusD

    Ah, thanks. I could have figured that out, but didn't.

    I think that's true, yes. There memories would differ in "cogito"-type ways that ensure knowledge of which they are.AmadeusD

    Actually, I think I made some assumptions when I said this, so it's not necessarily true. For example, if the "original" were duped into thinking it was just going to be an ordinary transport, then the person popping out on Mars would think he's the original, and the person walking out of the transporter would think the transportation failed. What really matters, I think, is this:

    There's a difference in bodily continuity between the person not "transported" and the person on Mars, and that difference is susceptible to ordinal description: one body is more continuous than the other.

    That is not the case under the Star Trek model. Even non-duplicative transporter usage creates a copy of a body that's been destroyed. So is the person who steps into the transporter the same person that steps out of the transporter, even though the body that stepped into the transporter has been taken apart and re-assembled?

    And if the answer to that is "yes," then what changes when you assemble a copy more than once?

    Personally, I think: not much. (And I think the answer is "yes", not because of any philosophical position, but because that's how I think people treat each other in Star Trek stories.)
  • The Thomas Riker argument for body-soul dualism
    This is not the case in the TE. The branch line case results in the original and one duplicate; not two duplicates. Perhaps that’s the issueAmadeusD

    Ah, yeah, I was talking about the Star Trek transporter as per the OP. I missed the two-line post about Parfit. I've never heard of that case, and am unfamiliar with the specifics. I'm not sure I'd change my mind, but I might. What's "TE"?

    So after reading up on Parfit's branchline scenario, that's definitely a case with an original and a copy, based on physical continuity. Consider the difference from the Star Trek transporter technology with the following example:

    I kill a person, then duplicate myself. Then two identical people show up at the police station, saying "One of us killed X."

    Under the Parfit model, the guilty party would be the one who walked out of the scanner; and it would be a matter of proving who that was. At the very least the original and copy would know who is who.

    Under the Star Trek teleporter model, there's nothing meaningful to distinguish the resulting individuals, since the original (who committed the killing) got taken apart, and both versions were assembled using the same information. There'd be no practical way to tell them apart, so any ruling (if you hold only one responsible) would have to be of a theoretical nature. Not even the people themselves would have a clue.

    So, yes, I'd say there's at least a theoretical difference here; but the simple existence of such a duplication technology might have effects that need to be dealt with one way or another.

    For example, consider a religious fanatic who thinks he must kill unbelievers but since killing is a crime, he must also atone for it. He could use this technology to first kill someone, then duplicate himself, then turn himself in, expecting his duplicate to do the same (which he probably will if it is possible, since he is an identical copy of the original).

    If comparable cases are relatively rare, this could probably be accommodated somehow under existing legal models. But if it becomes a common pattern, we might be looking at a new legal concept. A new type of legal person (defined as a natural person and all its copies)? A reframing of responsibility? And so on.

    This is not primarily a philosophical problem; it's not about truth. It's about how to efficiently get things done, and how to accommodate the new social-psychological configuration of the public, all of which is hard to predict.

    And since I think our ideas are based on our experiences, I think such technology might have a rather radical effect on what ideas we can even think about.
  • The Thomas Riker argument for body-soul dualism
    I've re-read your post, and I now think our differences might be this:

    A transporter accident results in:

    You: an original and a copy

    Me: Two copies of the original (which is destroyed).

    Thus, I think even during normal operation a transporter creates a copy of the body, and the beam contains the information for re-assembly. The information can be used multiple times.

    As for souls: I don't find the concept useful, so I don't worry about that.
  • The Thomas Riker argument for body-soul dualism
    There was only P1 before the splitting even - regardless of Classic or Branchline version.AmadeusD

    This is where I think we're talking past each other.

    First, this is not how I used P1 and P2. Before the duplication event, there was one person, P. After the duplication event, there were two people, P1 and P2. P1 and P2 exist simultaneously as separate existences. P exists only in a past where neither P1 nor P2 existed.

    How do you connect P1 and P2 to P? Who's responsible for acts that P did? Nobody? P1? P2? Both?

    Second, I don't know what you mean by branchline vs. Classic version. For me, there's a branching point in the personal history of P, such that at some point History(P) split into History(P1) and History(P2). This is a novel situation. There is no Classic version I can see.

    I'll re-read your post later. Maybe I'll get it some time.
  • The Thomas Riker argument for body-soul dualism
    If there's something meaningful that remains between teh two, fire it at meAmadeusD

    They share a history. This is something no legal system is equipped to handle.

    If I do something, I'm liable for it, no? I'm no legal expert.

    Person P(t1) did deed D(t1), therefore Person P(t2) is responsible for D(t1). That works for many things: marriage, debt, murder... Legal responsibility assumes that the person who did the thing at t1 is responsible for it at t2.

    Now, if we have a branching point, what we get is

    P(t1) --Duplication event--> P1(t2) and P2(t2). There is no P1(t1)/P2(t1). There's only P(t1).

    So what's your intuition here? Mine is that P1(t2) and P2(t2) are P at t1. That's where biographical continuity leads for both of them. It would then, maybe, follow that they both are responsible for D(t1), because there is no distinction between P1 and P2 at t1. That can lead to absurdities, though, like in the situation of debt collection.
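
    If it helps, here's a minimal sketch of that intuition in Python (my own toy model with made-up names, not a legal or metaphysical claim): both post-split persons trace their biography back through the same pre-split person, so a naive continuity check holds both responsible for D(t1).

        from dataclasses import dataclass, field
        from typing import List, Optional

        @dataclass
        class Person:
            name: str
            predecessor: Optional["Person"] = None  # who this person was before the branching point
            deeds: List[str] = field(default_factory=list)

        def responsible_for(person: Optional[Person], deed: str) -> bool:
            # Walk biographical continuity backwards through the branching point.
            while person is not None:
                if deed in person.deeds:
                    return True
                person = person.predecessor
            return False

        p = Person("P", deeds=["D(t1)"])    # P at t1 does the deed
        p1 = Person("P1", predecessor=p)    # duplication event at t2
        p2 = Person("P2", predecessor=p)

        print(responsible_for(p1, "D(t1)"))  # True
        print(responsible_for(p2, "D(t1)"))  # True: both trace back to P at t1

    Which is where the absurdities come in: the same debt, for instance, would now attach to two people.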

    Obviously, the problem disappears for all deeds that occur at t2 or later. The branching point creates a situation where two people are identical with one person before a certain event. This is a fundamental change. We need to adapt to this: legally, morally, economically, psycho-socially...

    For example: Is it more economical to train 1000 employees, or to train 10 and then duplicate the best one 1000 times? And if the latter is more efficient on paper, what about a working environment where you only work with versions of yourself (not twins, but people who know everything about you that you know, too, up to the branching point)?

    I don't think any of our current intuitions can prepare us for this type of technology. We need to go through a period of chaos and see which way it settles.

    As for specific points:

    These relate to whether you're a legal positivist or not. Yes?AmadeusD

    What any one person believes is beside the point. How likely is it that all the relevant people agree? And what about the effects and implications of their decisions that they didn't anticipate?

    As I said, they are not the same person on ANY conception except Immaterial SoulAmadeusD

    As per the above, they are not the same person now. They were the same person before the splitting event, which is when the certificate was issued. Legally, I see three possibilities:

    a) The certificate is invalid for both (because neither P1 nor P2 are uniquely continuous with P)
    b) The certificate is valid for both (because both P1 and P2 are continuous with P)
    c) The certificate is valid for one of them, and invalid for the other (no idea how to argue for this; my least favourite)

    On top of that, a/b/c might apply differently in different contexts. For example, in the case of marriage, I could see annulling the marriage with a possibility of remarriage to one of them as a plausible solution. In the case of ownership of property, though, joint ownership might be a better solution.

    Of course:

    NB: probably worth realizing that in a world that this machine exists, the Law knows about it and has anticipated these problems.AmadeusD

    Yes: if the tech's been around for a while. I'm talking about the transition period. You're not going to predict all the problems that'll arise from the introduction of such a fundamental novelty.

    For example: when I wrote about joint ownership above, I wondered what that would look like. Pre-arrangements would be likely if the duplication is voluntary (and not an accident or forced). But what would that pre-arrangement entail? My immediate intuition went to "contract", but that wouldn't work, since the potentially disagreeing parties are at that time still one person. A type of "will"? I leave this to P1 and that to P2?

    Obviously, after the first few generations this is all going to be the new normal. But for the people who have to figure out how to deal with non-unique personal continuity as a novelty, these are going to be... interesting times.