Comments

  • What is faith
    Yes, that's reasonable, otherwise you start thinking in terms of joyous martyrdom or some such. But even "bad" vs. "worse" is problematic. Should we imagine a self-sacrificing hero (with, as you say, a bit more time to cogitate than a grenade would allow) saying to herself, "I'll feel really bad if these innocent people die. I will feel nothing at all if I sacrifice myself to save them, since I'll be dead. So I'm choosing to feel nothing rather than feel really bad"? Maybe. But it would be a very subterranean level of cogitation, as it were; what usually goes through a hero's mind is thoughts of duty and compassion, I would imagine, not how rotten they'll feel if they funk it. I'm inclined to say that it's only plausible if, for independent reasons, we've already decided to rule out genuinely altruistic motives as incompatible with the "what I choose = what I like" equation. Then we can say, "She thinks she's acting from altruistic motives but here's what's really going on -- it's what she likes, even if she doesn't realize it."J

    Well, there are two things going on. One is how we make decisions based on value (where rational choice comes in), and the other is where value comes from (e.g. "feelings" - rational choice theory doesn't demand that feelings be the source of values, just that you have values).

    But what values does a "truly altruistic person" have? "I want others to be happy"? And in what terms would you describe the value? If you want to describe the value with respect to rationality, rational choice can probably achieve that, but they'd need recourse to other values. And there are pretty much only two options open I can see: some sort of structuralism - it's all circular, values feed into other values etc. Or values come from something other than rational thought (e.g. we are "social animals").

    A rational choice theorist who decides values derive ultimately from feelings would likely describe "genuinely altruistic feelings" in terms of feelings - making others feel good feels good. And I don't see how that would devalue altruism. Or differently put: if making other people feel good didn't make you feel good, would you be "genuinely altruistic"? Maybe. But "genuine altruism" is a loaded term here. You need to be aware that a rational choice theorist might describe that differently from you.

    Again, rational choice theory isn't something I've read widely in. At university I wrote a paper about the sociology of suicide; one approach was rational choice - it was, I think, my least favourite approach. I don't remember the name of either the author or the book anymore, but that was my most in-depth reading of a rational-choice point of view. I went with summaries for the rest of my studies. All this to say, it's never been my expertise. So take what I say with a grain of salt.

    It's now between two sorts of unlivability -- death, and moral disgrace -- one of which at least will spare the innocents.J

    Yes, as I said, "bad" vs. "worse". Where there's no gain, you minimise cost. Ultimately it's "feelings"; they needn't be pleasant. That's a misconception. You can rescue a modicum of pleasantness by, say, attaching it to the hero concept. Some people can feel at peace if they take a role with only lousy prospects, but it's socially valued. Identification is a powerful enabler.

    Turn this around: the same person who might be touched by the heroism eulogy might, drunk in a bar and missing her, berate her for being reckless. Where there's a tension field between feelings, you can use rational thought to establish a legitimisation structure, so you can feel good about doing the right thing. "I was drunk; I didn't mean it." And then you put some emotional weight behind "objective morality" so you can feel good about yourself.

    I think people are too messy to be rational when choosing. That said, rational thought does play its part; we just need to pin down "where".
  • What is faith
    It's tough to make this work with examples of altruism and self-sacrifice. You'd have to stretch the meaning of "joy" awfully far.J

    I read the posts more as cost-benefit calculations (as in rational choice theory). It's not all that hard to account for altruism: even if there's no benefit to be had, there are still costs to minimise. It's just a matter of priorities. I thought "joy" was just the word used in the context of Beethoven vs. Bach, while "good feelings" vs. "bad feelings" is the more general model. I'd like to append that in situations where there are no good feelings involved, it's likely "bad feelings" vs. "worse feelings". That said, there might be some marginal good feelings in throwing yourself on a grenade: "I'll be remembered as a hero!" As you say, it's the stuff we admire, and some people might enjoy the prospect of being admired.

    Given how quickly grenades explode, I wager there won't be much time for deliberation, though.
  • Synthesis: Life is Good, the axiom for all value
    So, no, this isn’t a matter of opinion or hermeneutic complexity - without life, there is no value. The axiomatic nature means hermeneutic drift (of the axiom at least - not the contextually driven implications of acting on it - which are dynamic, think Foucault - you highlighted this) is impossible. It is an axiomatic foundation - undeniable by definition.James Dean Conroy

    I understand that (or at least think I do). It's precisely the dynamic context, though, that makes the axiom meaningful. Otherwise it's just... floating free. This is why I called my reservations "mostly methodological". Your reply seems to indicate you think I was talking about the content.

    I'm looking for an application of the axiom; I can't see one.
  • Synthesis: Life is Good, the axiom for all value
    Life must see itself as 'good'.
    Otherwise, it self-terminates.
    So across time, only "life-affirming" value-sets endure.
    James Dean Conroy

    This is excellent. I think the penny dropped... but the slot machine is kinda slow in operating, so I won't really know how much I agree/disagree until later (maybe much later). I do have one reservation, and it's mostly methodological.

    The above phrasing is... hermeneutically difficult to pull through, I think. If you go by survival, for example - "it survived, so it must be good" - you end up arguing in a circle. You'd need to figure out a way to describe a system as "life-affirming" independently of its survival, and this is always going to be difficult to pull off in a way that doesn't suggest you're motivated by maintaining your theory - especially to people who have a different view of what counts as "life-affirming".

    For example, you describe antinatalism as parasitic and reliant on surplus. There's a baseline here that's easy enough to describe: you have an empirical way to test this. You'd expect antinatalism to be more popular in times of plenty than during economic crisis, say. But even then... parasitic strategies are strategies that propagate. (A single human can host quite a lot of tapeworms.) And you're using that as your metaphor here. And I find that... difficult to parse. What's the parasite-host relation here? The antinatalist to society? Antinatalism to the antinatalist? Both?

    Or another example: how would you deal with Christianity's fixation on the afterlife? Praising martyrs? The saviour dying to "defeat" death?

    The problem is that there are a lot of goalposts to shift, and it's easy to do so without realising. Survival of the individual? Survival of genome? Survival of meme?

    I'm not rejecting the system outright, but knowing myself I'll likely stay at a skeptical distance, the way I react to psychoanalysis or evolutionary psychology. Not implausible, but full of hermeneutic traps. That's where I am right now, but bear in mind that I still need to let this settle.
  • Synthesis: Life is Good, the axiom for all value
    Can I ask you a few questions to establish where the disconnect is?James Dean Conroy

    Great approach. That could really help.

    1. When I say "life is the source of value", do you hear "life feels valuable to humans"?
    (Or do you interpret it as a structural claim - about how all value originates from being alive?)
    James Dean Conroy

    Structural. Value arises out of praxis.

    2. When I say "life is good", do you think I mean "life is morally right" in the human ethical sense?
    (Or do you see that I mean "good" as in the precondition for goodness to exist at all?)
    James Dean Conroy

    Not really ethics/morals, no, though after a few permutations that's included. It's more generally just evaluation. I'm not entirely sure where you're going with it, but value would include lots of things: instrumental, aesthetic, moral... None of it without life.

    3. Do you believe there’s such a thing as value without any life to perceive or act on it?
    (If yes, how? If no, then you already agree: life is the necessary condition.)
    James Dean Conroy

    No, I'm a pretty staunch relativist. Even stuff that doesn't arise out of human praxis is filtered through the lens of human praxis to be "good" (e.g. oxygen is good for fire to burn).

    4. When I say "morality emerges from the structure of life", do you think I mean “animals have moral systems”?
    (Or do you see that I’m saying morality is a refined strategy for multi-agent survival over time?)
    James Dean Conroy

    I didn't get that far, to be honest. I have no idea on that one. I'm not even sure how you view morality within a framework of evaluation (e.g. what's the relationship to instrumentality and aesthetics and other stuff I'm forgetting). I saw you talking about a blanket term for all that.

    5. When I equate survival-optimised behaviour with morality, do you hear "murder is fine if it helps survival"?
    (Or do you understand that moral systems optimise survival under social, complex, recursive constraints - and that’s why they evolve towards things like empathy, fairness, reciprocity?)
    James Dean Conroy

    After 4., it shouldn't be a surprise that I didn't quite think that through either.

    Personally, though, I see morality as a tension field of human praxis, and anything one can say about it is an interpretation meant to "phase-lock" people into roughly compatible behaviour, partly through the constancy of the word-sounds/graphs. Morality as an ongoing social process, fed into by and feeding into minds. It's iterative. (And for it to be iterative people need to be alive. Otherwise, the process just stops.)

    6. When I say value is not "subjective" or "objective" but "emergent", do you hear that as vague fluff?
    (Or can you imagine value as something arising from pattern persistence in systems capable of preference?)
    James Dean Conroy

    No, that part's fine (as my answer to 4. may have indicated). I also missed you saying this, to be honest. I did sense you going in that direction, but that confused me even more, because I couldn't (still can't) see how life could be "good" under that approach, rather than just sort of "there". I come from sociology, not philosophy, where this is the default mode of viewing morality. Phenomenological sociology uses the word "intersubjectivity" here.
  • Synthesis: Life is Good, the axiom for all value
    Life is the condition for value,
    Because value is only ever a function of life.
    James Dean Conroy

    I agree with this. It's entirely opaque to me how you get from here to "life is good". As I said, this means that life is value-neutral. Once alive, you can evaluate anything, even life itself.

    You're mistaking the axiom for an opinion. It's not. It's an axiom.James Dean Conroy

    I'm not. As an axiom, it's just entirely meaningless to me. I don't understand the axiom, and your posts don't help. What do you want to do with this axiom? What's the context? As my failure to communicate this demonstrates, I don't even know how to properly respond to this.

    Yes: without life, no value. Fine. Now: why is life good? Because it gives rise to value? Living things evaluate things; non-living things don't. Fine. How does that make life "good"? Or even before that: what do we even evaluate if we evaluate "life"?

    So "life is good" is an axiom. So how do you use that axiom to argue stuff? What sort of logic does this axiom tie in? How, for example, do you deal with predators killing prey? This is not disagreement. I don't see a way forward here - I don't know why I should assume the axiom.
  • Synthesis: Life is Good, the axiom for all value
    This is so utterly against my intuition that I have a hard time figuring out what you're even saying. I'll only address point (1), because here I'm still relatively on board with what you're saying, so I can still somewhat talk about it. From point (2) on out, you pretty much lose me completely, but maybe if I understand point (1) better, I'll find a key to unlock the other points? Experience has taught me not to be overly hopeful, but who knows.

    Life is the only frame from which value can be assessed.James Dean Conroy

    At this point, I wasn't sure yet what you were talking about, so: read on.

    It is the necessary condition for all experience, meaning, and judgment. Without life, there is no perception, no action, and no evaluation. To deny this is paradoxical because denial itself is a living process.James Dean Conroy

    I'm on board with this, for the most part. There is one word, though, that gives me pause: I would invoke irony rather than paradox. I simply see no paradox here. Part of the problem is the abstraction "life": if we're comparing a live person with a dead one, I will definitely not find a dead person denying this point. Empirically, all I can ascertain, though, is that they don't communicate; not that they have no experience. When we compare a person to a rock, it's hard for me to see what we would be talking about, empirically, if we were to say that "rocks" have experience. What's lacking here is not empirical evidence, but a theoretical framework. I'd have to put that question to a panpsychist, but I can't rule out that they have some sort of theory, or something approaching it that is empirically viable (even if it's not useful to me).

    All I'm saying here is that a living person denying that life is necessary for experience, might just say that non-living things, too, have experience. The fact that they say so while alive doesn't seem to cause a paradox. Them saying that while dead would be rather surprising, sure, but that's not particularly relevant to a paradox, I feel. There's no contradiction here.

    Example:
Even nihilists, who claim life is meaningless, participate in actions designed to preserve themselves.James Dean Conroy

    "Designed" is a value word. If nihilists don't eat they die. But that's just a description of a process. If you set life as a goal, you could argue for hunger as a function, but it doesn't seem like that's what you're going for. In any case, we'd first have to figure out what "life is meaningless" means to a nihilist. "Hunger is unpleasant, therefore I eat," doesn't impart any value on life. It imparts some sort of value on one mode of living over another - that is all. After all, starving is something you do while you live. You can't starve while dead. In fact, dying is part of life. It's baked in at the end. (Or is it? You could invoke jellyfish, I suppose.) Basically, just like you can't have experience when not alive, you can't die when not alive.

    At this point I wonder how you see the relation between the individual, the species, the clade... all living things. For instance, a conflict between predator and prey, parasite and host, etc. plays out differently on the level of the organism than on the level of the ecosystem. The life of an organism - the iteration of organisms - the branching off/dead ends during speciation - the presence of *any* living thing at all. There's no intuitive anchor point here where value comes in to begin with.

    The act of breathing, eating, and communicating all point back to an unconscious, unavoidable affirmation of life’s primacy.James Dean Conroy

    Again, I don't see any affirmation, just a process. Again, it makes sense to view hunger as a function to keep an individual alive, but to do this you need to set survival as a goal, and that, IMO, is a methodological assumption.

    But then I'm fairly radical here. For example:

    Evolution has no aim other than to survive and the propagation of the genome.Wayfarer

    Evolution has no aim period. Extinction is one possible outcome of evolution. Extinction of everything that evolves is the end of evolution. Does that mean that "evolution has failed"? When there's nothing that can evolve, then there's no evolution - that's all.
  • More Sophisticated, Philosophical Accounts of God
    It might interest you to know of a public figure who’s come to prominence in this regard in the last five years or so. That is John Vervaeke, who is professor of Psychology and Cognitive Science at University of Toronto. He has a YouTube lecture series comprising 52 units on the topic of ‘The Meaning Crisis’. Review here.Wayfarer

    That's interesting, thanks. I read the article; most of it felt familiar (the worldview part, for example, sounds straight out of phenomenological sociology - and then the name drop: Peter L. Berger, yep). I'll need to get to the videos, but it's a tad daunting. We'll see when I get to it.

    From the article alone it sounds like "society will find a way," coupled with an awareness that academics participate in society.

    I think we not only have every right but perhaps even a responsibility to try to understand where others are coming from.Tom Storm

    Heh, I'm certainly not worried about trying to understand others. It's the asymmetry inherent in simply not having a concept that's fundamental to others that worries me: trying to understand can easily go astray in the sense of "they believe this because" theories I might hold unconsciously. Things that sound ridiculous to me aren't ridiculous to others; but it's hard to cut out the ridicule, if you know what I mean. If it were just clear-cut this-is-nuts moments of breakdown, it would be easier to deal with. But it's more insidious.

    I remember someone online saying something like "atheists often have no strong father figures". This happens to be true for me. My inner response to that was something like "so you folks want the universe to take care of you?" There's a sense of sparring, here, that overlays the understanding. I'm well aware that I can't overgeneralise like that, but there's this sense of condescension here that I have to be very vigilant against. (Does this make any sense?)
  • More Sophisticated, Philosophical Accounts of God
    Eagleton published an hilariously scathing review, Lunging, Flailing, Mispunching - from which I quote below.Wayfarer

    I remember reading this article. It sounded plausible, but since I don't actually know what sort of picture of God Dawkins portrays (the parts I read concerned religion), I couldn't actually judge it. I remember Eagleton from his works about literary criticism. Was a good read.

    If someone tells me they believe in the God of Moses, the burning bush, and the ark with all the animals, that's a very different conception compared to someone who talks about the God of classical theism. The former, most priests and vicars don't believe in.Tom Storm

    Ah, gotcha. I didn't think of it like that. It's true that I don't believe in cartoon God either, but seeing as nobody around me does, that's not really what my atheism dismisses. I guess what I primarily stand apart from is the Roman Catholic God (with a pinch of evangelism thrown in). I didn't know Biblical literalism was such a big deal in America until I came online. It was quite a surprise.

    That said, I totally understand if you or others have no interest in it. I’m simply interested in what others believe and why.Tom Storm

    That's actually me, too; otherwise I wouldn't be in these sorts of threads at all. But it's a second-hand interest: I'm interested in believers, not God. I guess there's a derived intellectual curiosity that does make me interested in God, too, but not in a practically relevant way.

    I sort of have misgivings about this: as if I'm putting myself above others and playing armchair psychiatrist. I don't think that's quite it, but I do worry from time to time. In any case, even if I do, it's a two-way road: I look back at myself, too.

    For example:

    However, at the very least, the phenomenon of a "crisis of meaning" seems to cause many people very real mental anguish...Count Timothy von Icarus

    That, for example, is very true. There are threads on this site about this. It's something I have trouble understanding, something I'm curious about, but it's also something I'd be sort of afraid to ask about when it's acute: when people worry, they don't really want to be... specimen? And in any case I feel there's a gulf here that's very hard to bridge with language, as words can't activate meaning that's not there. You just sort of blunder about until something clicks.
  • More Sophisticated, Philosophical Accounts of God
    If you're going to say you don't believe in God, you'd better be sure what you mean by 'God,' right?Tom Storm

    I've been reading this thread since there was only one page, but I've never quite known what to say. This line stood out, and I have to ask: why?

    Me not believing in God is a fact of social praxis (and one I could be wrong about, though I have a hard time seeing how), and it's predicated on me not quite understanding what a God is supposed to be. I've grown up among a mix of Catholics (roughly 70%) and Protestants (roughly 30%), during a time when the ecumene was very popular. I've heard a lot of the arguments. They all went over my head. The disconnect seems far more primal:

    ...is it the case that atheism should evolve its thinking about the notion of God beyond the cartoon versions?Tom Storm

    How, though? On the God TV, I either get the cartoon, or I get static. The cartoon may be silly, but it's got the advantage that we both, the theist and me, can understand it. And on account of that I know (and I believe them) that that's not what they believe. What *do* they believe instead? At that point all I can do is shrug.

    Stuff like "God is being itself," might help people who have developed a concept of God past the cartoon to understand things. For me? So what is "being itself" is a big enough problem in itself - without relating it to God (a concept I mostly relate to religious praxis, but is utterly alien to my daily life). It feels like there's quite some reification going on, but I'm unsure, and even if I were to assume I have point, I'd be unsure what on the dual end of "God <-> being" is there to be reified.

    For instance, I feel the same way about concepts like "love" or "justice". I don't use these words, I don't fully grasp their scope, but if it came up I could investigate what I think is being reified here: feelings, patterns of action... etc. In contrast, the word "God" seems to be entirely superfluous wherever it shows up in discussions about, say, "being". We're not on the same page, the theist and I. It's more a lack of topic on my part than a disagreement. I don't argue from the cartoon God, but if you'd ask me what sort of God I don't believe in the cartoon God is all that I can offer. The rest just makes my head spin - and as a result remains utterly irrelevant to my day-to-day conduct.

    You may notice that I avoided making myself a poster-book atheist in the above post, resorting to phrasings like "the theist and I", rather than "the theist, and I, the atheist" or some such. Now I am an atheist. And I might have used such phrasings in another post (probably have on these boards?). The reason I'm not doing it here is that I feel this muddies the waters. I think there are very real (and generalisable) differences between atheists like me, who grew up among believers but never really solidified as a believer himself, and atheists who started out as believers and changed their mind. The latter must have had some sort of sense of what "God" is supposed to be, and they probably retain some sort of memory of that (though re-interpretation according to current life-situations can make "fair" recall difficult).

    An example: when Dawkins' The God Delusion was new I picked it up in a bookshop and randomly read a chapter. I think it was about the ill influence of religion, and Dawkins used as an example the treatment of the indigenous population of Australia by the settlers. I was reading this, and my first thought was: but wasn't this more about civilisation? Sure, religion plays a big part here, and sure missionaries would have played a big part, but... My second thought was to close the book and put it back on the shelf. I'd only later learn what a big deal the book was. Now, here you'll see what I paid attention to regarding the topic: I didn't emphasise religion - I looked at a broader context. Do I disagree with Dawkins? No idea. My disinterest didn't stem from what he was saying; I just felt this was too tendentiously argued. Too much shallow rhetoric, beyond the validity of any point here. But me not focussing on religion is compatible with my day-to-day context: I'm living in a highly secular country; I have little interest in God as a topic (when I try to understand what God is, I try to get along with theists - the topic itself is of no interest to me).

    Then there's my motivational structure: all the big questions that come up - the meaning of life, life after death, free will etc. - none of that means much to me. They're not "big questions" to me; more like intellectual diversions, somewhat akin to crossword puzzles. Any answers to those questions feel inconsequential. I locate the disjunct between theists and me (and other atheists probably) here. So I don't think I need to figure out what "God" is to be an atheist. Not caring is enough. There are theists who can't seem to imagine what "not caring" feels like, sometimes to the point of denying that I don't care (because clearly that's impossible): the polemic-laden apologist who thinks I don't want to believe in God because that allows me to sin (unlike other atheists, I don't think that's pure rhetoric; it makes sense for them to think this), or the benevolent Catholic who thinks I'm in my (prolonged) doubting phase.

    My intuition is that the God concept is meaningless by design. It's a hermeneutic buffer zone that inherits meaning from the bordering areas and allows for a game of constant goal-post shifting. That's the impression I get when I read those more sophisticated takes. They feel plausible for a while, until I realise that my mind went astray and I forgot to think of "God". But I don't take that intuition seriously enough to want to explore this line of thinking, much less actually argue it. I'm literally a Godless person; beyond the cartoon God there is nothing I can talk about.
  • What is faith
    To the extent you have faith that a plane won't crash, that's just probabilistic reasoning, so I'd agree that's not really faith. That's just playing the odds.Hanover

    I wonder if it's even that. As long as we don't have "fear of flying", aren't we just going along our way without giving it much thought? Like crossing the road (a car could run me over), walking on sidewalks (a flower pot could fall on my head) etc.

    We do engage in probabilistic reasoning on occasion, like when we look at a cloudy sky and wonder if we should take an umbrella. But even then a large part of the decision might be not wanting to carry around that inconvenient thing. (I'd think how much anticipation of potential effect translates into imagining the event, and thus making it more or less acute at the moment of decision, also plays a role. For example, I'd be more inclined to worry about a plane crash if I'd survived one before - but the difference isn't a re-evaluation of the risk, it's likely a greater vividness in imagination.)

    I feel like "playing the odds" is as much an ex-post interpretation of our day-to-day conduct as "having faith". And as such I could accept something like "I have faith in God (or a bigger plan, or whatever), and you have faith in statistics," as a provisional resting point to figure out what's going on. But it's difficult for me to see beyond that point: I don't know what faith is supposed to do here. I can't really pin down the common ground. Whether you have faith in God or not, whether I believe in odds or not, planes crash, and when we're in a crashing plane that sucks (I'd probably be at least a little distracted from fear of dying by being horribly airsick - a blessing?). And for me that's all there really is to say about this.

    So would you say, I arrive somewhere else after a branching point, or I just stop and settle down on the branching point (I could sell people lemonade as they pass by towards their teleological or stochastic destinations)? Or if we include the social aspect, am I just going off-road, since I don't get along with the way that people maintain the roads? (Is this metaphor even useful?)

    The thing is this: it's my experience that whether my outlook seems to me to align with someone else's is not something I can predict from the single information of whether they believe in God or not. I'm far more likely to find common ground with, say, a Christian fideist than with say Christopher Hitchens. How, if at all, is this meaningful?

    I do find the question of what faith is intellectually interesting (hence I'm here in this thread). But I don't find it impacts my day-to-day life much at all, except when it comes to the rhetoric involved. Let's say I'm sitting next to a priest in a crashing plane; if he were trying to calm me with the usual rhetoric I wouldn't doubt his good intentions, but it wouldn't calm me - it'd be a nuisance. I'd spend my last few minutes on earth humouring a theist. (But then, that's just the life of an atheist in a predominantly theist country. In a predominantly theist but also predominantly secular country, the irony of the matter is that you usually don't have to confront that rhetoric - the exception being moments of crisis, which is also when you're least likely to have the mental energy to spare to deal with this. People who find God-talk calming don't tend to understand this, or at least there doesn't seem to be a contingency plan for such situations in place [if I can't say this, what else is there to say?]).

    Very often "you have faith, too" is a genuine attempt at finding common ground. It's probably here where there's a practical interest beyond just the intellectual curiosity. But depending on where you are more often, equally often, or less often it's also an attempt to errect a barrier and either lure you over or use you as foil to solidify the barrier. (I'm in a less-often place. It seems I'm lucky.)
  • What is faith
    Your posts are well-informed and thought-provoking, thank you.Wayfarer

    Thanks; confidence isn't my strong suit. For every post I finish, there are probably two I don't, and for every three posts I finish and post there's probably one I don't post. That might motivate me to post more... or not. Time will tell. But this cheered me up.

    I agree with this on some things. I don't think this is always true though. Just for example, health is at least part of the human good and living a good life. I think that part is obvious. What promotes good health is often not that obvious, and we rely on the medical sciences, neuroscience, biology, etc. to inform our opinions here. Isaac Newton's consumption of mercury to boost his health is probably a fine example; it wasn't obvious what an absolutely terrible idea this was, even to a genius like Newton. Other examples, like the existence of externalities in economics, or the pernicious effects of price floors and price ceilings abound. Having basic access to food is part of the human good and early price ceiling schemes, e.g. during the French Revolution, led directly to massive food scarcity, having the opposite of the intended effect.Count Timothy von Icarus

    Yeah, I agree. I think I was focussed on baby stomping here.

    Yes, there is context dependence.Count Timothy von Icarus

    Actually, after some thinking, I think I was "barking up the wrong tree".

    You were asking how one can be "wrong when making a judgement about something which has no truth value, where there is no fact in play?" (Last post I was replying to.) And that's a good question.

    A question of my own: would an emotivist agree that you could derive a fact about value from a fact about emotion? For example, if I said "boo to baby stomping," would it be a fact that "Dawnstorm feels negatively about baby stomping"? If so, there's plenty to be wrong about when you consider the path from internalised attitudes to acquired social values as instantiated in a specific situation and actualised in the decision-making process: you can be wrong about the item in question (e.g. the car), about the social value attached (e.g. I thought cars were supposed to be faster), about my attitude (e.g. I thought I wanted a fast car, but I really just want to outdo my neighbour), about my projection (e.g. I thought going really fast with a car would be fun, but it's scary), and so on. And then you can be wrong about how any of that inter-relates (e.g. I knew going really fast would be scary but I thought I'd get a kick out of being scared).

    Even apparently simple things are pretty complex if you drill down.

    As for this:

    Indeed, the focus on acts is also part of the problem. People are primarily good or free, not acts. Just as there is never motion with nothing (no thing) moving, human acts are parasitic for their existence on men. Hence, while it is sometimes useful to speak of the freedom or goodness of acts, desires, appetites, etc., I think it is better to speak of men, lives, and societies.Count Timothy von Icarus

    I'm a relativist, so yeah, I agree pretty much. Who in this thread is likely to disagree that "baby stomping is bad"? The force of the rhetoric derives in part from the extremity of the act. The variance in reaction is fairly low. What underlies this? An absolute moral principle? An anthropological constant (we're a social species)? A social contract of some sort? And off we go into abstract land.

    But this type of rhetoric is also a good example of how morals proliferate. The target here is not the protection of babies: it's a meta-ethical stance, with the problem being that some people want there to be a right and a wrong, a good and a bad, etc. more than others. Part of this thread is ritualistic: we affirm our stances and solve little. That's not all there is, but it's certainly there. We're topicalising a well-known divide and portraying our stances. Little will change. We re-iterate the moral landscape.

    This, I think, is what it would mean for "people" to matter: we stop talking and take a long, hard look at us right now. But then I would think this; I'm a relativist after all.

    In the end, I'm fine talking acts.
  • What is faith
    Do you think one has to adopt a position like eliminative materialism or epiphenomenalism in order to be doing proper objective science? Or is it allowable for consciousness and intentionality (and thus value judgements) to be part of an explanation of natural phenomena, without these being presumed to be fully reducible to "mindless mechanism?"Count Timothy von Icarus

    I actually think that eliminative materialism is of very limited use in the social sciences. Take sociology: the discipline was established by Emile Durkheim with an eye to Comtean positivism. The methodology was pretty much all about statistics (e.g. the suicide rate). And the intent was to prove that social facts exist, so as to establish the discipline in academia. Later, we have Max Weber introduce the concept of "verstehen" (via a methodology of "ideal types"). This put the knowledgeable agent on the table and would set off the interpretative branch of sociology: sociologists were very much aware that understanding action means using their own intuition. Alfred Schütz would update Weber's approach with Husserl's phenomenology, and that is where I directed most of my attention. However, I was always aware of an unfortunate split of macro and micro sociology: either big systems (developed mostly in America by Merton and Parsons; also setting off from Weber, but in a different direction) or situational interaction. So I eventually stumbled on Anthony Giddens' theory of structuration, which attempted to unify the strands by rooting both in spacetime via input from geography. I really liked that.

    All that to say: I'm very firmly on the side of intentionality here. I'd say ignoring this isn't an option in the social sciences at the very least, though it might be useful elsewhere (not an expert).

    IDK, if I am reading this correctly, then it seems like the presupposition that "real facts don't include value" is doing the heavy lifting here. It seems like you're saying that an explanation from the medical sciences (involving value) is "fudging over the (real) facts" and is not "real science" precisely because "real facts cannot involve values in this way." Do I have that right?Count Timothy von Icarus

    It's probably due to the way I put things, but, no, I don't actually care much about what "real science" is supposed to be. What facts need above all is a modicum of precision, and that's something that words like "bad" almost never allow. What I'm saying is that the scientific facts tell you nothing that your fuzzy-wuzzy intuition doesn't also tell you, so there's little point in appealing to the facts. It doesn't really matter how much damage a boot at a certain velocity can do. You can appeal to facts, but you gain nothing by appealing to science here.

    And medicine isn't only science; it's also applied technology. Biology itself, for example, is more about basic research. In its application it has to feed into stuff like medicine, farming, breeding... even outdated stuff like, say, phrenology. So when you present "stomping babies is bad for them" as a fact here, it's ambiguous between the precise effect on the body, the ethical environment of treatment, and so on. But if you were to resolve those ambiguities it gets harder to see the point.

    I'm not really 100 % sure what I mean myself. Maybe I was saying that science is a red herring here?

    I'd just point out that sometimes it is extremely obvious that natural selection has been shaped by intentionality and goals, the most obvious cases being domestication, dog breeding, etc.— unless we want to somehow say that this is not "real natural selection" (but then what is it, supernatural selection?) This seems problematic for accounts that want to exclude consciousness from biology, unless there is an appeal to something like epiphenomenalism (which has its own plausibility issues). But I digress. I think it proves quite difficult to allow for goal-directedness and not to allow for values related to the completion or failure to complete goals.Count Timothy von Icarus

    You're addressing something here that's always been bothering me. I certainly think breeding should fall under natural selection, but I don't see an easy way to incorporate it. For example, what little experience I have with evolutionary psychology didn't impress me much. Douglas Adams's puddle analogy comes to mind here.

    How can one be wrong when making a judgement about something which has no truth value, where there is no fact in play? For instance, how can one "buy a bad car," if cars are never really good or bad? One can certainly say "boohoo to my past purchasing decisions," but you cannot have been wrong about a goodness that doesn't exist.Count Timothy von Icarus

    I'm honestly quite confused right now. A car that doesn't move is a bad car, but if we didn't want the car to be a car then it could be something else, which it always is - beyond the judging. I think what I'm going for is inconsistency-despite-continuity or something? If I ever figure this out and have the time (not likely today or tomorrow - depending on your timezone maybe even the day after tomorrow) I'll be back - unless someone else says it better (which has pre-empted quite a lot of posts from me).
  • What is faith
    Is medicine not a science? What about botany, zoology, or biology more generally, which have notions of health, harm, goal-directedness, function, etc. that all involve value? What about all the social sciences? Psychology, economics, criminology, political science, etc.? These often deal with values rather explicitly.Count Timothy von Icarus

    I'm a sociologist by formal training, though I never went down that path professionally and it's now a few decades in the past. But I'm quite familiar with the value discussion, and the funny thing is that my personal position on this topic is that value-free science is an unreachable ideal that nevertheless may have some function when you strive for it, though you have to stay vigilant and not pat yourself on the back for being all-out unbiased (you're not). Writing this post was a little weird in that respect; I was trying to put on an emotivist hat while wondering to what degree I am one. As I said before, I'm not that familiar with emotivism.

    That's a huge topic, though, and not all of it is relevant here. The scientist should be disinterested (i.e. not take sides when different factions want different outcomes). The question of positivism (e.g. Popper vs. Habermas for the social sciences). And so on. What I was going for:

    "Stomping babies is bad for them," may be a medical fact based on the medical ethos. The associated scientific facts need to be phrased quite differently: the not-yet-hardened skull is more vulnerable to boots, for example. (I'm hardly an expert.) But more than that, the appeal to "facts of science" looks like an appeal to either authority or objectivity. It's also unnecessary: common-sense "knowledge" will tell you everything relevant here, and science doesn't contradict it. And the value judgement "bad" in "stomping babies is bad for them" is used to fudge over the actual facts - and this works partly because of the ethos inherent in medicine.

    In sociology, for example, you come across studies that are arranged in such a way that they just echo common-sense knowledge without adding anything of value. I wish I had examples, but it's too long ago. People get away with this because, as one of my professors said, more papers are written than read these days. A lot of market research works that way. I've worked in market research, and my impression is that a lot of the clients demand suggestive phrasing (or a presence in group discussions, etc.) - methodology that will render the results useless. I suspect at least some know and don't care; they're after legitimisation rather than knowledge.

    In most contexts, saying that "stomping babies is bad for them" is a scientific fact is perfectly fine. Here, I think it fudges the topic (if the topic is emotivism). When you're doing science, you need to be very clear about the facts: define your terms, provide the data, etc. When you do that for the above sentence you'll likely lose the point you're trying to make.

    Whereas, IMO, if we go in the direction of "science says the universe is meaningless and valueless" we have left science for the realm of (often quite dogmatic) philosophy, and at any rate "emotivism must be true because 'science says' goodness doesn't exist," seems to be a pretty hard case to make, no?Count Timothy von Icarus

    Eh, first there's methodological naturalism, which is useful or not, depending on the science. As for "goodness", the most central related sociological concepts would be norms and sanctions, no? (I might be forgetting something obvious.) Rival theories often won't accept each other's set-ups - and that's part of the conversation. It's, IMO, necessary, as sociological knowledge always involves man-made meanings. You can't expect the one true way, here. And that's why how we look at social facts must always be carefully contextualised.

    So for example, I might say "stomping babies is bad" is a fact within the institution of medicine, as practised in predictable roles. So if I were interested in how the "goodness/badness of baby stomping" plays out here, I could look into that. But what I pay attention to and how I approach the matter requires some theoretical background, and that could include emotivist influence if I were so inclined. I've always felt that, in the social sciences, you shouldn't suppress your bias - you should lay it open.

    Of course, here I'm talking about a "social fact" (a topic for sociology, not medicine). The medical facts are about young bodies and what stomping does to them. The social facts are about... what? How we treat wounded bodies? How we react emotionally to the source of the wound, when we allow ourselves that luxury (not that often during an operation, I'd say)? Values frame other values; it's complex.

    So, then, what's the purpose of "stomping babies is bad"? To prevent as many babies from being hurt? To legitimise your anger? To explain your sadness? As I said above: it's a bundle. And it's a bundle you can tie up in very many different ways.

    This doesn't seem like emotivism anymore though. In this case, moral statements wouldn't just be expressions of emotion or sentiment ("boo-hoo" or "hoorah.")Count Timothy von Icarus

    Yeah, I've taken off my emotivist hat here. This is where I'd need to read up on emotivism more. What I'd want to know is how they deal with these topics:

    The emotivist thesis is that there is nothing else, no facts, to moral statements, just expressions of sentiments.Count Timothy von Icarus

    That's their way of expressing themselves. But just like, as you say, other people recognise the role of emotion, I would imagine emotivists have their own way to handle things like habits, norms, legitimisation rhetoric, etc. I expect them to unravel the threads starting with emotion, here, of course, so that everything follows from there. (Stevenson's first/second pattern analysis seems to hint at that, from my limited skimming of wikipedia so far.) When you come from sociology and develop an affinity for the hermeneutic approaches, you're kind of used to navigating and translating between theories. No two sociologists see eye to eye, in my experience. There was a running joke at the institute I studied at: two sociologists, four opinions.

    Smoking is an interesting case because neither I, nor any of the people I know who have quit, particularly miss it (maybe some social elements of it), but perhaps some people really do enjoy it immensely.Count Timothy von Icarus

    Yeah, I've never smoked, but I've encountered plenty of smokers who knew smoking was bad for them but smoked anyway. They weren't the ones who thought they should quit, either. What role does "bad" play here, I wonder? I almost addressed this, but decided against it. My post was long and unfocussed enough as it is.

    Well, let me just start by asking, can people ever be wrong about their own choices? Or are we always infallible as to our own choices as respects what is best for us, and if we later regret our choices they are only bad choices for some "future us" but not bad choices for the "us" when we decided to make them?Count Timothy von Icarus

    I have no real way to approach this question. My intuitive response is a joke: "Can anyone ever be right about their own choices? Being wrong is easy." I don't mean it like that, but it's underscored by my intuition. I think I might think of "right" as "provisionally unproblematic" or something? Not really saying much; just trying to uncover my bias here - unsystematically.

    If we can never be wrong about what is good for us, I don't think there can be any value in philosophy or introspection. Whatever we choose is right because we currently desire to choose it (so long as we always do only what we want).Count Timothy von Icarus

    Does emotivism say whatever we choose is right? Surely they're aware of conflicting emotions? In some ways, "right" seems like a magic spell to quiet that inner war. We want decision making to be easier than it is.
  • What is faith
    So on this view, "it is a fact of medical science that stomping babies is bad for them," being a value statement, would amount to "boohoo for baby stomping," but could not relate a fact or be based upon a fact.Count Timothy von Icarus

    Again, I know little about emotivism. I read up a little since my last post, mostly because I found it hard to believe that emotivism has no theories to deal with stuff like that. I came across, for instance, Stevenson's first/second pattern analysis, but I haven't thought this through enough to be comfortable talking about this. Instead, I'll just go with my intuition: what if Dawnstorm were an emotivist?

    First, that stomping babies is bad for them is not a scientific fact; it's probably a medical one. Science is, to some degree at least, supposed to be as value-neutral as possible, but it does need its cues as for what to do. Medicine, as a social institution, is meant to heal people, so that sets a context that sort of defines good and bad, as in: health is good. This is taken from a greater social context: you ought to act in such a way that you stay healthy. And so on. So, yes, I do think it's true that "stomping babies is bad for them" is a fact of medical science if you follow the traces of social values.

    If I were an emotivist, I'd likely intuitively see the anchoring of social values during primary socialisation as the fundamental process here. Beyond socialisation, though, there's a visceral, non-social reaction to being stomped, which I, as a baby, would express through crying (at least as long as I still can). It's not just one emotion; it's a bundle. And that bundle develops. It would not be the case that there's a simple one-to-one relation between any particular emotion and any one particular action. By the time we're able to act, it's already far too complex for that.

    Nevertheless, emotions don't just motivate moral statements. I don't even trust moral statements to do justice to the underlying bundle of emotions that tie you into your larger social context. And I do think that there are social facts that transcend any such time-space-bound emotional bundle. As an emotivist, I'd first have to be a methodological individualist: I'd have to de-emphasise the social context in favour of emotional growth. I'd need a theory of how rationality ties into this. And so on. What would remain the same, though: moral statements are surface phenomena, secondary to lived social praxis. And values are the basic motivational structure of an agent - emotions.

    One thing I'm not sure about is this: I don't think values/emotion is a one-way road. "Value <--> Emotion" rather than "Emotion --> Value". That is because both your visceral emotions and your social-belonging-derived emotions are constrained by facts: about (a) social realities, (b) biological bodies, and (c) biographic actions you've taken (even if by mistake, such as "stepping on someone's foot").

    Still, I'm not sure I couldn't make it work. At the very least, I don't find that your arguments dissuade me from emotivism.

    For example:

    But you seem to be suggesting that, so long as they are being reasonable, they cannot be wrong? If 10 years later they tell me "I wish I had listened to you," is it not fair to say that I was correct in this case?Count Timothy von Icarus

    What I consider the facts here is simply that present-me (I'm taking the role of the smoker) and past-plus-present-you currently agree. But past-me probably saw this coming. What we'd need here is a theory that crosses the temporal divide: past-me didn't have to suffer health problems, and present-me didn't have to live a life without smoking. The different temporal positions give an obvious bias towards the present situation (which is emotionally more acute). One could say that present-me has eaten well but doesn't want to pay, for example. How do you mediate (theoretically) between the two positions? Yeah, you correctly predicted the present situation, but so did past-me. It's even possible that, back then, I took this into account. Does our current agreement constitute a moral fact?

    Here I imagine some random passerby overhearing the exchange and taking it as an impetus to quit smoking. A future self made more vivid by a current example provided at an opportune moment in the biography. A story to be told to friends who notice he finally quit. This is the reality we live in. We're not unique - we identify with others, we imagine future outcomes one way or another, then what we didn't imagine happens and we imagine our past selves through this lens. But emotions are sort of fundamental to all these imaginings.

    So we make decisions to take the bad with the good (because pure good is rare), and then when it's time to "pay up" we wish we didn't have to. Not such a rare occurrence, and there are plenty of recipes to deal with it: askesis, the middle way... You're not going to reinvent the wheel. You can also just come to terms with it: I made my bed, so to speak.

    But what's the overall theory here - when priorities change? What's the temporal aspect of morality? I don't see this as a problem for the emotivist, but I feel you have to address it if you want to say that you are "right" in this situation.
  • What is faith
    To use the unpleasant example brought up earlier in this thread, that "being stomped" is bad for infants would appear to be about as obvious of a truth of medical science as there is. We might suppose that people have negative emotions as respects "baby stomping" in virtue of this fact.

    However, the dedicated emotivist often ends up resorting to claims like: "being stomped isn't actually bad for babies," and defending this claim (which I think most would judge to be obviously false) by appealing to the notion that all value judgements are just statements of emotion. But that's obviously question begging.
    Count Timothy von Icarus

    I'm not that familiar with emotivism, and I'm unsure if I'm one or not, but I wouldn't rule it out. So: do emotivists really make claims like that? That being stomped isn't actually bad for babies?

    It seems to me that there are plenty of other ways to deal with this that actually include emotions, rather than such vague "isn't bad for" language, which seems ill-defined to begin with. It sounds like speaking nonsense when cornered by unintuitive rhetoric.

    For example: I could say that it's pretty safe to assume that babies have a negative emotional response towards being stomped. That even if you don't have a negative emotional response towards stomping babies others have a negative emotional response towards you stomping babies, and you in turn have a negative emotional response to that response... etc.

    Moral discussions seem to be... difficult. And the rhetoric surrounding them is often already steeped in an implicit morality that the rhetoric then obviously can't deny:

    Is murder bad? Well, yeah. It has to be. That's the way the word "murder" is used. Morals are more complex, though, and (as Banno said) an iterative process. If we think a particular act of killing wasn't bad, we're not going to say this particular murder isn't bad; we're going to argue that it isn't murder. Then there's also the question of whether all killing that's wrong should count as murder or not. Imagine a society where "honour killings" are regulated and accepted, you've grown up in that society, but you've come to think of them as wrong. You wouldn't necessarily think of the act as murder: but if all the "allies" you can find come from outside your society and they do consider all "honour killings" murder, you might find your stance changing. It's only partly a linguistic change: your own doubts find confirmation elsewhere, and you internalise that way of thinking to some degree. But there are also nuances that you have to live with every day that might make it impossible for you to fully internalise that model. You might find you have more sympathy for someone committing an honour killing than for other forms of murder. You now have to navigate a moral path between a native and a foreign moral system. And you might find that others near you have the same problem...

    What you can do is reject the concept of murder to begin with (if you can): that is, there's nothing wrong with killing under any circumstances, therefore the concept of murder makes no sense. If you can really, truly live this, you're likely a very rare specimen, and you'll likely have internalised the moral rule in a different way: as knowledge that you won't be able to convince others and that you have to be very careful who to reveal yourself to.

    All of that is part of the iterative process that comprises the morals of a society.

    I'd also like to draw attention to the fact that what was at issue here wasn't actually just "stomping babies", it was "stomping babies for fun". So what about stomping babies out of... oh... frustration? The single mother who needs to work and can't sleep... It's not that stomping babies is now no longer wrong; it's that the expected conversation is likely to go off on any number of possible tangents. So we're going with "for fun" - for maximum expected agreement.

    But what's the source of agreement here, if not an immediate emotional response - most likely some form of disgust or anger? (Well, my immediate emotional response is more fatigue, but I don't react to the content as much anymore as I react to a rhetorical pattern I've grown tired of. I'm always tempted by troll replies such as: "Sure it's wrong. I mean it's fun, sure, but I restrain myself; I'm a person of character after all." My actual internal response is quite a bit more graphic and gross, with the sarcasm being more acidic. It's a personal hang-up.)

    It's not actually my intention to argue for emotivism here. I'm more of a social relativist: morals are always institutionalised in a society, internalised to one degree or another, with the deepest roots going back to early infancy and thus not even available for discourse. You-don't-know-it-could-be-different structures of cognition (a non-moral example would be the case structure of a language: easy to use but hard to explain unless you've looked at alternatives - most languages are nominative-accusative, but some are, for example, ergative-absolutive: you don't need to know the difference to speak English, but it's there, underlying it in the praxis. I have no such example for morality, but my base intuition tells me that's how it works here, too - though maybe the divisions here are biological in addition to social?)

    So what's the moral make-up of a person who thinks stomping babies is fun? What other things are fun? Does that person accept the social prohibition? What is the self-image here? These would be important questions, and ones you can't really ask outside a putative context.

    In terms of "right or wrong" in the abstract, you always need to set values, I think. For example: what values do you need to set for stomping babies to be wrong? This sort of theorising is actually pretty common: we don't just learn that something is wrong, we also learn why that something is wrong, so we can make ad-hoc judgments in surprising (previously unimagined) situations. There's always some sort of legitimisation structure underlying our morality: some things are tacit, some things are explicit, there's a hierarchy of relevance, etc. Some justifications are just more likely to work in praxis than others. But even what sort of moral justifications you can think of to begin with is part of your moral make-up. For example, "mine" and "yours" might be a primal impulse, but "property" is a social institution.

    As I said in my first paragraph: I'm not that familiar with emotivism. It seems to me that your moral make-up is going to influence your emotions as much as your emotions influence your moral make-up. But there was a time when all you had were emotions. Your moral make-up grew later - when you stopped crying whenever things were uncomfortable. Morality as an iterative process seems two-fold - psychological and social. You can't have one without the other. I think that maybe morality is tied in with the development of identity: a lived distinction between me and others.

    Changing nappies is oddly relevant here: think of it from the other perspective - toilet training. We no longer need nappies, and we rarely think about that. No freedom-from-nappies day. We just learned what a toilet is, and not just or even primarily in theory. We use it daily without thinking. And we don't tend to talk about it much. We certainly would bring it up if we encountered someone who, let's say, bucks the trend. There's a partitioning of time and space here that's second nature now but wasn't always. That's what morality is to me - basically toilet training (but more complex).
  • What is faith
    There's also the matter of scale. I have a reasonable expectation that my plane won’t crash (although perhaps this expectation has diminished in the U.S. under Trump?). In contrast, using faith to justify the belief that the world was created by a magic sky wizard -the literalist's deity- operates on an entirely different level. How can these two phenomena be meaningfully compared? It’s not merely that faith is a poor analogy for reasonable expectation; it's also about the magnitude of the claim being justified. The assertion that we can know the will and actions of a world-creating entity is significantly different from an empirically grounded confidence that air travel is safe. Perhaps the scale of the claim says something about why faith is a necessary concept for some.Tom Storm

    I tend to think of it very much like that, but I find there's a lot to talk about here. The line I've bolded is a good starting point:

    It's not entirely a poor analogy. I agree that "scale" is a problem here, but you can correct for that in pursuit of communication, maybe. Let me try to explain (none of this is fully thought through):

    The first major difference is this: when I get on the plane I do so knowing full well that planes can crash. I do not need reassurance that this plane won't crash. I usually don't even think of the possibility, which I'm very much aware of and furthermore wouldn't deny in the moment, either. I have no fear of flying, and I'd be perfectly fine watching, say, an "Airplane" movie on a plane (which might not be appropriate to air on a flight). There's some underlying principle at work; part of it personality, part of it experience (if I'd survived a prior crash, psychology otherwise being equal, I might have a different mindset).

    At the core of this, there's just lived habit. Something I don't need to talk about. I'm far more worried about getting airsick (something I've experienced multiple times) than about a crash. Why? Experience, I think. It's more imaginable than an airplane crash. Reason, I think, is secondary here, and it works because we tend to experience rare things less often (though one might be unlucky).

    Now, if my psychology were different and I were prone to worrying about crashes, I might be inclined to use calming rituals to get over it, find people talking about crashes in the lobby unsettling, etc. I might over time develop a state of knowing-despite-not-knowing, an epistemic buffer between primal worrying and operating on a daily basis - a lifestyle that includes stuff like habituated selective attention and certain modes of integrating new knowledge into this practical cognitive flow.

    Now this is limited in two ways: it's limited to a single topic (aviation), and it's personal (the rituals are for me only). So far, then, there's little basis for comparison.

    Now I'm fairly sure I have certain more comprehensive thought-rituals like that; thought-rituals that aren't limited to the topic of aviation. But they're not accessible to "discursive consciousness": they're vague and intuitive, there beneath all the daily flow of surface thoughts. And I think this sort of thing is necessary. And I wouldn't be surprised if believers pick up on that and call it "faith".

    A computing metaphor would be the OS of your consciousness. Everyone has their own personal operating system. And to that effect I actually don't think the analogy is a bad one. Everyone has their own sense of how the world works, as expressed in their daily functioning. But so far it's entirely personal.

    The question is how your personal OS ties into your social surroundings; are you faithful, deluded, seeking? What's the relationship between how your mind works, how your daily surroundings work socially, how others see you, and how you see yourself?

    I'm a homegrown atheist with Roman Catholic parents, who went to school in Austria and had compulsory (opt-out system) religious education in school, which was the only time our class was divided between Catholics and Protestants (with the odd student who didn't attend either class, either because they opted out - parental consent needed - or because their religion [or denomination, in very rare cases] wasn't covered locally). Now Austria is, and has been my entire life, a very secular country: that means the overarching daily life does not involve religion; religion is segmented off into its own meaningful province. There's a weak default assumption that church membership implies belief, but surprises are no big thing and change little. That is: I had to explain that I'm an atheist on occasion, and that's always been fine. The funny thing is that it's always also been fine in religious contexts; I've been to confession when it was part of religious education (optional; I could have refused without parental consent) and told the priest flat out that I was just there to bide my time, and we chatted. Similarly, I underwent "confirmation" - a Catholic sacrament establishing your faith - while being open about my atheism in the preparation group. It was no big deal.

    Why am I talking about this at length here? Well, because I've grown up in that sort of environment, I'm actually more familiar with the "God-language" as presented by the local Roman Catholic church than I am with any language that might express my inner compass better. This is a mismatch between what I can think and understand (myself) and what I can express (the other).

    The second thing of note here is that the lack of hostility towards me-as-atheist has left me with a neutral view of faith. The secularity of the society around me also ensured there's enough commonality in what counts as rational between those within a denomination and those without. (The late seventies/early eighties were also big on ecumenism - so there was already a widespread different-paths-same-goal mentality around, which meant different approaches to the rationality of it all to begin with.) I'm fairly sure that, since I didn't distance myself, a lot of the Christians (see above - confession, confirmation) might have thought of my "atheism" as a path that leads to God through doubt. Nobody was ever impatient or pushy. People from back then I considered truly "faithful" (in the religious sense) had a calm and... serenity? to them I could never match. I'm the slightly nervous, always ironic type.

    When the ideology that surrounds you is pretty familiar in its verbiage, but is decidedly not what you intuit and makes no sense, while what you intuit has little to no verbiage attached to it (in your mind), it's easy to externalise faith as what others do. However, there really is something that is missing - and that's a layer of social integration, a sense of ontological security - I'm living in an absurd world that nobody can explain to me. But at the same time, I have my thought habits I'm fine with. A gambler's mindset perhaps? That would work if I could see win-conditions...

    Basically, to the extent that my being an "unbeliever" is relevant, I'm living in a world with an ideology I don't share; I'm living without ideological validation, and without the language to explain myself. Because of my above experiences, I've never sought out sub-groups to integrate into - so now I have a lived substratum of praxis, very little ideology I feel like defending (some relativism, but not really fervently), and a generally ironic attitude towards the world.

    To some degree there might well be something like proto-faith here, who knows? The problem is I don't attach to abstracts very well. I have my favourites (relativism, phenomenology - for example), but it's more like a best-of list than something I deeply identify with. If what I have is faith-like, then it lacks an obvious target.

    If faith is "faith in..." then my "in" remains a question mark. I certainly don't have faith in airplanes; I know they can crash. And they can certainly crash with me in them. If they do? Bad luck. So what do I have faith in here, then? Luck, since I just typed "bad luck"? Maybe. But, see, there's no elaborate ideological system built around that. You just learn to live in a world where bad things can happen - one way or another.

    Take marital faith as an example: having "faith" in your spouse implies having internalised the rules of living around you. You know what marriage is, what you should do, what you should want, what you can expect. It's not just the person; it's also the social role - the lived praxis of being married. You can make your own rules, but they'll start out as deviations from a learned default. And the faith in your spouse has both a target and a form in a lived-through social institution, which you perpetuate and modify just by living "normally". And when things go wrong, you work it out.

    But your faith has a target, and it has meaning in a specific institutional context; one you participate in. Similarly, having faith in God is meaningful in a particular social context. How much you've internalised is an issue here. But with no such institution to appeal to, what is the meaning of my putative faith? Where do I get it from?

    I have the personal level, same as more or less everyone, sure. But beyond that? If we're both bottles that contain liquid, I just kinda stood out in the rain, while others might have filled under the tap, or even with a funnel... Not sure how much of this makes sense; I guess I see the biggest issues being social. How much social control and legitimisation, from where, from how many sources? How much in-group/out-group conflicts do you encounter?

    So is the analogy a good one or a bad one? I feel like you can tilt it this way and that; an attempt to build a bridge, or an attempt to solidify positions (e.g. religion vs. science - not a popular conflict around me). So what is faith?

    It's an interesting question, but in my daily life it's really just a word I don't use often (I did in this thread, for obvious reasons). And that means when talking on the topic I have little at stake, but it's also never homeground. So do I have faith in... something? Maybe. Then what follows from that?
  • "Underlying Reality" for Husserl
    Once again I want to raise the question of infants and psychological development.J

    I tried to respond to this when you rephrased yourself in response to me, but I didn't know how. This is fiendishly difficult to get a hold of. I'll try to get a few things in here; not sure how relevant they'll be, though.

    None of this standard phenomenological/Kantian picture can be said to obtain until a certain developmental point has been reached.J

    I want to emphasise, here, that Husserl didn't hold with the Kantian idea of a "thing in itself". The world we live in is constituted in the mind. It's probably possible to give a phenomenological account of the psychological development of a child, but this wouldn't be addressing the "underlying reality".

    During the reading of this thread, it struck me that Husserl's ego seemed very much like Descartes' cogito, so I googled what Husserl had to say on Descartes, and I found a text called "Cartesian Meditations", apparently only published in French during Husserl's lifetime. The text I skimmed over was in German (my mother tongue).

    Basically, both Descartes and Husserl start out with radical doubt; but Husserl holds that the phenomena present themselves in a particular way whether or not there's an underlying reality: the world we live in is always and forever constituted in our consciousness. This is the starting point, and it gets ever more complex from here on out. (One problem, for example, is the other: we construe them as having a consciousness of their own, much like we do.)

    What you're interested in just seems outside of the scope of phenomenological analysis, so we'd need some other frame of reference. I'm not sure what could apply, given that - to me - the singular strength of phenomenology (as it occurred in sociological theory) has always been that you don't really need to make up your mind about the underlying reality before drawing conclusions.
  • "Underlying Reality" for Husserl
    1) Is this theory meant to be a psychological description of how infants begin to constitute objects? If so, how do infants replace the shared-lifeworld aspect that seems so necessary to the description? Some equivalent of a Chomskian universal grammar?J

    This is a difficult question for me. Back then I didn't quite understand some of this stuff, and now I don't have enough memory about it. It's made even more difficult, given that the institutionalisation of psychology and academic differentiation has progressed quite a bit since Husserl's time, so even if I knew whether he talked about psychology (I vaguely think he did; or that at least someone said he did...), we'd probably have to dive into the history of what the term would have meant in academic circles back then and how that impacts now.

    In any case, I don't think infants "replace" anything under the theory. Rather, this is an ongoing process, and it's not stable. So for example, when a newborn child cries... is it already in a shared-lifeworld-to-be? Something to be developed from that moment on? I mean, usually a crying baby is going to be comforted by physical contact quite soon. It's not like you develop something you replace later; it's that you just develop, and left to your own devices you just... die.

    It's an interesting question, though.

    2) Does Husserl mean that what we encounter in the lifeworld must be as he describes, or only that it may be, for all we know? A similar question can be posed about Kantian noumena: Do we know that noumena do not resemble phenomena at all, or is it merely the case that we can't know either way?J

    Again, take everything here with a grain of salt, since I'm out of the loop, but as far as I remember Husserl's phenomenology didn't have much to do with the thing-in-itself. As far as I remember, where Kant speaks of noumena, Husserl speaks of noesis, which results in "noemata" - and these are part of the stream of consciousness rather than part of the world.

    I think Husserl might have said that resemblance is a relation between phenomena, and that to ask whether a phenomenon "resembles" some putative thing-in-itself is a category error. Also, the "shared life-world" in (1) is a phenomenon, something that emerges from the process of noesis. At the thing-in-itself level, if we posit something like that, the world isn't yet differentiated into perceiver and perceived to begin with.

    Personally, I think "flow" is a metaphor for something we can't grasp without metaphor, and as such it might not be the only applicable metaphor. You can easily imagine that objects exist as we see them, too, but that, too, would be a metaphor, and imagining an undifferentiated flow instead has the advantage of being different - so it's harder to forget that it's a metaphor for something otherwise ungraspable.

    Or to put it differently: there's one tree and there's another, and they're alike in some ways and different in others, so we can tell them apart and also categorise them together. You cannot make the same sort of comparison to something you can't experience. You can make any number of working assumptions - ideal forms, a material level of existence, an undifferentiated flow... The differences will relate to how you view the world: what's intuitive here is different for different people. Phenomenology needn't make any working assumption, because it starts "later". But that "starting later" has to be conceptualised to ground the ideas - say with the concept of "noesis".

    Basically, I think question (2) lies outside the scope of phenomenology, and I'm not sure in what framework to treat the question. (I have a hunch I'd have to solve "the hard problem" to even begin having an approach.)
  • "Underlying Reality" for Husserl
    We also want to understand how this flow is not an entity, nor is it composed of entities.J

    I'm not an expert on Husserl; my perspective on this topic comes from sociology - in particular from Helmut Plessner, Alfred Schütz, and Berger/Luckmann, all of whom were heavily leaning on Husserl. I've never read Husserl (except in the form of quotes); but what this reminded me of is Husserl's distinction between "Leib" and "Körper" (two German words for "body"), which Plessner discussed as "being a body" (Leib) and "having a body" (Körper). As far as I remember, this is a lived duality: you attend to one or the other and relate to that relation thus creating a tension field (from which, among other things, the subject-object distinction emerges).

    While refreshing my memory, I stumbled on a pretty interesting article about this, which I'll save here for myself (and I hope it's interesting for the topic at hand):

    Being a Body and Having a Body. The Twofold Temporality of Embodied Intentionality - WEHRLE, Maren

    I find the body to be the best topic to think about this, because it's basically the locus of our perspective: the structure of our bodies (eyes facing forward, ears on the sides of our heads...) determines our perspective within a broader world we're part of. But the borders between body/not-body emerge through some sort of worldly process. The very concept of "entities" wouldn't make sense if that difference didn't emerge.

    At what moment does the air in your lungs become part of you? This feels like a pretty silly and inconsequential question, but if we assume "entities", we'd need to answer it, or at least figure out in what way we can't. If we don't normally even ask this sort of question, it's because our daily praxis doesn't recognise it as thematic. It's also the reason why losing a finger is somewhat more relevant than losing a hair, and so on. There's a tension field here that never pins things down enough to make full sense, but it remains within a certain perimeter, so that we just have the equivalent of a tip-of-the-tongue experience.

    Or to sum it up: if we think of the flow, we are tempted to think of it in terms of "object/subject" - but we're actually engulfed - we're part of it. Metaphorically, we're maybe ripples that fall in on themselves and disappear - but it's all water. Trying to answer this question feels to me like a cartoon coyote running on air until he looks down... I can still reason, but nothing underpins it anymore. At some point reason stops being meaningful to me.
  • What does Quine mean by Inscrutability of Reference
    Artifacts are (for the most part) not self-organizing. A bikini isn't.Count Timothy von Icarus

    I'd like to think about this for a while. When you were talking about tigers a few pages back, you suggested there were two things that were important:

    Divisibility/Unity and Self-Organisation.

    Now you're saying that a bikini isn't self-organizing. I find this obvious at first glance, but it becomes less obvious when I look at divisibility: a bikini is already divided to begin with, in a physical sense, and is only a whole against a social background. Other clothes follow this pattern: shoes, socks, gloves... the bikini stands out by not being symmetric. So we sell panties and bras separately, but we sell bikinis as a unit?

    I'm thinking it might be useful to think in terms of system-integration, here, too: while we may be self-organising in terms of being an organism, we're not self-organising in terms of society, so we're not necessarily self-organising in the subsystem that includes bikinis. But that we're self-organising as organisms is part of the way society self-organises. So a bikini is only a bikini within the context of a self-organising system (such as society) that also includes us.

    How does this lead us back to Quine's inscrutability of reference? If society self-organises, and we're agentially involved with this but also "self-centered", and if what a bikini is emerges from that self-organising process, then what each of us thinks a bikini is does not necessarily exhaust what a bikini is on the higher system level. No two people in concrete situations will ever topicalise the totality of it, and the difference in attention/meaning attribution is one of the mechanisms that gives rise to inner-system dynamics.

    In other words, reference needs to be inscrutable on the organism level, as organisms aren't made to operate on higher organisational levels.

    I'm not going to defend any of this. This was mostly an exercise in brainstorming. I'm playing around, if you will. But that's not meant as a sign of disrespect; it's how I best think through abstract topics that don't really come intuitively to me. It's a way not to reject them outright.

    While I'm just putting stuff out there:

    It would be more in line with popular trends in physics to say something like: "the universal fields are in flux cat-wise."Count Timothy von Icarus

    While I was googling terms in order to better understand this thread, I came across Karen Barad's agential realism. Sounds like a variation on this, maybe? Basically, if I understood this correctly, the relata in a relation don't pre-exist, but emerge from an "intra-action" of... not sure what.
  • What does Quine mean by Inscrutability of Reference
    I've never heard of permissivists before this thread, and I'm not yet done with reading Count Timothy von Icarus' link about "objects", but it turns out that if I had to choose between those options, that's where I would gravitate towards, though I'm not sure how far I'd go. This is a thought experiment I've had along those lines some years ago:

    Imagine a butterfly on a flower. Draw an imaginary globe such that part of the butterfly (and the flower and whatever else) is inside the globe and part outside. Watch the butterfly fly away. Now figure out some maths that allows you to run an exact simulation of the universe, except that you have to follow one rule: what's inside the globe will stay together. You can break time and space if you have to. If this is possible, you have a universe that's materially the same as the one we live in, except it's also entirely incomprehensible. That does sound pretty permissivist, doesn't it? The fun thing is this actually helps me make sense of the world (intuitively; I can't explain how).

    ***

    As for those semantic triangles:

    My thoughts tend towards the idea that what we have in our minds is "knowledge about the world", which is a web of concepts, a world view. Seeing, for example, a tree activates the tree node. Hearing the word "tree" (or reading it) also activates the tree node. But hearing the word "tree" also activates the word node, while seeing a tree only activates the tree node. And when we add new information, there are two possibilities: we construct word-first concepts or we construct experience-first concepts. All concepts are ultimately experience-first concepts ("ultimately" here simply meaning that at one time there were no words). If you're born into a language community, though, when faced with a "new" experience, you'll likely at first try to express it with the words at hand. Maybe there's a so-far ill-understood word that applies? If not, we can always ask a what's-that-called question. If presented with a "new" word, we can ask a what-is-a-[word] question, to which the answer will either consist of other words or of experiences (a picture, a demonstration, etc.).
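    Purely as a toy illustration of that node-activation picture (my own sketch; the node names and the one-step spreading rule are invented for the example, not a claim about any real cognitive model), the asymmetry can be put like this:

    ```python
    from collections import defaultdict

    class ConceptWeb:
        """Toy web of concepts: directed links say which node evokes which."""

        def __init__(self):
            self.evokes = defaultdict(set)  # node -> nodes it activates
            self.activated = set()

        def link(self, src, dst):
            self.evokes[src].add(dst)

        def activate(self, node):
            # activate a node and spread one step along its outgoing links
            self.activated = {node} | self.evokes[node]
            return self.activated

    web = ConceptWeb()
    # hearing/reading the word "tree" also evokes the concept...
    web.link("word:tree", "concept:tree")
    # ...but seeing a tree activates the concept node alone

    seeing = web.activate("concept:tree")   # just {"concept:tree"}
    hearing = web.activate("word:tree")     # {"word:tree", "concept:tree"}
    ```

    The one-directional link is doing the work here: the word evokes the concept, but the percept doesn't drag the word along with it.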

    I've always thought of the Saussurean model as zeroing in on the thought-signbody (signified-signifier) angle and ignoring the object, not because it's not important, but because it's not part of the discipline of linguistics. Under the structuralist model, we look at the nodes. Take away one node, and we have to distribute the content among the neighbouring ones. It's a valid, if limited, way of looking at language.

    Now, if you want to bring the object back into the discussion under the Saussurean model, the only real opportunity is to treat the object as an alternate sign (which it isn't if you expand linguistics to include pragmatics - Austin, Grice et al.). And that's how you basically get Derrida. There's no ultimate signifier, it's all différance, and we cling to our binary oppositions so we don't get carried away by the current. (I think Derrida still has something to say, but I lose that sense with people like Lacan...)

    Quine's inscrutable reference, as illustrated by gavagai, is fairly intuitive to me. Makes sense. People don't need to know what they're talking about; they need to get things done. So:

    Others, perhaps you and I and maybe Dawnstorm, think that there may be multiple ways to divvy up stuff, each of them capable of being coherent if not complete.Banno

    Yes, that's pretty much me, too.
  • What does Quine mean by Inscrutability of Reference
    Well, "cultural" would tend to imply a diffuse, collective project, right? But surely a man stranded on a desert island can come to recognize new species of flora and fauna there, and abstract their properties from concrete particulars, or even come to name them, all in isolation.Count Timothy von Icarus

    Sure. New words crop up all the time. Someone (or a group of people) would have named the computer mouse "mouse", for example. I maybe wouldn't call it a "project", though. It's less directed, more just a process of people living together - an iterative process to be precise. When you use a word you both reaffirm it and change it ever so slightly.

    I'll pick out a line from your response to Apustimelogist, because it struck me as interesting:

    However, words generally try to focus on the actual, not the potential.Count Timothy von Icarus

    I have a hard time formulating my thoughts. On the one hand, it's clear that we can only name what's there (or what we've thought of - see science fiction/fantasy, for example - but that doesn't impact your point, I think). But on the other hand, a word needs to be general enough to accommodate the unforeseen, or we'd have far more neologisms than we actually have. That is: a certain openness must be baked into language for it to be useful.

    Take a look at Apustimelogist's latest post about evolution. That's basically the old paradox: if you remove a grain of sand from a heap and keep going, when does the heap stop being a heap? In other words, when do you need a new word? Chesterton, in your quote, doesn't seem to like considering grains of sand in a heap, if that makes sense.
  • What does Quine mean by Inscrutability of Reference
    are cats and dogs best thought of as "cultural abstractions?" Or are they just abstractions of a certain type of organism.Count Timothy von Icarus

    What work does the "just" do in this sentence? They're cultural abstractions of a certain type of organism. If you're living in a society, its culture (or subculture) will influence how you abstract. (It will also influence how those organisms act, which is another, more indirect, source of influence on how you abstract.) They're not abstractions of institutional facts, but the abstractions themselves are institutional facts. (That's something that's often left unacknowledged in current discourse on gender, for example - where we're talking about organisms.)
  • What does Quine mean by Inscrutability of Reference
    The emphasis on "sign" is problematic, in that it supposes that the main purpose, or fundamental element, in language is the noun.Banno

    Syntax as pattern, semantics as what we do with the pattern?Banno

    Hmm... I'm more familiar with linguistics than philosophy, but I'd say both syntax and semantics are patterns and how we use them. I think the actual real-life interpretation isn't complete until we add the third level of analysis: pragmatics. That's the huge contribution of mid-twentieth-century language philosophy: notably Austin, Searle and Grice.

    I'll get to my take on Quine through all this; I've never read him, only about him, so there's that.

    In linguistics, syntax and semantics are different ways words relate to other words. In syntax, we look at how words work together to make a sentence, regardless of what they say. Semantics, meanwhile, is about what the words used typically mean ("lexical semantics"): which words can you replace in this or that slot? Note that it's not about sign bodies. Ambiguity can be either semantic or syntactic. The textbook example is:

    We saw her duck.

    Syntax:
    a) We [personal pronoun, first person plural] saw [verb, past tense, indicative] her [possessive pronoun] duck [noun].
    b) We [personal pronoun, first person plural] saw [verb, past tense, indicative] her [personal pronoun, accusative case] duck [verb, bare infinitive].
    c) We [personal pronoun, first person plural] saw [verb, present tense, indicative] her [possessive pronoun] duck [noun].

    Lexical Semantics:
    Two different words with the same sign body:
    "to see" vs. "to saw". And "duck (n.)" vs. "to duck (v.)"
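    Just to make the combinatorics of the example concrete, here's a toy sketch (my own illustration with an invented mini-lexicon and a crude grammaticality filter - not a real parser or any standard NLP library): the three readings a)-c) fall out of crossing the lexical entries and keeping only the combinations that fit together.

    ```python
    from itertools import product

    # invented mini-lexicon: surface form -> possible (lemma, reading) pairs
    lexicon = {
        "we":   [("we", "personal pronoun")],
        "saw":  [("see", "verb, past"), ("saw", "verb, present")],
        "her":  [("her", "possessive pronoun"), ("her", "personal pronoun, accusative")],
        "duck": [("duck", "noun"), ("duck", "verb, bare infinitive")],
    }

    def analyses(sentence):
        words = sentence.lower().split()
        results = []
        for combo in product(*(lexicon[w] for w in words)):
            readings = dict(zip(words, combo))
            her = readings["her"][1]
            duck = readings["duck"][1]
            saw_lemma = readings["saw"][0]
            # a possessive pronoun needs a following noun;
            # an accusative pronoun needs a following bare infinitive,
            # which in turn requires "saw" to be the perception verb "see"
            if her == "possessive pronoun" and duck == "noun":
                results.append(combo)
            elif (her == "personal pronoun, accusative"
                  and duck == "verb, bare infinitive"
                  and saw_lemma == "see"):
                results.append(combo)
        return results

    parses = analyses("We saw her duck")
    # three analyses survive, matching readings a), b) and c) above
    ```

    The point of the sketch is just that the ambiguity doesn't live in any single sign body; it lives in how the lexical options combine.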

    Syntax can change the meaning of a sentence without touching lexical semantics:

    1. The cat sat on the mat.
    2. The cats sat on the mat.

    The suffix -s indicates plural. Thus "cat" evokes one cat, and "cats" evokes more than one cat. That's a difference in meaning, but it's not expressed through different word choice ("lexical semantics") but through syntax (the plural suffix "-s"). You'll probably see how this is one pivot point for different theories conceptualising the study of meaning. (Not all theories go this route.)

    So I have one major problem with understanding this quote of yours:

    The emphasis on "sign" is problematic, in that it supposes that the main purpose, or fundamental element, in language is the noun.Banno

    It's a problem with the lexical semantics of the word "noun": I do not know what you're referring to. I suspect that it has something to do with "nouns are words for things" and reification, but I can't construct a coherent meaning.

    In lexical semantics we're basically creating a dictionary, a list of words we must know so we can use them. That is, they're all, at this stage, decontextualised. Reference, on this level, is not a reference to things but a reference to abstractions: content words refer, function words (such as "the") don't. [Again, this is controversial.]

    For example, if I modify 1. above to read:

    3. The dog sat on the mat.

    then I haven't changed the syntax at all, but I've certainly introduced a new word. Since I just mention the sentence as an example, and I don't actually say anything about animals and mats, I'm not referring to a real-life state of affairs. I am, though, referring to certain common cultural abstractions: "cat", "dog".

    If I were referring to a real life situation but couldn't quite remember who sat on the mat, I could say:

    4. The mammal sat on the mat.
    or
    5. The furry animal sat on the mat.

    Or any other combination.

    And if I quite clearly remembered the cat, I could say:

    6. The feline sat on the mat.

    4./6. are purely semantic changes (though "feline" is morphologically different from "cat", being derived from an adjective - but we're not talking morphology...), and 5. also includes syntactic changes.

    When you want to know how people refer to things using words, you're not using that model, though it might be part of your methodology if you so choose. You also need to know what people do with words.
    For example, you'd know that "What circumstances do you have in mind?" is not among the expected reactions to "Could you open that window over there?" even though the inventory-level interpretation would allow for the response. Language occurs in context.

    So, on to the "gavagai" example:

    The anthropologist would have two problems here:

    1. the Lexical level: Am I making the same abstractions as the native?
    and
    2. the situational level: When I'm pointing towards the rabbit, am I paying attention to the same thing that the native is paying attention to when he sees me pointing?

    And I think what Quine is trying to illustrate with that example is that we can't ever answer either of the two questions with certainty, because any clarification attempt runs into the same problem.

    Not sure where Quine goes from there, but I think that real-life interactions lead to satisfaction among the participants, and when everyone's satisfied (actually, when everyone assumes of each other that they're satisfied, while being satisfied themselves), people make working assumptions about what the words mean, which they maintain until they have reason to modify them. And it's this sort of process, repeated over and over again by lots of people, that lets people approach, asymptotically, some sort of ideal abstraction: people are constantly bringing into being and modifying what they assume is already there - as a concerted effort. So, yes, I'm a constructivist on that matter.

    As for the triangles: I like the first one @Count Timothy von Icarus posted best, as the dotted line at the bottom makes sure to emphasise that the relation between signifier and signified is an imputed one. And I also like that the "thought" sits on top. I think the source is Ogden/Richards' The Meaning of Meaning, but I'd have to check to make sure [it doesn't say]. I like that, because I tend to think of thought as a process: not one thought, one clear-cut piece of mental content, but a stream of consciousness, classified and edited by analysis, so we can think about that.
  • What are the top 5 heavy metal albums of all time?
    It's just a convenient label that I made up, though I'm sure other people made it up before me. It's hard to be original. By "Hippe Rock" I just mean bands that sound like Jefferson Airplane, The Mamas and the Papas, Pentangle, etc. Perhaps Folk Rock or Psychedelic Rock might be a more appropriate terms.Arcane Sandwich

    Ah, gotcha. It's all fluid anyway. And I can't say I know Black Sabbath well enough to have much of a view on their sound. A lot would have come out of 60s psychedelia. That Deep Purple album that has "April" on it and the Hieronymus Bosch cover is deeply rooted in it. Led Zeppelin is basically blues and folk with more guitars, at least early on. And so on. This makes genre very hard to gauge.

    For example, some tracks on Bowie's The Man Who Sold the World feel not that much "softer" than the genre stuff at the time (try "Width of a Circle", "Saviour Machine", "She Shook Me Cold"). A lot of it feels like evolved blues.

    It's interesting to see how things evolve, sometimes even on one record. Try Alice Cooper's Easy Action, where you get songs like "Shoe Salesman" along with "Still No Air" (which is closer to what he would become).

    I also never felt that much of a difference between Born to Be Wild and Smoke on the Water, for example. I could probably hear the differences if I listened for them, but they don't matter in my reception.

    My favourite era in music would probably be around 1967 - 1973. A lot was going on, and little was settled.

    No one had mentioned Uriah Heep until yet, great band. Blue Öyster Cult was mentioned but we didn't dwell too much on it, I don't know why. Awesome band.Arcane Sandwich

    Hm, the thing is Uriah Heep, to me, doesn't feel like an album band. I don't have stand-out albums for them. I generally love a few songs on every album, and usually there are no bad songs (Gypsy, Bird of Prey, July Morning, Return to Fantasy...). Maybe it's just hard to choose an album because of that? Or maybe it's just me who feels that way. Heep came up mostly with "The Magician's Birthday", which surprised me, to be honest. (It's an album I've heard but don't own; I've looked at the track list and can only remember Sweet Lorraine.)

    With Blue Oyster Cult, I tend to gravitate towards albums: I like Fire of Unknown Origin and Imaginos, for example. (There are plenty I don't know.)
  • What are the top 5 heavy metal albums of all time?
    I would say that both Coven and Sabbath sound more like Hippie Rock than what we usually think that metal sounds like. In that sense, I would say that Motörhead sounds more metal than both of those bands.Arcane Sandwich

    I'm not that knowledgeable about genre, but "hippie rock"? This is just a random comment triggered by this paragraph, but I just had to remember that "Motorhead" (the song) was first a Hawkwind song. I've heard that Lemmy's sound with Hawkwind came from playing the bass as if it were a guitar. Not sure where, and not sure if it's true. You hear a lot of things. It's just that... if Hawkwind isn't "hippie rock", I'm not sure what you're talking about.

    Out of curiosity, I've looked over a few top lists online to see if I even know enough metal albums. Turns out, I know mostly the classics/progenitors (Black Sabbath, Led Zeppelin, Uriah Heep, Blue Oyster Cult). I sort of feel like I can't make much of a contribution. If I like a metal album, it's probably not very metal - heh. (My favourite Black Sabbath album, for example, is Sabotage. That came up on maybe one list; it's mostly Paranoid, Black Sabbath or Master of Reality, and I don't know the latter two.)

    It's an interesting thread to read. When I have the time, I'll listen through some tracks. (Thanks for Afroman; he's brilliant.)
  • The Philosophy of Alignment, Using D&D as an Example
    Well, in terms of D&D, this is mostly a rule of thumb to be interpreted in any way practical by the Dungeon Master. In terms of genre history (F/SF), it's a marriage of Tolkien and Moorcock, though their overarching morality systems are not that compatible, I'd say. In terms of gameplay, I'd say the alignment system is primarily social - how you slot into society. So for example, to be a paladin you need to be lawful good; but I've always found this to be a requirement on effort rather than personality: you should strive to be lawful good, no matter how difficult that would be for you psychologically. That makes a difference. But there is still a psychological component: it's not just how other people expect you to behave, but how you think you should behave, and there's overlap here.

    I've got a degree in sociology, and what the D&D alignment system has most reminded me of is a typology developed by Émile Durkheim in his book Le Suicide. Durkheim was trying to establish sociology as an academic discipline at the time, and he was relying on Comtean positivism. He chose the topic of suicide because it's a very personal topic and was, at the time, considered the domain of psychology. If he could show that sociology had something to say about the topic, he could clearly demarcate what sociology as a positivistic science has to say about society.

    He examined the suicide rates of different countries and came up with four types of suicide, three of which he considered relevant (the fourth he only named in a footnote). These form two pairs - egoistic/altruistic and fatalistic/anomic - and the relevant dimensions are:

    - integration into society
    - regulation by society

    I've always thought that integration (egoism/altruism) corresponds nicely to good/evil, and that regulation (fatalism/anomie) corresponds neatly to law/chaos. It's certainly not the only way to look at it, but it just feels like it could fit.

    Note that under this view, "evil" would label a lot of people as evil whom we don't particularly consider evil: it's more a measure of outsider status. And that might not be the intention. So for example, someone who hasn't internalised any of the local values, but on the surface keeps to the "laws of the land", would be lawful evil, even if he showed basic human traits such as compassion for people in need.

    Another problem is that alignment would be relative to any particular group: a member of the mafia could be lawful good when the reference group is the Family, but neutral evil when the reference group is wider society. Your social integration/regulation spreads out in concentric circles, so to speak, and different allegiances can create inner conflict. So it's not at all clear whether the social model is what we're after (that depends on what we need the model for, for starters).

    No conclusion really, just thoughts.
  • Yukio Mishima
    Sounds like "mono no aware" (loosely, the surprising poignancy of things), which includes an in-the-moment appreciation of beauty, sort of like being lucky to be there in the moment? An example would be the art of fixing bowls with an inlay of gold or silver to highlight where they were once broken (rather than hiding the damage as well as we can, as we do over here in the West). I'm hardly an expert in Japanese culture, but I'm certainly not used to having this concept accompanied by such violent language. I'd guess it's an expression of passion?
  • How do you know the Earth is round?
    What's your preferred method of guaranteeing you're travelling the same direction?flannel jesus

    Try lots of times, and if you never find the edge of the world... (It's not practical anyway, since you need to fly. That means you'd need to get a pilot's license, and then you'd need to trust your instruments...)
  • How do you know the Earth is round?
    Travel in one direction until you end up where you started. (Unless you think the flat Earth has Pacmanesque warp points?)
  • What are you listening to right now?
    I'm in this kind of mood:





  • p and "I think p"
    Moreover, if "I think" was required for self consciousness, it would be odd, right? Because some animals are definitely aware of themselves but don't have language.fdrake

    So what's the relationship between thought and language? I've often found myself confused on that topic when reading philosophy, as if there's a basic assumption that thought is inherently linguistic. I can't find a good approach to this thread because of this confusion.

    To my mind, there's this "stream of consciousness". When we think of "thought", I think we abstract from this stream of consciousness and structure and order it in some way. And language helps with this, because sentences are artifacts that spring from the stream and have a fixed form, as opposed to the flow. But language (as in sentence-forming) isn't the only thing that's fixed in a way that helps us structure our stream of consciousness, as is apparent to me when I build a Lego model or solve a sudoku (no words accompany this kind of activity; my stream of consciousness is "silent").

    To me, language is an extra that may or may not accompany thought - this is my experience. But to "pin down" a thought I need language. Language fixes a thought in a sentence, and then you can think the sentence and think more complex thoughts, because you can store the bundle (by repetition) while bypassing the sentence. Maybe. So some thoughts we have may well be impossible without language, while still not being associated with a particular sentence.

    But with self-consciousness being rather basic, I have no problem ascribing non-linguistic thought to animals without language. I'm just unsure whether we disagree about anything, or whether I just have a broader intuitive concept of what makes a thought.

    I get even more of a headache when I remember that the "p" in "I think p" likely stands for "proposition"... I've left that aside for now, deliberately. Because, when I make a post like this, I'm always insecure about my sentences representing my thoughts (or about my thoughts being stable to begin with).
  • The case against suicide
    To make a case against it you'd have to engage with why living would be preferable when it's not a requirement to be alive.Darkneos

    I honestly don't understand what you're after, though. "Preferable"? So I consider suicide: (a) do I prefer to continue living, or (b) do I prefer to die? That's a choice. "Requirement"? Someone or something requires me to live. Who? What? How does that impact the choice I'm about to make (as soon as I stop dithering)? Or would you like some convincing philosophical position that makes the choice moot?

    The two poles aren't equal here. It's not a choice between two equally attainable options, where you can also just walk away. Vanilla or chocolate ice cream? Meh, I want strawberry. Maybe next time. If one wonders whether one wants to die or not, one is necessarily alive. You don't need to make a decision to go on living: that's the default state. When I was suicidal, I was constantly dithering until I was no longer suicidal. I never made a choice, so I still live. If I'd made the choice to go on living, that would, presumably, have changed the way I went on with my life.

    In real-life situations, rather than being between life and death, the choice is usually between taking different sorts of action: there are quite a few ways to go out, and there are quite a few ways to go on. A lot of the time, people may have decided to kill themselves, but they don't go through with it because they can't find a good method (success rate too low, too painful, leaves too much of a mess for others to clean up...). Some people might kill themselves because there's an easy method available (e.g. the gun in Daddy's locker), and because the way forward has no visible path. People don't pick between life and death at a cosmological slot machine. They decide to act: one way or another. (Or, as in my case, make no choice at all.) It's a rare philosophical suicide who chooses between life and death on some underlying requirement.

    That doesn't mean that there's no discussion here; it just means that, because I've read a lot about suicide over the course of my life for personal reasons, I tend to have my head filled with the practicals. So what could a requirement for life even be, in principle? The way I see it, living things live and eventually die. Any choice occurs during that stretch of time. "To live" is thus not a choice. The child that wasn't born doesn't get to choose life. The child that does get born doesn't get a say either. So the requirement must somehow be ex post facto: a requirement for the living to continue living. And they do anyway: until they die. So it's not so much a requirement to continue living (which is automatic) as a proscription: don't take actions that shorten your life. But then we're not quite at suicide yet. See, that can apply to any risk-taking behaviour, too: don't smoke, don't be a firefighter, etc. So maybe it's "Don't set death as your goal"?

    But if it's about goal setting, what do I make of this line from your OP:

    Desire for pleasures only applies if you are alive, if you die there is no need for any of that. Same with love, friendship, food, money, etc.Darkneos

    Pleasure and death are alternative goals you can set. As you say, they're mutually exclusive. What you're saying sounds to me like "Given that I'm dead, why should I set as a goal any of those things that can no longer matter to me?" But this makes no sense to me: first, you can't set any goals once you're dead. Second, once you're dead, that-which-matters-to-you is n/a. You're gone. It's a category error. It's not that things no longer matter to you; it's that mattering has ceased.

    This is a long and maybe pointless post, but I'm having trouble pinning down a perspective from which it makes sense to tackle your question. I hope you understand my troubles; otherwise we're bound to talk past each other.

    (Besides this, there's a secondary question I have: what if there's a requirement for life, but I don't like that requirement and kill myself anyway? But that's a different post.)
  • The case against suicide
    I'm going to question what a "reason for living" even is to begin with. I was suicidal from roughly 12 years of age to... maybe 16, and I was quite vulnerable to a relapse for at least a decade more, I'd say. It's hard to tell. I'm over 50 now, and memory isn't... reliable?

    I was never looking for a "reason to live," though. What I was looking for was... determination. Either way: the determination to get myself in order, or the determination to end it. I think if I'd found determination, I wouldn't be here today. Being a wimp saved my life, for whatever that's worth. I grew out of the suicidal mindset, but the language stayed with me. I still think, every now and then, that I should just end it. But I've lived through wanting to be dead, so when it comes up now (I don't say this out loud to anyone), I'm quite confident that I don't mean it. Wanting to die just feels different.

    Something I've often wondered, though, is this: what if I'd really found "determination"? What would I have done? Would I have killed myself, or would I have gotten my act together? It's possible, for example, that if I had been the sort of person who could summon the determination to kill myself, I would have been a person who didn't want to kill himself in the first place. I'm quite content never to find out, because quite frankly I don't want to go through something like that ever again.

    I don't have a "reason to live", though. And I don't feel like I need one. I find that life is... naturally persistent. I've been living all my life, and I'll be living until one day I won't be living anymore, which is a stretch of time available only as an abstract projection - I may call it death, but since it's not part of my life, it's not a state I'll ever have to contend with. Dying, though... Dying is part of life, and a lot of the ways to go are unpleasant. Unless you die really quickly, or just drift off in your sleep, you'll have to contend with dying. Dying is far more frightening than being dead, to be honest.

    So I just muddle through from day to day, enjoy what I can, and take on the rest as it comes. Life is value-neutral, though it acquires secondary value - as a perceived binary switch - through the balance of things enjoyable and not. You can switch it all off, but if you do, you're dead, and the question of whether it was worth it won't apply anymore. While I'm here, I might as well make the best of it, no? I won't always succeed, but, well... that's life. Because I used to be suicidal, and because the language never really left me, though, I have to stay vigilant. You see, a good internal "life sucks" can be quite cathartic, but say it just once too often and it becomes this... habit, and it takes over the way you think. That's quite frightening. From someone who's been through it: a life spent brooding about wanting to die is far scarier than death can ever be. It's a state of mind I don't ever want back.

    But at the same time, all this talk about "love", or "life is good"... it all feels hollow and unreal to this day. It's ineffective. And yet some of it is demonstrably true.

    As long as you think only of yourself, you will keep coming back to the same miserable thoughts again and again.unenlightened

    Oh, yes, have I ever been through this. Around ten(?) years ago, I remember saying that not much worked when I was in deep, but what ultimately helped me was "doing things and watching people". That's how I phrased it, and it got a laugh out of someone who I think might have been a suicidal teen. It's really simple. In theory, that is. Your worldview's quite a prison, tailored to keep you in.

    So if people ask for a reason to live, what is it they're asking for? A surefire plan to get through life without suffering? A teleological end, so that your life will have had meaning once it's gone? A pot of gold at the end of a rainbow you can chase even if you know it's not there?

    To me, looking for a reason to live sounds like a trap to keep you brooding. Life is value-neutral. Without it, you have nothing - which is sometimes good and sometimes bad, and when it's gone, it's neither good nor bad, because value has gone out with it. (Er... yes, we have social effects that outlast us and cast tendrils back in time to influence what we do while we can still do things, but my post's too long as it is.)

    So, yeah, what helped me get back into the groove was "doing things and watching people", as a younger, wiser me put it. Life won't necessarily get better, but the bad things get easier to bear, and the good things get easier to enjoy. The latter I found especially valuable.

    Not easy, though. Not easy at all. A song that gets it, but promises too much:

  • Critical thinking and Creativity: Reading and Writing
    I don't know that I'm caught up in a stigmatisation of telling. Or that I agree with a default of showing.Amity

    This I find difficult to talk about. First, I did quote what you literally said, but the "you" in that line was meant as the generalised you (as in "one says"). It's so difficult because the phrase means different things to different people, and it's not even always clear how the rhetoric relates to the praxis of writing.

    There's the rhetoric, with its personal impact and its social impact, and then there's the personal meaning of the phrase, which is part of the personal impact but not all of it. There's what the writer actually does, which again relates to both the personal meaning and the personal impact of the phrase; but the show-don't-tell part of writing isn't a thing on its own. Whether you're "showing" or "telling" in a particular section of text depends on how you interpret those terms (part of the personal meaning of the phrase). This means the personal meaning of the phrase potentially goes into the praxis twice: once as a generative rule and once as a corrective rule. And it's not clear that the generative rule and the corrective rule are the same, even though the phrase of origin is definitely the same.

    Then, beyond that, none of these rules is meant to be absolute. Nobody says that. They're meant to be rules of thumb. So figuring out whether an author who favours "show, don't tell" in his discourse about writing also favours it while writing isn't easy - you first have to figure out what the line means to the writer, and then you have to figure out how many exceptions are too many.

    None of that would matter much. What really matters in the end is the text. But then there's the social level: no matter what the rule means to any specific writer, the phrasings are socially "out there". "Show, don't tell" is a phrase you can google. And the discussions around the phrase cover various predictable meanings. And the time spent figuring out what this vague line means could be spent writing and developing an intuition for what to do (though some people find that hard to do without guidance, and thus seek out rules...).

    Now once the phrase is out in the open, people who don't yet have an understanding of the line will encounter it, and what they hear is a sentence of the form "do this, don't do that". So when they approach the problem of what to do when writing, they do so with that particular topic framed as one thing to do and one thing not to do. They'll eventually figure out that some things you do have to tell (i.e. the thing you should not do according to the phrasing), but by that time, showing is already the default. You're usually showing, but sometimes you have to tell. However, that's a judgement that doesn't fit all styles equally. To top it off, some people are natural show-ers (their "native style" tends towards that), but they might still worry that they tell too much.

    So:

    I could have phrased it better and I could have expanded...
    I was trying to say that both were needed.
    Amity

    Yes, but there's a way of talking about this that we all participate in. Me, too. I sort-of vaguely half-reject the rule, but that's also participating in the lingo. When I reject the rule, it's no clearer what it is I reject than what it is that others like about it. One thing you should know about me is that I have a chip on my shoulder when it comes to these writing rules. That doesn't mean I disagree with everything proponents of the rule say, or that I think you can't write well while keeping the rule in mind, or... or... or... If I go online to talk about writing, I'll always go into rule-blaster mode - and it'll never quite come across how I want it to (if I even know how I want to come across). I've built up a lot of frustration that way, and that's why I've been bowing out of writing forums.

    A message-board experience in creative writing sounds like my idea of hell. I guess some forums are more helpful than others.Amity

    Oh, it was lots of fun. I'm more of a short-story writer, but I did finish a very rough draft of a novel, which I doubt I'd have finished on my own. I had some excellent feedback.

    Hmm. I'll have to take your word for that.Amity

    Nah, don't take my word for it. That's precisely the kind of nonsense I catch myself saying when I go into rant mode. Here's again what I wrote: "it's just that people suddenly started putting the same few stock movements in place of the same few stock emotions." This is mostly a subjective impression from a biased mind, and it's now around a decade old, so on top of that it's a memory. First, I'd edit out the "suddenly". I'm fairly certain nothing about it was sudden. Second, the body-movement/stock-emotions part is more of an example excerpt (to be sure, I could probably find examples, but that doesn't say anything). See, when talking about why I don't like those rules, I find myself doing the same sort of thing I don't like about the rhetoric that surrounds them. I'm vague, I'm inaccurate, I make mistakes (I don't think I made one in this post, so no example for the time being), and so on. I demonstrably do know a lot about writing, but I'm hardly the only one, and on a message board I tend not to be as careful as I should be. I end up saying stuff that I find embarrassing (like that line, for example). So, no, don't take my word for it. Never take my word for it.

    So, a quick way to connect and evoke. A short-hand without the need for detailed explanations.
    Handy, especially when words are limited as in a micro/mini story.

    The repetitive use of 'Turned his/her head' isn't the same kind of short-cut. It's just unimaginative.
    Amity

    To be sure, I called "putting the same few body movements in place of the same few stock emotions" a cultural shorthand. This is what we're comparing to "turned his/her head". The head-turning I'd call just a bad habit. It's between me and my writing. But if you see many people make the same type of edits across a variety of texts, when different texts would need different approaches, that's a different problem. Do they edit their own texts the same way? Is this sort of behaviour triggered by the message-board environment? I don't know any of that, so I can't classify it on a personal level. I don't know if it's even a habit, and if so, whether it's a writing habit, an editing habit, a critiquing habit, all of it. Whatever it is on the personal level, it's a cultural shorthand on the social level.

    So the differences: Mine: personal level, about writing. "Theirs": social level, about critiquing (and sometimes editing). And in terms of judgement: Mine: flag for a re-write. "Theirs": Do it like this to improve your text.

    In the process of laying out this difference, I've noticed another aspect of my line above that's nonsense: "same body movements for the same stock emotions" is not only judgemental, it's unintentionally judging the writing I meant to defend (the "stock emotions" are what occur in the original writing, as opposed to the "body movements", which occur in the edit). Really, it's good not to trust me too much.

    Goodness. That is quite an obstacle for anyone, never mind a creative writer. I can't imagine how difficult that must be. Having no inner eye means not being able to visualise. This is key to imagination and perhaps links to empathy?Amity

    It's not that bad. In fact, for most of my life, I never noticed that I didn't have an inner eye. I thought that when people talked about it, it was more of a metaphor than it actually seems to be. I can visualise to a minimal degree: if I close my eyes and concentrate very hard, I can create a micro-second flash of an image. Research on aphantasia indicates that the "inner eye" can be trained. It's not particularly difficult to engage in creative writing with aphantasia. In descriptions, I tend to focus on a few key properties when writing; extended descriptions in the fiction I read tend to bore me if they exceed my capacity for detail retention. I sort of space out, then. If I want to see stunning scenery, for example, narrative is never going to cut it for me; I prefer the visual arts - where I actually have something to look at. I used to just put that down to taste - which it still might be, who knows?

    Why did you stop creative writing? Don't you miss it? Have you considered taking part in TPF's Literary Activity - either as a writer or reader, both?Amity

    I stopped creative writing when I got a job I didn't particularly like. I just felt too drained to actually write. I don't particularly miss it. I figure I'll pick it up again when I retire. I still occasionally generate story ideas, for example. I just don't feel like actually writing. Even when I was writing, I usually didn't share what I wrote (and what I shared on writing forums was usually written specifically for community activities). I've considered taking part in the Literary Activity here, but I think participation would overwhelm me - too much time and energy (I mean, just look at the size of this post, and it's not even about a particular piece of writing). Also, one thing I've noticed is that I don't like reading fiction on a computer screen. No problem with academic articles, blog posts, forum posts, etc. No problem with poetry. But fiction? For some reason it doesn't quite work for me on a screen. Weird.

    I read that Chekhov is the culprit who inspired the concept of 'Show, don't tell'.
    "Don't tell me the moon is shining; show me the glint of light on broken glass."
    Amity

    I've heard that. It's such a beautiful way to put it. It's so very much like Chekhov that the reflecting glass is broken. I bought a book of his short stories, which I really enjoyed. A lot of those rules go back to something an author once said. I find those examples really interesting in that sort of context.

    Maybe "show don't tell" is more like "tell the effects, not the fact." For example, instead of saying it was a dark and stormy night you describe indistinct shadowy movements, the trees swaying, rain pounding on the conservatory roof, and a door being blown open.Jamal

    Yes, and all that description takes time, which means you get into the meat of the story later. Or maybe the style's more exposition-heavy (e.g. Márquez)? Can you lead in with that sentence if you still give all those descriptions afterwards? On message boards, blogs, etc., these "rules" tend to mean sort-of-but-not-quite the same thing. Nearly everything you can dislike has a name:

    Exposition? Falls under "Show, don't tell," for sure, but more specifically "info-dump". Conveying information through unnatural dialogue? Google "As you know, Bob" (the Bob is optional, but if a name's there, that's usually the one). And so on... Actually, the admonition against as-you-know-Bob dialogue is one of the few I find very hard to dismiss: it's very specific, and exceptions would have to be very deliberately crafted (I'm sure they exist).
  • Critical thinking and Creativity: Reading and Writing
    The 'show don't tell' - has a point but, of course, some telling is necessary.Amity

    "Show, don't tell," is one piece of advice that's... vague. The problem is, since you're in a medium that almost always works with text, the only way to really show something is to tell about something else, so the author might think they're showing, but the reader might be reading one abstraction-level down, and thus read it as telling. So what does that piece of advice mean, in the end?

    There's an intuitive space that's almost always telling, and one that's almost always showing, but there's a lot of overlap in the middle. Often, there's no clear difference between showing and telling. For example, a simple line like "He picked up the phone" omits a lot of details, and whether you get a showy or telly feeling from it depends on what's going on in the scene and how important the event is. But that's not always in the text; it can be in the reader-side interpretation. (Of course, a single line occurs in a wider text, and it's that wider text that's showy or telly, not the line alone. I'm just simplifying because it's easier to make a point.)

    The basic question here is (a) when do people interpret details that are in the text, vs. (b) when do people imagine details that are not in the text? And what do you, as the author, want? This, too, ties in with point of view in some narrative contexts: for example, a telly line like "this made him very angry" might be a misinterpretation of unrevealed details by an unreliable narrator.

    The problem with standard rules is that they often guide attention in a rather limiting way. When you edit with a rule mindset, there's a danger that you lose the big picture. "Show, don't tell," in my message-board experience, discourages lines like "this made him very angry," and renders the situations in which such a line would work as exceptions. And beginning writers "must know the rules before they can break them."

    So the moment you say "The 'show don't tell' - has a point but, of course, some telling is necessary," you're already caught up in a rhetoric that stigmatizes telling and sets showing as the default, when what you really need is an understanding of how many details to use and when. It's not clear whether "show, don't tell" is helpful or harmful. That depends on (a) how you learn to interpret the line, and (b) what sort of style your intuitive voice tends towards.

    For example, when I was still writing, I noticed that my characters were "turning their heads" a lot when something caught their attention. All of them. When I wrote "turned his/her head", that was usually me putting in a shortcut. It's a physical detail, a sort of behaviour icon for some recurring type of event. It's not only repetitive; it also doesn't take the character's body language into account. So I have this private little rule that says "beware of swivel-head syndrome." So... should I peddle this rule? Should I just assume that many people share the same problem? Should I stigmatize head-turning?

    Not really, no. It's a problem I have. I can't just put it out there. However, "swivel-head syndrome" is, as an unintended consequence, encouraged by "show, don't tell" as a rule. It doesn't have to be, but that's been my message-board impression. Don't tell me something caught their attention, show them turn their heads. Out of the frying pan, into the fire. Again, it's not an inevitable consequence of the rule; it's just that people suddenly started putting the same few stock movements in place of the same few stock emotions. Like a cultural shorthand.

    So noticing this trend, I could abstract from my personal "swivel-head syndrome" rule and say something like "Know the body language of your characters!" But if that caught on (I doubt it would; it doesn't tell you what to do), it would likely be distributed as a slogan, its context would eventually be lost, and it would create its own set of problems.

    (Aside: One of the reasons "know the body language of your characters!" is useful for me is that I have aphantasia. I have no inner eye. I can't see my characters at all, and often don't even know what they look like until some setting interaction fixes a trait. So making up random body language in keeping with their personality helps me add some visual touches to stories. My swivel-head syndrome is a side-effect of getting lazy in that process. People with a vivid inner eye are almost certainly not going to profit from that rule, given that they probably just need to visually imagine their characters. It's very involved.)
  • Critical thinking and Creativity: Reading and Writing
    Care to say more?Amity

    Sure, but let me address the following first:

    A bit harsh, no? We can all be prick-ish and think we're right. Difficult to let go of own ideas/beliefs when challenged. But wonderful to be surprised by an 'Aha!' moment when reading or listening.Amity

    A bit harsh? Yes and no. It's not that I thought that's how I came across to others. Sometimes maybe, but they can counter; that's fine with me. My problem was that the more I got embroiled in arguments, the more I found myself saying things that... I didn't really mean. I did mean them to some extent, but the matters of fact here are... difficult, and the moment you put something into words, you can think of a few ways it could be wrong, and so on and so forth. It became exhausting to argue a position more vehemently than you mean to while at the same time feeling that if you let go, the rebound of the opposing position would smash you right into a wall. In the end, I figured what I have to say isn't all that important, since my core point underlying all the creative writing stuff is that people have to find their own way. So I just retreated.

    As for the "chaos" comment, that sort of follows from what I just said: people need to find their own way. Writing seminars can certainly be part of that, but I find that... a lot of the advice I've come to expect works against that. There are those well-meant slogans: "Show, don't tell", "Don't end a sentence with a preposition," "Avoid the passive voice, adverbs, etc.", "a protagonist needs a goal"...

    Taken all together, these sorts of rules converge on a style. More than once I saw authors put up their writing for criticism, get a few predictable remarks (e.g. there are too many adverbs), edit the excerpt, post it again, and then get better responses. I once asked one of those writers which version they personally liked better; they said they liked the new one better, though they might just have been riding the high of the moment. The thing is this: I almost always liked the original version better. The edited version might be smoother, but it usually lost voice. What remained was that uni-voice style. Some authors naturally fit into that style (I think David Mitchell of Cloud Atlas is a good example), so it's not that the style can't produce a good voice. I'm not against that style. The thing is, though, that in direct comparison there's something in an author's original voice that gets lost when it's edited down to an industry standard.

    Another anecdote: When defending adverbs, my favourite example comes from the final paragraph of James Joyce's short story "The Dead". It's just a beautiful use of adverbs, and it contains sentence structures that would not be possible without adverbs. One time, though, I quoted what I thought was an ingenious use of adverbs in Salman Rushdie's Midnight's Children. The reply I got was that it was unreadable. Fine. I'm not going to argue against someone's taste. I loved the section, someone else did not. I moved on. Years later, though, I read an interview with Rushdie where he said something along the lines of having used too many adverbs in his earlier fiction. I wasn't sure whether that included Midnight's Children, though it's likely. The book is full of adverbs. Thing is, I really liked the style, and here the author himself aligns himself with the... prevailing trend.

    To be sure, it's not a loss. Rushdie's later books are still fun to read, and the older books won't go away. But it's sort of exasperating. It's like there's a set of industry standards slowly forming... taste. It's like these writing rules are slowly becoming true through... taste formation?

    And now go back up to the introductory paragraph: I don't actually think any of this is true. Real life is more complex, and I think I'm being melodramatic. One other thing that changed, for example, is that with the rise of Amazon it's become harder to find the books I'd like to read in bookshops, and I don't buy stuff online. So I'm sort of out of touch. For all those reasons, I don't really want to be believed.

    But at the same time, these anecdotes really happened. I've seen rough but interesting texts polished into a semblance of professionalism, losing that initial spark in the process. More often than I ever wanted to, I've seen texts polished until they're utterly dull. If I were a slush-pile reader, I probably wouldn't have accepted the original versions, but I'd have remembered them. The edited versions I'd have passed over without a second thought. So now, when I read a potentially interesting book that's ultimately not very memorable, I wonder if that happened here, too; if somewhere hidden in this version is an interesting original that's been edited out. See, it's entirely possible, likely even, that other people (including the author) really love the result. It's possible that that's just how they write, and that I just don't get it. That would be too bad for me, but all in all it would be all right. However, if there's really an original version out there which - for all its flaws - I'd have liked better - then that'd make me a little sad.
  • Critical thinking and Creativity: Reading and Writing
    I agree that there is overlap. However, I don't think that short stories are 'encouraged to spin out of control'. TPF's Literary Activity ( previously Short Story Competition) is a case in point. The latest: https://thephilosophyforum.com/discussion/15585/literary-activity-dec-2024/p1Amity

    Oh my, this was six years ago? I don't remember this post at all, and I had to go back to read this thread for a while to see why I was saying what I was saying and what I could have meant by it. So it was about creativity in academic writing?

    I'm quite fond of chaos in literature, and I find that - since I came online in the early 2000s - a certain brand of "creative writing" seminar style has taken over writing forums, to the point that I grew bored of them and abandoned them. I was also a bit of a know-it-all and a prick back then; I didn't much like my way of communicating any more...

    Basically, I think my main point was that in writing short stories you're allowed to let your mind wander even if it doesn't go along with your original impetus of writing the story. The same process ruins a philosophical thought experiment.

    It amuses me to see that, back then, I'd only written 50 posts!Amity

    Interestingly, I still only have 244 posts (including this one). I'm not the most prolific poster, it seems.
  • The Cogito
    Why assume that the thinking thing , and all its activities, is the most important and most characteristic part of being a subject?J

    One of the reasons I tend to stay out of cogito-ergo-sum threads is that I never read Descartes and am only passingly familiar with his work. I find the topic interesting, though, and this quote is a good lead-in to a problem I've always had with the usual responses to the cogito.

    You see, I came across the line in my childhood. I already knew enough Latin to parse it, so I must have been between twelve and fourteen, not quite a teen yet. And for a long while it struck me as (a) intuitive, (b) banal, and (c) rhetorically witty. I quite liked the line. Only later did I learn that my intuition may not have aligned with Descartes's, and it certainly didn't align with a lot of other people's.

    So on to your quote: according to my intuition, the thinking thing's importance is contextual. While I doubt (a form of cogitating), I can't doubt that I doubt. It's like a plug. A moment of certainty. As soon as I pull back only slightly, out comes the plug and life flows back in. Which is why "thinking thing" is a rhetorical stand-in. At the moment of "cogito-ergo-sum" you're certain of your existence, but nothing else. It's a holiday from doubting, but little else. Nothing can follow from it, since at that point no other interest can be cogitated about. You gained certainty at the expense of your worldview. No meaning is left. Cogito-ergo-sum is a dead end. You can pull back, but you can't take your certainty with you. But you, the radical doubter, have a place of rest. However, the you-that-needs-such-rest only exists by virtue of its connection to a world full of doubt, and you take that feeling back with you and rationalise it. For example:

    The Cogito points to the indubitability of the disunity part.frank

    Pulled back too far, but if that's the way you make sense of it...

    What does this mean? Is it unwarranted to conclude that he is a thing that thinks? Isn't thinking essential to being human?Fooloso4

    Pulled back too far, but if that's the way you make sense of it...

    And so on. (There are many more examples in this thread, I just picked two from the page I'm currently on.)

    I did come across a take on Descartes that resonated with my intuition once, but I forget what it was (a vague memory of "you can be certain you have a toothache, but not that you have teeth"). I want to say it was Ortega y Gasset, but I'm really not sure. In any case, the collapsing of a world-view into the cogito and the reconstruction of that world-view in daily praxis feels quite compatible with phenomenology as I understand it (which is not as well as some others on this board - I'm not a well-read philosopher).

    Not sure I made much sense here, given that I'm not sure how compatible I've ever been with Descartes or his reception, but that's where I've always been.