Comments

  • What is "gender"?
    Just here you said that a male might have just masculine traits. Could a female have just masculine traits? Or does the definition of 'female' preclude that?bert1

    I think the problem here is that it's not clear what it means to have "traits". Phenomenological constructivism says people construct a worldview, and within that worldview you make sense of the world in a way that fits what you do. So what you have to ask yourself is how it would be possible for someone with only "masculine" traits to be constructed as female within this worldview.

    Stuff you take for granted is only going to enter awareness when it becomes problematic. My initial hunch is that if you see someone who "only has masculine traits", you're likely going to just assume male, as some sort of "background radiation" of meaning.

    But even more importantly: by the time we notice a "feminine" trait about a "man", we've already categorised the "man" as "man", and that's why a "feminine" trait is remarkable enough to enter awareness and even come into focus as feminine in the first place.

    However, for a sociologist, the question is primarily empirical. If you do think of someone as female, but when you try to assign traits you can't think of a single stereotypically feminine one, why is that? That would be a very interesting situation, one that could lead to refining the theory in question.

    Note that we think of, say, a penis as a male attribute, not a masculine one. That's a social coding of some sort, part of what we think of as fundamental. The thing is, though, that genitals are usually not on display, and not all people wear clothes that make the general contours plainly visible, yet for the most part we slot people into male/female with no hesitation. We're probably wrong about that categorisation now and then without ever noticing, because attention is fleeting when it comes to passers-by. And we only notice that we're constantly assigning gender because the process sometimes fails, and we take a second look to figure that stuff out. So the stuff that comes up in discourse about whether or not the trans condition is real, like genitals or DNA, are probably not the traits we primarily use in day-to-day life to make those judgments. They're tie-breakers that work as long as worldviews are compatible.

    So let's start with your (2): how people feel inside. Children acquire their worldview while living with their parents and peers (and their wider social circle beyond that). At some point they probably acquire a sense of what it means to be a boy or a girl. But they've been gendered in other worldviews before that: and if you've just gone along with how people who gendered you in <i>their</i> worldview treat you, then your behaviour is going to be compatible with that worldview at the time you acquire the distinction in question, and the distinction is just one among many largely unproblematic facts about the world you take in. That doesn't mean that you can't sometimes "buck the trend". You can be a boy and play with dolls, for example. Depending on your parents' views on propriety, you're going to run into different amounts of "trouble", the lightest probably being a short moment of surprise, or maybe even none at all. The degree to which a boy playing with a doll is noteworthy is the degree to which the act is stereotypically feminine. You don't get an affirmative reaction to a girl playing with a doll in the same way, unless you've been "worried" that she's not "sufficiently" feminine. But all of this occurs on a baseline of maleness and femaleness. And that's your (1).

    Appearance matters quite a lot when it comes to gender assignment, and if we're not sure, we have a couple of more privacy-intrusive methods to check: genitalia, DNA, etc. Biology. But the way we look at biological sex is heavily influenced by our interest in the topic. The categories we use to describe sex are inevitably gendered.

    Trans people are, compared to cis people, very rare. They know they're trans because of how they feel inside, but that's hard to communicate, because other, much more common worldviews don't include that sort of discrepancy. So to figure out what to look at when it comes to biology you'd need to listen to them, but to listen to them you'd need to take them seriously, and accept that your failure to understand is your failure to understand, and not, say, a delusion of the person who feels something - to you - incomprehensible. Someone who responds to "I'm a trans-man, you're a cis-man," with "that's stupid; you're obviously a woman," isn't likely to be in favour of funding research into the biology of trans people.

    It's not that what we know about sex is wrong; it's that sex is gendered through a cis bias, so the categories we have to describe sex are insufficient for the needs of trans people. I looked into the recent research at some point and thought it was interesting, but I'm not enough of a biologist to understand that sort of stuff easily, and I haven't retained much. But there's definitely a gendered component to how we research sex and what we look at. So when you conclude here the following:

    It is senses 1 and 2 that determine a person's gender, and sense 3 only adds masculinity and femininity to that. So what I'm questioning is that sense 3 is not really about the male/female opposition, and wholly about the masculine/feminine opposition.bert1

    I'd say that we (cis-people) are used to using (1) to legitimise our gender, but it's really (3), too. But because we're the majority, it stands largely unchallenged and doesn't often enter our awareness. A trans person (and a genderfluid or agender person) would be more aware of (3), simply because they keep clashing against the mainstream. The main struggle is not to be agreed with; it's to be understood in the first place, or even to get people to realise that they're misunderstood. And it's difficult to talk about because the gendering of sex also heavily influences our vocabulary. That's how we get the new prefix "cis-". But it's difficult to promote the term when cis-people generally don't have the experience that pushes the entire problem area into awareness. There's a whole baseline of how some people relate to their body that we can't intuit. The same is true, presumably, for trans-people: what is it like to be blind to that area? The difference is that nearly everyone they meet will fall into that category, from childhood on. I've heard time and again what a relief it is to find other people with a similar experience.

    Gender, then, is the entire constellation. Which constellations, and how many, do we find meaningful? What do we attribute to biology, etc.? A social construct tends to only enter awareness if it's problematic, and the mainstream gender conception becomes problematic when we ponder trans people, intersex people, other constellations in other species (write an SF story about sentient slime molds, for example), other orderings of the same biological matters of fact in other cultures or subcultures, etc. A social constructivist would abstract (3) from a set of compatible worldviews, I think. (It's even more complex, because gender is only part of any given worldview, and worldviews might otherwise be largely compatible.)

    I'm probably not explaining this very well, since I'm... unsure myself. It's been more than 20 years since I read any of the literature. I left university in my late 20s and I'm now nearly 50. And it's really hard to understand in the first place, because you sort of have to imagine a world-as-is beneath a world-as-experienced, while also maintaining that you can't really do that.

    Think about animated films, for example: Robots, Cars, Brave Little Toasters... they're all gendered, without, logically, having a sex. We create the illusion of "sex" with very few signals, without actually assuming the underlying biology (since none of these characters have an underlying biology). How does that work? Gender is a sort of narrative we use to explain sex: without gender, sex exists, but is meaningless. Does that make more sense?
  • What is "gender"?
    I would be interested if you wanted to start a thread talking about the philosophy of social constructs more generally, since it's an area I'm lacking in formal education and a discussion of it would be informative.Pfhorrest

    I'm unlikely to make a thread, as I'm a slow reader and thinker, and if something's my thread I'd feel compelled to reply to everyone who replies to me, and that would probably take up more time than I can manage. I'm not primarily a philosopher, to boot; I'm coming from the sociological side, and that means I'm additionally not very confident I can even lay out the underlying philosophy. For example, phenomenological constructivism takes off from Husserl, but I'm familiar with Husserl via his sociological reception, which is already a bias.

    Some of the questions you have, though, I think are pertinent to this thread, especially the difference between "gender" and "sex" when it comes to the "male/female" pair (rather than the "masculine"/"feminine" pair), which I'm going to address in my reply to bert1.

    I can address a few basics, here:

    I'm particularly interested in something that seems to be implicitly believed by many of the kind of people who usually talk about social constructs, but not explicitly claimed so far as I'm aware: that not only are some things merely socially constructed, but everything is, there is no objective reality at all, and (most to the point I'm curious about) that all talk about things being some way or another is therefore implicitly an attempt to shape the behavior of other people to some end, in effect reducing all purportedly factual claims to normative ones.Pfhorrest

    This here is difficult to untangle, because I'm not sure who you're referring to. But generally that's part and parcel of the topic. They're sociologists. Sociology is a pretty young academic discipline, and there's a sort of we-can-say-things-about-this-too attitude common here. They're competing with economists, psychologists and so on. Philosophy isn't a competitor, but more a sort of grounding. The origins of sociology lie in Comtean positivism, and constructivist schools tend to openly disagree, but not all who disagree with positivism are necessarily constructivists.

    I'm not sure I'd say constructivists reduce all factual claims to normative ones, but there's definitely a trend in that direction. It's definitely strong, for example, in the Frankfurt school, who take the phenomenological perspective and modify it via communication, which they frame through Marx's historic-materialist dialectic. The idea is that every individual constructs a worldview for themselves, which is the basis for everything they do, but they don't develop that in isolation. I'm going to say this again in my reply to bert1, and use the example of gender and sex, so I'm not going to go into much detail here. (Expect a new post, since this post is likely going to be too long otherwise. Does this forum have a word limit per post? I've run up against that on other forums...)

    Basically, the important sociological concepts here are:

    Max Weber's "verstehende Soziologie" (Again, I'm not sure how this is translated into English. Wikipedia uses the header "verstehen" on the English page, so maybe they don't even translate that? The basic meaning of "verstehen" is to understand. Weber is talking about subjectively intended meaning, but he's also talking about constructing, methodologically, "ideal types" against which you compare empirical action.)

    Schütz's Husserlian analysis of everyday life.

    Marx's historic materialism.

    George Herbert Mead's social psychology (and maybe William Thomas' "definition of the situation").

    There's a bit of pick-and-mix going on, and I'm not sure I've captured all of the important stuff. (I'm trying to figure out, for example, if there's some anthropological lineage as well. Or maybe Mannheim's sociology of knowledge? A lot of this is a one-way road towards sociology, though, and probably not too interesting for philosophers?) I've never really tried to map "constructivism" in sociology before, so that's hard for me as well. Take nothing I say at face value. I've been out of the loop for too long, and I was never really an expert to begin with.
  • What is "gender"?
    I have not heard that usage, not even once. At no point have I ever heard a man referred to as a woman because of the roles he performs. Do sociologists do this? Do they go around calling male cleaners female?

    Are cleaners female (or lumberjacks male), by definition, in this sociological sense?
    bert1

    I'm not sure how you got this from Pfhorrest's post. The social roles in question aren't occupational; they're gender roles. Male/female is the distinction in question, and it combines with other distinctions:

    Age: Man/Woman vs. Boy/Girl
    Family: Father/Mother/Son/Daughter vs. Man/Woman/Boy/Girl
    Occupation: Waiter/Waitress (vs. the generalised occupational profile)

    And so on.

    "Gender" tends to refer to two distinct adjective pairs: male/female vs. masculine/feminine.

    And gender expectations aren't generally strict. In fact, if a male person only has masculine traits, people tend to think of him as hyper-masculine rather than as the norm, and when it occurs in adolescents we tend to think of it as "a phase". There may be strict elements, though, depending on where and when.

    It's my impression that the current discourse about being trans doesn't reject the mainstream construct, but treats it as insufficient. Most trans activists, for example, would be fine with a four-way distinction "male/female" and "cis/trans", while also being aware that this might not help genderfluid or agender people. The problem is that a social world traditionally structured for cis people isn't really equipped for the trans distinction (see public bathrooms or locker rooms), and in practice treating a trans woman like a trans woman isn't always possible because the mindset isn't widespread enough yet.

    Finally, remember what I said above about strictness and hyper-masculinity/femininity? Well, that's an area that tends to affect trans people differently than it does cis people. A cis-woman who is "too masculine" is a "deficient woman". A trans-woman cannot be "too masculine" under the same mindset; she - no, HE - is mistakenly, deludedly, or dishonestly claiming to be a woman. What gender construct you buy into and apply (automatically and unreflectively for the most part) heavily influences the social reality you see.

    The term "social construct" doesn't only have one meaning in sociology, but if nothing's changed the most common usage tends to come from phenomenology. Husserl - Scheler - Schütz. I think the most-cited text could have been Berger/Luckmann's The Social Construction of Reality when I graduated in sociology in the early 2000s, but I'm not sure. It's definitely a defining text, though. It's not really that important here, and it's also not the whole academic picture. I'm just mentioning it in case your interest runs deep enough so you have a place to start your research, should you want to.
  • A Nice Derangement of Epitaphs
    "Soup latrine", since it is a malapropism, does not occur elsewhere in the object language, or at least does not occur with any where near the of "soup tureen". Empirically it is not a good candidate for the metalanguage interpretation.Banno

    Ah, so the right side is empirical to some extent. I'm slowly getting there, I think (your reply to creativesoul about imperatives is helpful as well).

    ***

    About chess: I find the comparison hard. There's only one mishap I can imagine that's unambiguously a semantic mishap; more on that later.

    A pawn taking a step back could be a "semantic" error, or it could be a "syntactic" error. That's not me being undecided; I think chess blurs the distinction. If we take pieces as the comparative equivalent of words, then how they relate to each other is a syntactic relation, but in terms of the game that's also their only meaning. That's because, unlike language, chess has a clear procedure to "end the game". Language is open-purpose; you can do with it what you like. But a game of chess is over when no more moves are possible (or when someone gives up, or when the only possible moves left lead to an eternal loop). So a pawn taking a step back is unforeseen in the rules, and that's both a syntax error <i>and</i> a semantic error, because the pawn "doesn't move like a pawn", which is really its only meaning. (Aside from flavour meaning: it would make no sense for the chess-as-war aspect to have 8 kings on the frontline protecting a single pawn. That's not what's at issue here, though.)

    Ways to play seem more like a best-practices thing; more comparable to rhetoric than semantics.

    What I think comes closest to a malapropism in chess is the following:

    When setting up the board, put the knights where you'd normally put the rooks, and the rooks where you'd normally put the knights, and then play the game according to initial position rather than according to the look of the pieces. You'd have a piece that looks like a knight but behaves like a rook. That's pretty much what a malapropism is: it's a mishap about appearance, and it works because of the arbitrariness of the sign. As long as your knight-looking piece moves like a rook, it's a rook in all but looks. The biggest challenge is habit: if you're used to playing chess with a knight-looking knight and a rook-looking rook, you might confuse the pieces out of habit. That's an additional challenge, but it doesn't really ruin the game. Same rules and same pieces; just a mismatch in the "lexicon".
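    If it helps, the swap can be sketched in code (a toy illustration of my own, not anything from Davidson or the thread): each piece's legal behaviour is keyed to the square it started on, so its appearance carries no weight at all.

```python
# Toy sketch of the swapped chess setup: a piece's "meaning" (how it
# moves) comes from where it started, not from what it looks like.
# All names here are hypothetical, purely for illustration.

# Movement roles assigned by initial position: in a normal setup,
# a1/h1 are rook squares and b1/g1 are knight squares.
ROLE_BY_START_SQUARE = {
    "a1": "rook", "h1": "rook",
    "b1": "knight", "g1": "knight",
}

def role_of(piece):
    # The behaviour is looked up by starting square, so a
    # knight-looking piece placed on a1 simply *is* a rook in play.
    return ROLE_BY_START_SQUARE[piece["start"]]

# The swapped setup: knight-shaped pieces on the rook squares
# and rook-shaped pieces on the knight squares.
swapped_setup = [
    {"looks_like": "knight", "start": "a1"},
    {"looks_like": "rook", "start": "b1"},
]

for piece in swapped_setup:
    print(piece["looks_like"], "moves like a", role_of(piece))
# → knight moves like a rook
# → rook moves like a knight
```

    The lookup never consults `looks_like` at all, which is the arbitrariness of the sign in miniature: the game runs fine on the mismatched "lexicon", and only a player's habits trip over it.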
  • A Nice Derangement of Epitaphs
    Thus the meaning of a word is the "contribution" it makes to the truth or falsity of sentences it could appear in.

    Does that make sense yet?
    Srap Tasmaner

    Sort of. I still feel it's a little awkward; starting in the middle, so to speak, and then figuring out the meaning of words and texts both on the basis of sentences (if that's what happens). But at the very least I can work with it, I think. Need to let it settle for a while.
  • A Nice Derangement of Epitaphs
    ↪Dawnstorm, by way of a start on explaining T-sentences.Banno

    I do think I get that much, and I also get:

    The interpretation is not word-for-word; it is holistic.Banno

    The problem I have is a different one:

    There are many different possible sentences that contain "soup latrine". There may not be a limit to them. Assuming that "soup latrine" is a malapropism for "soup tureen", all the truth conditions would hold the meaning we associate with "soup tureen". Since only the left side of a T-sentence is the actual utterance, it shouldn't matter what word is on the other side:

    A) "This is a nice soup latrine" is true iff this is a nice soup tureen.
    B) "This is a nice soup latrine" is true iff this is a nice soup latrine.

    The only difference between A) and B) is whether I use the word "soup latrine", too (in the meaning I have heretofore associated with "soup tureen"), or not. The interpretation, being holistic, remains the same either way. How do T-sentences deal with word meaning? Why aren't A) and B) synonymous? How do I interpret a host of different sentences (A1...An) in which the systematic difference is that there's "latrine" on the left and "tureen" on the right?

    By virtue of the T-sentences, "This is a nice soup latrine" and "This is a nice soup tureen" should be synonymous, so I should be able to write "soup latrine" in my attempt to transliterate my holistic interpretation, too, right?

    Now try explaining why this doesn't work without recourse to convention. Am I making a mistake here? Where?
  • A Nice Derangement of Epitaphs
    We ought add Davidson's own semantic theory. It looks to me to be the best candidate for a prior interpretation.Banno

    As far as I can tell, Davidson wasn't very influential when it comes to developing pragmatics as a field, even though this article would have fit to some degree, so maybe I just never came across the name. (Wittgenstein was the only one I knew before studying linguistics.)

    Is it worth setting it out here?Banno

    It'd definitely help me understand the article. I've done a little research on truth-conditional semantics, and the most glaring omission is that I can't seem to find out how it deals with word meaning, since truth claims seem to require sentences/clauses. I actually meant to ask, but I forgot.
  • A Nice Derangement of Epitaphs
    It seems pretty clear though that (3) there, the ideal we're aware of, is not the same thing as (2),Srap Tasmaner

    Yes. Look at peeving culture. People often put forward pet peeves unaware that they're guilty of the same "sins". I remember an anecdote from linguist David Crystal, which I failed to find online and which is thus of dubious authenticity: he said that a particular violated rule wasn't really a rule people actually use, and to make his point he pointed out some instances in the peever's very own usage. Rather than abandoning the rule, she broke out in tears. (I wonder if I misremember the anecdote or the linguist involved, whether it's not online, or whether I just don't know how to find it.)

    ...and where (1) fits in is unclear.Srap Tasmaner

    That's a tough one. What complicates the matter is that there's also "traditional grammar", the scholarly approach to correct language that preceded linguistics and is still the main strain in schools, where we all learn how to use language "correctly", after we've already acquired not only our particular language, but language itself. (We don't just learn to speak English, we learn to speak.)

    The structure of (2) we are largely unaware of, so it's more likely that (3) is something else that (2) generates alongside linguistic behavior. And (3) can readily grow from simple correctness to the art of rhetoric.

    But maybe it goes in the other direction!
    Srap Tasmaner

    After the initial language acquisition as a toddler, it probably goes both ways, with judgments being made when what's already internalised becomes problematic in a situation (including being corrected by others). A lot of it comes down to how we teach language in school, and also to the fact that language change means old and new usage exist side by side and it's never quite clear what will survive. Some admonishments, like "don't end a sentence with a preposition" and "don't split infinitives", have been around for quite a while, so it's very likely that both the usage and its criticism are going to stick around for a while longer.

    There's this idea of a language war between descriptivists and prescriptivists, but that would give you two insane positions: either "there are no mistakes," or "usage doesn't matter". Basically, you just navigate a linguistic landscape, accept some rules (and maybe internalise them, or maybe just pay lip service), and discard others (maybe as a deliberate choice, or because your word habits are too strong and you just forget). Rules can be internalised from (3) into (2), and (1) can make hypotheses about when that happens (but I'm not sure how good (1) is at that currently).

    Davidson's article appeared at a time when (1) generally became more interested in usage on various fronts: the rise of pragmatics, and of usage-based grammar theories (such as cognitive grammar, construction grammar, or functional grammar). I sort of see it as a child of its time, and it's really older than me.
  • A Nice Derangement of Epitaphs
    1. Davidson's principles (1) - (3) are a good description of lexical meaning.
    2. Davidson's argument shows that (1) - (3) cannot account for linguistic behavior.
    Therefore
    3. We lose nothing by giving up the idea of lexical meaning.
    Srap Tasmaner

    This looks like a pretty good summary to me. Here's a key question:

    What's the relationship between "first/literal meaning" and "lexical meaning"?

    Davidson doesn't really address this directly, but I think there's a difference here. First meaning is defined by the interplay of prior and passing theories, and - I think - "lexical meaning" would be part of the prior theory, but it wouldn't be its entirety, because "lexical meaning" remains some sort of super-situational ideal, an abstraction.

    Take this section, where he looks at whether a prior theory could be what we think of as a natural language:

    An interpreter’s prior theory has a better chance of describing what we might think of as a natural language, particularly a prior theory brought to a first conversation. The less we know about the speaker, assuming we know he belongs to our language community, the more nearly our prior theory will simply be the theory we expect someone who hears our unguarded speech to use. (262)

    I think there's an idea implied here, that the more we interact with specific people, the more we modify the prior theories we bring to conversations with them, but they don't impact "the theory we expect someone who hears our unguarded speech to use". It feels like a natural language is something a prior theory will diverge from the more we interact with a particular person. Or in short, that we expect Mrs. Malaprop to make malapropisms is part of the prior theory we bring to a conversation with Mrs. Malaprop, but it doesn't modify what we think of as a "natural language".

    But he doesn't really talk about what it is he thinks of as "natural language". He keeps saying things like "in rather unusual ways" with the assumption being that there's a "usual" way we all think of language that's obvious.

    And this is where I'll again have to emphasise that I come at this article from a linguistic perspective and not a philosophical one; maybe in philosophical traditions there actually is such a thing, and I just don't understand it. So I have this magpie mind; I snatch what's useful from philosophy and discard the rest. Linguistics basically started in earnest as a discipline with Saussure, and it turned into a systematic description of language, where signs interact with each other to make for a whole structure. Since early linguistics was tied up with anthropology, one way to look at it is as an attempt to find a formal way to describe human artefacts. In other words, linguists aren't really doing anything that average language users aren't; they're just more systematic and ask questions that arise from being more systematic.

    That's never been quite enough to account for all data, though, so linguists would look towards the philosophy of language, say Wittgenstein's language games, Austin/Searle's speech acts, and Grice's co-operative principle, and also towards linguists such as Jakobson and his functions of language, and would establish a discipline called pragmatics, so that we now have:

    [Syntax, morphology, phonetics, phonology, semantics] describe language, and pragmatics describes how people use language.

    That was the mainstream standard organisation when I went to university in the 90s, but pragmatics wasn't actually fully established, I think, until the early 70s.

    Then there's another distinction: linguistic analysis can be twofold: synchrony and diachrony. How language is used at any one time, and how language changes. Usually a synchronic approach would describe a fairly rigid set of rules, and a diachronic approach would then show how rules are broken, subverted, played with, so that language changes. (An example would be the migration of the "n" from the noun proper to the indefinite article "a": a nadder -> an adder; a napron -> an apron.) Those approaches are seen as complementary, so a described, more or less rigid set of rules isn't taken to determine actual language behaviour.

    So one problem I have is that my intuition seems to clash with Davidson's. I might agree with a lot of things he's saying, but I might never have held his view of what a natural language actually is. For example, I think one difference between my instinctive approach and Davidson's might be the following:

    We both see language as an overly rigid structure. But where he expects language rules to determine behaviour (something he doesn't find in real life), I expect that rigid structure to be some sort of ideal type of a structuring principle; something people use to both create utterances and compare other people's utterances to, and something that will on occasion fail: people make mistakes, people don't find the words to express what they want to say and approximate with the best words they can find (and on failure to communicate try alternate ways of expressing themselves)... and so on.

    It's not a surprise to me that you can also play with language. And we can learn by playing. For example, there's this little tale, "Ladle Rat Rotten Hut", which was written to demonstrate the importance of intonation to interpreting words (it's a rewritten version of "Little Red Riding Hood"; sadly the audio link seems to be broken - the story's meant to be both read and heard. Here's a youtube link.) It's the perfect case for a passing theory, too. And it's also clear why the theory will remain a passing theory (though it might turn into a prior theory for whenever you engage with the same text again). We wouldn't expect any of these new words to spill over into a dialect, though they might spread as in-jokes for an in-group.

    I actually meant to be briefer and more concise this time round.

    Basically, I think Davidson is saying that prior and passing theories establish first meaning, which in turn can have consequent meanings due to the compositionality of language (to understand the Shakespeare sonnet, we must first understand "foison" and "tire"). What "first meaning" has in common with "lexical meaning" is that it's not necessarily identical with the intended meaning; where it differs is that, unlike "lexical meaning", "first meaning" is always situational. And the way Davidson analyses first meaning sheds doubt on "lexical meaning", though it's possible to import "lexical meaning" into a speaker's prior theory.

    So when he finishes with these words:

    And we should try again to say how convention in any important sense is involved in language; or, as I think, we should give up the attempt to illuminate how we communicate by appeal to conventions. (265)

    I think, the first clause is his conclusion, and the second clause his bias.

    I think he's largely right, but I'm not sure I understand what he thinks a natural language is supposed to be, and I think we (Davidson and I) start in completely different places on that topic, which is why I have trouble reading him in detail.
  • A Nice Derangement of Epitaphs
    If Davidson wishes to preserve a purported distinction between what a speaker means and what their words mean then literal meaning cannot ever be what a speaker means(but it quite often is).creativesoul

    I think that's wrong. Preserving the distinction merely means preserving the analytical category. If you do that, you can say that what the speaker means is what the words mean. If you don't preserve the categories, you can't say that in terms of this particular theory, because you lack the tools. He's just describing the analytical framework here. (The sentence Srap Tasmaner pointed out and I missed, about making a distinction between what is "literal" and what is "conventional"... that is really odd, though. I'm not sure what to make of it.)

    Is that right?

    I rather suspect we use it, regardless, and become more adept over time.
    Banno

    We can make similar inquiries about all the principles. There's a point at which we don't know enough about a language to use it, and then there may come a point that we do. In this thread you're using English; could you say the same in, say, Hindi? Basque? Ancient Egyptian?

    Still, I think that's an important point you make.
    That is, learning a language and using it are the very same thing. After all, have you stopped learning English?Banno

    I don't think it's useful to conflate usage and learning, although learning usually involves usage. One of the things to bear in mind for example is that if a great number of people fail to "learn" a certain feature, we might be looking at language change. (Examples from the past: a nadder --> an adder; a napron --> an apron.)

    Also, learning details about a language you're already speaking tends to work a little differently from acquiring a language you don't speak yet. It's especially interesting to look at first language acquisition. We tend not to remember what it was like to not speak any language at all, but there was such a time. What's it like to learn that there is language? (Do we? Some people think it's innate.)

    But further, and deeper, if you could learn a language before you used it, that would imply that there was a difference between knowing a language and using it. I can't see what that could be like - how could you show that you know a language without using it?Banno

    This is where I start to be out of my depth. I can see that my ad-hoc phrasing above isn't useful for that sort of questioning, but I can't rephrase it really, because I'm not sure what I'd want to achieve by doing it. I could maybe talk about types of usage? Like approaching an unknown language via a text book for second language learners?



    That sounds like a plausible reading.
  • A Nice Derangement of Epitaphs
    Interesting that you're from a linguistics background. I'm curious to know what you think about the adequacy and/or sufficiency of the three principles proposed for successful communication/interpretation.creativesoul

    I'm not really from a linguistics background; I just come at the issue from a linguistic perspective. I do have a university degree, but it's in sociology, and whatever formal education in linguistics I have I acquired in the context of a sociology degree (it's more complicated than that because of the way university studies were organised, but that's close enough). It's just that after graduating, I never did anything with my degree, and I kept up a sporadic interest in linguistics on my own.

    About the three principles: I think they're all trivially true, but what's important is how you use them in a model of language, and I haven't quite yet figured out Davidson's model (and I probably won't just from this one article). He uses "first meaning", and I'm not quite sure what that means, so that's an additional difficulty I have.

    When you start out studying any of the humanities, one thing you learn pretty quickly is that the terms probably don't mean what you think they mean, and different people use them differently, so knowing roughly what sort of theoretical background to expect helps you a lot in understanding a text. That's why it matters to me that I'm not very knowledgeable about the philosophy of language. I have all the caution but none of the background when it comes to interpreting the text.

    I'll have to run through the principles with what I think of as "lexical meaning", instead of Davidson's "first meaning". I think that's not quite it, but it should come close enough for the purpose here. The "lexical meaning" of a word is just the meaning it has outside of context. One can think of it as a dictionary in the mind.

    So, yes, "lexical meaning" is systematic. For example, an apple is a type of fruit, but a fruit is not a type of apple. The hierarchy involved here is an example of the systematicity we're talking about.

    And, yes, "lexical meaning" is shared, as is apparent when I ask you for an apple and you give me one.

    And, yes, you have to learn a language before you can use it. And what you learn are conventions. This is actually the most complex topic. In anthropology, colour terms are the go-to example, because it's easy to see that different languages order a spectrum differently. (Early linguistics was quite bound up with anthropology.)

    But that's all pretty trivial. It depends on what you do with that in a language model, and the assumptions you make about what a language is can differ wildly. So when Davidson says "Probably no one doubts that there are difficulties with these conditions," I agree, but what difficulties you run into varies by the model you use. Sure, language is systematic, but how systematic? Sure, a language is shared, but what does sharing a language look like in practice? Sure, a language is conventional, but how much do those conventions enable/restrict your language use?

    An easy example: if you study linguistics, you'll hear early on that the relationship between the sign and its meaning is arbitrary, but then you'll immediately be told that onomatopoetic expressions might be an exception. Are they? There's clearly still a level of arbitrariness, because, say, animal sounds are usually linguistic imitations of the real thing, yet they still differ by culture. I think that's where the difference between a philosophy of language and linguistics comes in. Philosophers tend to be interested in the topic itself, while linguists tend to be interested in such topics when they become problematic for their theories and research.

    So when Davidson concludes that there is no language because of malapropism, I'll first have to figure out what he expected. It's entirely counter-intuitive for me: there are language conventions, but unconventional language use doesn't automatically preclude understanding. For example, if a non-native speaker were to say "I hungy," you might still understand that he's hungry, even though he doesn't actually use the auxiliary verb and forgets an "r". So to claim that a language is largely conventional is not to claim that if you deviate from those conventions, you can't be understood. We're not computers who return a syntax error for a simple typo. (And this is where I might inject that programming languages are more systematic than natural languages. That shouldn't be a surprise, but it's something you should consider when interpreting principle 1 within a theory.)

    So, for example, Davidson says this:

    Ambiguity is an example: often the ‘same’ word has more than one semantic role, and so the interpretation of utterances in which it occurs is not uniquely fixed by the features of the interpreter’s competence so far mentioned.

    Here's where I'd just look at what I have as a model that I try to get as close to the real thing as I can. So when I notice that there's ambiguity, I'd just look at how we typically resolve ambiguities and add that to the model. Semantic field theory, for example, helps a lot. "I took the money to the bank," includes two nouns, "money" and "bank", and because they're thematically related (part of the same "semantic field"), "The pirate buried the money near the bank," feels more ambiguous, even though we still have "money" and "bank" - but "pirate" and "buried" suggest a river bank as a very real possibility. Beyond semantic field theory, common sense would tell me that a pirate isn't likely to bury money near an institution that deals with cash. But once I have to consult common sense to resolve an ambiguity, I'm already aware of it. There's been a disfluency in interpretation. I have a model that would likely lead to misunderstandings, but that's no problem because, well, in real life there are misunderstandings. I don't need a model of language that's more systematic than the real thing. I don't need a model that's completely shared. I don't need a model that's totally formed and restricted by convention. Because the real thing isn't like that either.

    The interesting line here is "uniquely fixed by the features of the interpreter's competence". At that point, I'm guessing that he thinks there's a unique thing like "linguistic competence", as opposed to a more general competence. So later he says that:

    Interpreters certainly can make these distinctions. But part of the burden of this paper is that much that they can do ought not to count as part of their basic linguistic competence.

    If I compare that to my intuition, I'd say he's got a much narrower and more specific idea of what a "linguistic competence" is than I have. As a result I have to be careful not to impose what I think on his text. It's a question of phrasing. So by the time he ends with:

    In linguistic communication nothing corresponds to a linguistic competence as often described: that is, as summarized by principles (1)–(3). The solution is to give up the principles.

    I'm careful. I still don't quite know what he means by this, or what he expected language to be like. But I connect it to the rise of a couple of linguistic theories from around the mid-eighties to the early nineties (cognitive grammar, construction grammar, functional grammar), many of which were designed in opposition to Chomsky's Universal Grammar program (where there's a deep structure that all people share, and transformation rules generate the surface structures). So he may have just given up on some sort of "linguistic competence", a feature of a person's mind (?), that I never believed in to begin with, so what I would have thought of when reading about those three principles would have been pretty different anyway. For example, it doesn't make sense to me that we'd switch off the cognitive faculties that aren't directly involved with language when speaking, and I certainly don't see a need to integrate functions into a "linguistic faculty" that other cognitive tools handle pretty well already. There's some sort of specialisation going on (and some of it is brain-related, as Broca's or Wernicke's aphasia shows), and acquiring your first language seems to be easier and more formative than later language acquisition. But it's still not clear to me how much of language-cognition is specialised. If there are two positions that say "much of it" and "little of it", I'm more inclined towards the little-of-it end of the spectrum.

    So my intuition is that the three principles hold up pretty well, but it's definitely possible to ask too much of them, and I think Davidson might have realised he asked too much of them. The question I have is: is that so, and if yes, what did he expect a "linguistic competence" to do all on its own?

    I left university in the early 2000s, so I'm almost completely out of the loop and have been for a while. Computational linguistics and neurolinguistics should have had some interesting results since that time, I would suspect, but I know little about any of that. If I did, maybe the post would have turned out even longer.
  • A Nice Derangement of Epitaphs
    A single use is enough to 'summon up a passing theory' - I think this speaks a great deal to how convention can be single-use.StreetlightX

    "Summoning up a passing theory" facilitates understanding, not necessarily agreement, not even necessarily provisional acting-as-if. And classifying the usage as a "malapropism" actively prevents consensus: a malapropism is not a permissible variant. "Flamingo" can't be both a malapropism of and a synonym for "flamenco" (not in the same mind, at least). In this sense, the concept of "malapropism" hinders passing theories from undermining convention.

    What sort of situation would you describe as a single-use convention:

    A makes a malapropism; B parses it as such:

    a) B corrects A.
    b) B lets it slide.
    c) B uses the malapropism repeatedly to make fun of A.
    d) B decides to play along.
    e) B doubts his judgment, and passes over the topic.

    And so on.

    I've just noticed something about Davidson's notion of first meaning.creativesoul

    I have trouble understanding Davidson's notion of first meaning in the first place. This is one of the places where I wonder whether I'd have a better understanding if I were more knowledgeable about the philosophy of language. But I come from linguistics, and this feels like a mess. What you've been pointing out is part of it, but I don't necessarily think he's being inconsistent. I just don't get that entire part.
  • A Nice Derangement of Epitaphs
    "I dance the flamingo" is true IFF RussellA dances the flamenco.

    ...and at issue is what conventions permit the move from flamingo to flamenco.

    Grice might have us do so by inferring your intent in making the utterance; but as I explained above, intent is not as clean a tool as Grice seems to suppose.
    Banno

    How do you get the truth condition without reference to a convention? I agree with the thrust of RussellA's argument, but there's something I think isn't exact:

    Malapropism is exhibited in the relationship between two sentences, in that a sentence exhibits malapropism if it is different to the sentence the interpreter was expecting.RussellA

    I agree that the malapropism is exhibited in the relationship between two sentences, but I disagree that it's about what the interpreter expects, because that might be wrong, too.

    "Flamingo" isn't a malapropism because the hearer expects to hear it. It's just an incompatibility between two idiolects at that stage, and that incompatibility could be resolved either way. The speaker could convince the hearer that the dance is, in fact, called the "flamingo".

    Without convention, you have no malapropism; you simply have an unresolved conflict between idiolects that could - in theory - be resolved either way.

    I think it follows that the term "malapropism" and the related concept is an utterance-external convention that keeps the utterance-internal convention locked in. But a convention isn't absolute: it depends on lasting consensus. It may be more efficient to codify a recurring passing theory into a new prior theory than to try to convince a great number of people that they're wrong.

    There are a lot of language wars around; "I couldn't care less," vs. "I could care less," for example.

    When I hear "I dance the flamenco," what goes on in my head might be analysed thus:

    "I dance the..." sets up the expectation that what follows is a dance. At this point, I may or may not pay enough attention to the actual utterance to hear that the other person is actually saying "I dance the flamingo," instead of "I dance the flamenco." I could, on account of phonetic similarity, mishear the utterance. That is: I arrive at the correct interpretation by mistake.

    If I do hear "flamingo" instead of "flamenco", my prior theory fails, but I can't yet assume why. Maybe there's a dance, the "flamingo", that I don't know? Maybe what I thought was called the flamenco is really called the "flamingo"? Maybe "flamingo" is a cutesy nickname for flamenco I'm not aware of? Maybe the speaker misspoke? Maybe the speaker has an "incorrect" prior theory?

    This isn't just one "passing theory"; there are many. So why do I select the malapropism one? Common sense? A desire to be right? In any case, because word meanings are conventional, there are "tie breakers", so to speak: dictionaries, dance experts, and so on. All of that involves social conventions that have to do with language.

    And I can have prior theories and passing theories, for example, about the reliability of any one dictionary, though they would not be - strictly speaking - linguistic prior theories.

    Basically, with Gricean non-natural meanings, you need conventions to fix truth values, or else you have just unstructured conflict. The rest is just a question of what you mean by "linguistic", and that was a question that was definitely in the air in the mid-eighties (with the creation of Langacker's "Cognitive Grammar" and Fillmore's "Construction Grammar", as opposed to Chomsky's hugely popular "Universal Grammar").

    I'm enjoying this thread, but am a bit shy to respond because I'm not very familiar with the philosophy of language.
  • Discussions on the internet are failing more and more. We should work on fixing that
    In my opinion one of the root causes of this is that we have a natural tendency to identify with the ideas that we store in our brains. We love ideology and we defend our informational catalogue with everything we got, because acknowledging a good argument means that we were wrong and that we need to let go of an idea, a part of our personality.

    A much better way would be to identify with our way of thinking instead of our knowledge. Critical thinking skills are becoming more and more crucial in this age of informational floods. And these "tools" with which we can analyse the value of new information should be the centerpiece of our identity.
    Hirnstoff

    I'm not sure I agree here. The "informational catalogue" is intricately tied up with "the way we think". I think it's two sides of the same coin, really. You rightly call these things "tools", but the more you identify with "tools" the more they become thought habits. Being right still becomes personal; you just go from being right about things to being right about how to go about things.

    What I'm saying can be summed up like this:

    Identify with your knowledge ==> You're wrong.
    Identify with your tools ==> You're stupid.

    It's not an improvement. We need to relativise our tools and learn to figure out what tools other people use and see if there are tools we both can use. That's why science was successful: it's a tool many people can use. But that usefulness decreases the more you identify with the tools: it becomes a sort of scientism: if science can't explain it, it should be disregarded.

    If people are suspicious of "critical thinking" there might be a reason. Any tool you use needs to be open to inspection. Less identification, not more.

    Maybe I misunderstand you?
  • The More The Merrier Paradox
    For instance, if the probability that the observation O is real is, say, 90%, then the probability of O being real if all 3, X, Y, and Z observe O is 90% * 90% * 90% = 72.9% and the probability that O is not real = 27.1%.TheMadFool

    Probability that O is real: 90 %
    Probability that O is not real: 10 %

    If all three observe O, the probability that O is not real is 10%*10%*10% = 0.1%. So the likelihood that O is real must be 99.9%, right? But wait: 90%*90%*90% is 72.9% (as you correctly calculated), and 72.9%+0.1% = 73%. We're missing 27%.

    In your way of calculating you're missing 75% when you assume a probability of 50%, and 27% when you're using a probability of 90%.

    Instead of realising we're missing cases, you proclaim a paradox. That's not good maths.
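    The missing mass is easy to check by brute-force enumeration. A minimal sketch (assuming, as in your example, three independent observers, each with a 90 % chance of observing something real; the variable names are mine):

```python
from itertools import product

# Sketch of the model implied above: three independent observers,
# each with a 90% chance of observing something real.
p_real = 0.9

total = 0.0
mixed = 0.0
for outcome in product(("real", "unreal"), repeat=3):
    p = 1.0
    for o in outcome:
        p *= p_real if o == "real" else 1 - p_real
    total += p
    if len(set(outcome)) > 1:  # the observers disagree: the ignored cases
        mixed += p

print(round(0.9 ** 3, 4))  # all three real: 0.729
print(round(0.1 ** 3, 4))  # all three unreal: 0.001
print(round(mixed, 3))     # the "missing" mixed cases: 0.27
print(round(total, 10))    # all eight cases together: 1.0
```

    The six mixed cases are exactly the missing 27 %; nothing paradoxical is left over.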
  • The More The Merrier Paradox
    What is the main issue here? Whether the observation O is real/not real, right? What model do you propose other than that which has to do with the probability of O being real/not? As far as I can tell, P(real) lies at the heart of the issue where P(A) means the probability of A. :chin:TheMadFool

    You haven't quite made clear what "observation O is real/not real" means.

    Let's say you see a unicorn, and you ask others if they can see it, too. How would you describe this in terms of probability-prone variables?

    This is what I would do:

    There are three observations: O(x, y, z). Variable O can have two values:

    "sees unicorn"/"does not see unicorn".

    We know the values of the variable. The input comes straight from our experience. So O is not a random variable. No probability. It's either "sees" or "doesn't see", and we get the values by asking.

    The second variable is the event E, which is also binary: "there is a unicorn" and "there is no unicorn".

    What you're doing is basically estimating the likelihood of E with a coin flip: you set E as a random variable. It's random because we have no way of knowing its value; observation doesn't count.

    But what we're doing when we ask others if they're seeing the unicorn, too, is not asking whether the unicorn is real.

    Your case is this:

    O(x) = sees Unicorn
    O(y) = sees Unicorn
    O(z) = sees unicorn

    E = ?

    You calculate the probability for E the following way:

    First you assume that

    P(E) = 50 %
    P(~E) = 50 %

    And then you calculate P(E(x))*P(E(y))*P(E(z)) = 12.5 %

    However, it's far from clear what it means to cross E with persons X, Y, and Z. There's no reason to bring in the observations at all. P(E) is always 50 %, as per your assumption.

    Even in the case:

    O(x) = does not see Unicorn
    O(y) = does not see Unicorn
    O(z) = does not see Unicorn

    P(E) would still be 50 %, since you derived the likelihood simply from the two logical possibilities.

    O(x) simply inspires the question, but E is an independent random variable. Basically, if there is a unicorn (E), then there is a unicorn no matter what values O take. There is no E(x) that corresponds to O(x), for example. There's only E.

    What you propose looks like a game:

    No matter who sees the unicorn, they all flip a coin, and if the coin comes up heads they say the unicorn is there, and if the coin comes up tails they say it's not there. But they'll only accept that the unicorn is there by full consensus, so they keep flipping coins until it's all heads or tails. In that case the likelihood that the unicorn exists or doesn't exist, as per consensus, would be equal, but only because there are exactly 2 ways the game can end. With a different likelihood the probability changes according to how many constellations end the game, and how many of those constellations are dis/favourable.
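    That game can be checked by enumerating the eight possible rounds; a small sketch under the fair-coin assumption above (the names are mine):

```python
from itertools import product

# The consensus game described above: three people each flip a fair coin;
# a round only ends the game when all three agree (all heads or all tails).
p = 0.5
end_prob = {"heads": 0.0, "tails": 0.0}
for flips in product(("heads", "tails"), repeat=3):
    if len(set(flips)) == 1:  # unanimous round ends the game
        end_prob[flips[0]] += p ** 3

game_ends = sum(end_prob.values())    # only 2 of 8 rounds end the game: 0.25
print(end_prob["heads"] / game_ends)  # conditional on the game ending: 0.5
print(end_prob["tails"] / game_ends)  # conditional on the game ending: 0.5
```

    With a biased coin the two conditional probabilities would no longer be equal, which is the point about how many constellations end the game.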
  • The More The Merrier Paradox
    However, remember that I'm only concerned about the principle of repeatability which is basically the belief that the probability of an observation being real increases with the number of observers.TheMadFool

    Then you should choose a mathematical model that's up to the task. An independent random variable with the sample space {Real, Unreal} isn't it.

    While there might be a lot going on in between, I only have to consider the worst case scenario (everyone [all 3, X, Y, and Z] observing something not real) and the best case scenario (everyone [all 3, X, Y, and Z] observing something real).TheMadFool

    Not if you treat O as an independent random variable. If you do that the math forces you to consider those cases, lest the math be rendered useless.

    You <i>can</i> ignore those cases, of course. Let me show you:

    RRR - 12.5 %
    RUU - 12.5 %
    RUR - 12.5 %
    RRU - 12.5 %
    URR - 12.5 %
    UUR - 12.5 %
    URU - 12.5 %
    UUU - 12.5 %

    Turns into:

    RRR - 12.5 %
    UUU - 12.5 %

    And your probability that O is real remains 50 %, because 12.5 % are 50 % of 25 %.

    You're not going to get very far if you don't understand the maths. There is no paradox.
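    The two tables above can be reproduced mechanically; a quick sketch of the 50/50 model, including the renormalisation step (the code and names are mine):

```python
from itertools import product

# Eight equally likely outcome strings under the 50/50 model above:
# each of three observers independently gets R ("real") or U ("unreal").
outcomes = {"".join(o): 0.5 ** 3 for o in product("RU", repeat=3)}
for string, prob in sorted(outcomes.items()):
    print(string, prob)  # every string has probability 0.125, i.e. 12.5 %

# Dropping the six mixed cases leaves only RRR and UUU, 25 % in total,
# and renormalising gives back exactly 50 %: 12.5 % is 50 % of 25 %.
unanimous = outcomes["RRR"] + outcomes["UUU"]
print(unanimous)                    # 0.25
print(outcomes["RRR"] / unanimous)  # 0.5
```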
  • The More The Merrier Paradox
    You're right. Some people are more likely to hallucinate than others who, in turn, are more likely to observe the real. That means I have to calculate probabilities for each possible scenario.

    However, in my defense, I'd like to point out that the variations are not so extreme as your numbers suggest. The Bell curve should be good enough to allay your concerns - most cluster around the mean.
    TheMadFool

    The bell curve isn't very relevant to my point; it's about distributions, and you'd first have to clearly define the variable that's distributed (the mean of what?).

    You're talking about a paradox that doesn't exist, because you <i>don't</i> have a clear grasp of your variables.

    Let me try to explain it again: you're multiplying your 0.5 probabilities as if they were independent random variables. Of course they're going to get ever smaller, no matter whether they're all real or all unreal. Mathematically, what you're calculating is equivalent to coin flips. Three times heads in a row is more unlikely than two times heads in a row. Same goes for tails. That's the sort of maths you're using.

    What you're ignoring is the likelihood that O is real when X sees it, unreal when Y sees it, and real again when Z sees it, and so on. The more people you add, the greater the number of possible events you ignore.

    Your 12.5 % is the chance that O is real for all three people, and the chance that O is real for none of them. The other 75 % distribute over all the other permutations (like, for example, real for X, but unreal for Y and Z, or real for X and Y, but unreal for Z).

    If that state of affairs doesn't make sense to you (i.e. you're not a relativist about what's real), you've used inappropriate maths.
  • The More The Merrier Paradox
    The problem is that you're defining O as constant, but then treat it as a variable during the calculations.

    Basically, you're saying that O can either be real or not, and that's how you justify your 50 %, but then you treat O as a variable for the reliability of O(x), O(y), and O(z). Of course, both likelihoods are going to become ever smaller, because you're only considering the extremes. It's easier to show you what I mean, if we choose different values.

    Let's say the likelihood that O is real is 90 % and the likelihood that O is not real is 10 %.

    You're only considering the following cases:

    O is real: 0.9 * 0.9 * 0.9

    And O is not real: 0.1 * 0.1 * 0.1

    There are a number of cases you're ignoring:

    0.9 * 0.1 * 0.1
    0.9 * 0.9 * 0.1
    0.1 * 0.9 * 0.1

    And so on.

    It's understandable that you'd ignore those cases, since in all those cases people would be hallucinating things that are actually there. But your maths describes those cases and ignoring them gives you results you falsely interpret as a paradox, because you're not looking at the whole picture.

    There's a theoretical problem here you have to solve. What do you do with cases in which all people see the same thing but only some of them see something real? Your math doesn't allow you to ignore those cases.
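    To put numbers on how much your maths forces you to ignore, here's a short sketch (my own extension of the 90 % example above to more observers):

```python
# With per-observer P(real) = 0.9, the two extremes ("everyone observes
# something real", "everyone observes something unreal") cover less and
# less probability as observers are added, so the ignored mixed cases
# grow rather than vanish.
p = 0.9
mixed_mass = {}
for n in (2, 3, 5, 10):
    extremes = p ** n + (1 - p) ** n
    mixed_mass[n] = 1 - extremes
    print(n, round(mixed_mass[n], 4))
```

    With three observers the ignored cases already carry 27 % of the probability, and the share only grows from there, which is why the shrinking extremes are bookkeeping, not a paradox.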
  • Abortion, IT'S A Problem
    I would assume there are languages out there that lack the personal pronoun altogether (as I'm told is the case for Japanese), but I don't think we can then say the Japanese don't fully recognize the difference between people and hats.Hanover

    Japanese has plenty of personal pronouns, or none, depending on how your linguistic theory defines the terms. All the Japanese pronouns are structurally identical to Japanese nouns, so you could say there's no need for the word class, but there are functional equivalents to English pronouns.

    For example, there are two third person singular pronouns in Japanese: "kare" ("he") and "kanojo" ("she"). I'm not entirely sure, but I think they're both related to the nouns for "boyfriend" ("kareshi") and "girlfriend" ("kanojo" - identical). There's no third person neuter pronoun that I'm aware of, and Japanese has a tendency to use proper names or nouns where we'd use pronouns, so the pronouns are quite a bit rarer than they would be in English (also because you can generally drop the subject of a sentence).

    Japanese pronouns are a nightmare to learn, since you need to be able to properly judge your social standing as well as the formality of the current situation. For example, a boy talking to his friends might use "watashi" for himself ("I"), but it'd probably sound feminine (he'd be expected to use "boku"), but if he'd talk to a stranger on the street "watashi" would be gender neutral (and "boku" would be a social faux pas). I don't speak Japanese; I just looked into it at university to see a different system (and I like to watch anime).

    None of that impacts your point. Languages encode different things differently, and what's not encoded can still be expressed. So the question remains how language relates to cognition. And that's a huge question. A one-to-one comparison between word classes is often not going to be useful, because it tends to raise questions that are irrelevant to the topic (like "does Japanese have pronouns?"), and simultaneously narrows down the question too much.

    I'm German. We have grammatical gender in German. The definite article "the" splits in three: "der" ( the - masculine), "die" (the - feminine), "das" (the - neuter). The German word for "girl" is "Mädchen", and the noun is neuter. There's a grammatical reason for this. The -chen suffix is a diminutive, and all diminutives are grammatically neuter. I'm perfectly fine with this. I don't even register a problem when speaking. "Das Mädchen" (grammatically neuter) refers to a girl (conceptually feminine). There's no conflict at all in my mind. However:

    Rules of grammar would dictate, for consistency, that I use the neuter personal pronoun when referring to a grammatically neuter antecedent. I refuse. It feels outdated to me, and I'm uncomfortable using "es" ("it") for a girl. The notional antecedent overrides the grammatical antecedent for me. I've gotten into trouble for this in school, but not reliably.

    So why am I completely comfortable with a gender neutral article, but not with a gender neutral personal pronoun? That's a cognitive question about the relationship between formal grammar and language in use. Saying that grammatical gender (wherever it's encoded) stands in a 1:1 correspondence to notional gender is clearly wrong. But saying that grammatical gender is irrelevant to notional gender is also clearly wrong. That's a difficult question even within one language, and it becomes even more difficult to answer once you compare languages.

    It's an interesting topic, though.
  • Deep Songs
    When movements go wrong:

  • Is anyone here a moral objectivist?
    There was an excellent thread a while back on Lisa Feldman Barrett's way of looking at emotions as socially mediated categories for raw affects.Isaac

    Thanks for the pointer. I'll check it out when I have the time and inclination.

    I just don't think it's possible to privately interpret one's mental states to an extent where one can form propositions about them without recourse to social modes of interpretation. So for me to say that my hitting old ladies is moral would require that I am first fluent in the social activity of interpreting some behaviours as 'moral' ones. This is an activity like any other, they do not arrive pre-labelled. The act of labelling (and this goes for any of our thoughts) is a piece of socially learnt behaviour.Isaac

    I'd agree with this. Just to be sure: I don't think of "society vs. person" as a dualism. Society is the result of lots of people interacting (when looked at from below), and "identity" (in a more basic sense than current identity politics would have it) is a process of positioning yourself (when looked at from above). Because of this, I'd have to add that it's <i>also</i> impossible for social structures and artefacts to exist if they're not being enacted/interpreted by knowledgeable agents. There's something reflexive going on here.

    When I convince you that the earth is flat, it is just as wrong as it was before. When I convince you that hitting old ladies is morally good, that's still wrong, but not in the same way. The entire system has just shifted a little towards it being right. (Of course, it's very, very hard to convince people to begin with, and because of that it's unlikely to ever gain "critical mass", even in a subculture.) It's very likely always going to be wrong. But the dynamics involved make change possible in principle.

    I've sometimes used 'moral' as shorthand for 'morally good' so hopefully this shouldn't get in the way too much.Isaac

    So have I. It's hard to shed everyday usage.
  • Is anyone here a moral objectivist?
    No, we do different things with some of our desires than we do with language. Calling those desires 'moral' is a linguistic event. It's you talking to me at the moment. It's a social interaction and so it has to involve only social meanings for us to be able to communicate.Isaac

    Okay, after around 1 1/2 hours of trying to puzzle out this paragraph, I think I might actually start to understand where you come from. Is your take on this issue derived from, or at least compatible with, Skinner's behaviourism? How public events teach us to tease apart a holistic private experience into linguistic concepts.

    When I hear "linguistic" my linguistics side takes over, so I was constantly looking in the wrong direction (if I'm right here).

    The answer for me is that society has labelled certain types of objective 'moral' ones, just like it's labelled certain wavelengths of light 'blue'.Isaac

    The way I use "moral" it's more akin to "colour" than to "blue". "moral" =/= "morally good".

    I know you'd like a definition from me, but there a lot of things I haven't figured out yet, and I have no "research goal" to guide a provisional one. For example, I've hinted in this thread that I think psychopaths can't act morally, but I'm not actually sure I really think that (for example: does morality necessitate perspective taking, or is a consequentialist approach sufficient?). As a result, I may be inconsistent across posts. Were I to attempt a definition at this point, the problem would get worse.

    I'm not quite sure what you're asking here, but I'll have a go at answering it.Isaac

    I disagree with nothing you said in the following paragraphs. I had the impression that you're taking the public sphere for granted, as if it weren't made up of lots of private experiences we face in behavioural aggregate. But if my Skinner epiphany is in any way getting me closer to your perspective, I have a direction to think in now. I'll need to let this settle for a while.
  • Is anyone here a moral objectivist?
    Ah, OK. Then yes, I'm saying there isn't a difference. In short, morality is a social concept, the language used to describe it is social too and so private meanings make no sense. One can only speak about one's morality using the public definition of what morality is and that definition cannot refer to a private feature otherwise it's not a useful word. Wittgenstein's beetle and all.Isaac

    This isn't a beetle-in-the-box situation. We do different things with morals than with language, which is what my first post in this thread was meant to demonstrate. "Using a wrong word" is not a moral failure.

    How would they know? As per the private language argument, unless their behaviour is publicly acknowledged to be labelled 'moral' how would they privately maintain a criteria for their behaviour to class as moral and still expect the word to play a meaningful role in communication?Isaac

    I do think you have a point there somewhere, but I also think my focus is somewhat different and we're not entirely talking about the same thing.

    Any real-life decision is ultimately private, and only through lots of private decisions is there something like a public sphere. I don't think action points only upwards, so to speak. It's no more warranted to impute a public sphere than it is to impute private experience.

    And even private experience is partly socially formed. My conscience is a home-grown trace of my social history, for example, but it's also partly informed by my personality (I don't like conflict, for example, and that would certainly have an influence on what I'd feel bad about; "I should confront this person, but I don't have the energy."). Some basic urges are socially formed. Toilet training comes to mind. Walking on the sidewalk, too.

    Few actions are purely moral. Most have an instrumental aspect, too. Whatever we theorise about the socially accepted moral goods is abstract, anyway, and needs to filter through your private decision making process to become an action (or a tragically long hesitation).

    On the other hand, when it comes to meaning I'm not looking for similarity between people so much as compatibility: as long as our actions proceed without a hitch, it doesn't matter what the beetle-in-a-box is like (if it's even there). But, well, incompatibilities do occur, and at least for me it's not always easy to spot whether there's a misunderstanding or a disagreement. See this discussion for plenty of examples.

    How do you know? I mean how do you know it's a 'moral' compass, and not just any old compass?Isaac

    Okay. I'm hungry. There's a banana on the table. I don't like the banana and decide to hold off on eating it. There isn't a moral component in the decision I can find.

    I know my little sister is looking forward to eating the banana. Now a moral component enters my decision. I have one more "excuse" not to eat the banana. I like to think well of myself, so I'd like to frame it as a moral decision. But this also makes me de-emphasise that I dislike bananas. Then I can ask why I'd think better of myself if my motives aren't "being picky" (oh hey, there has been a potentially moral angle on it all along, and I didn't notice) but being "considerate". My motivation is a sort of compound, though, so whatever I wish to think about myself isn't all that important. A panel of disinterested observers could tell me how I consistently act, though...

    Yeah, how do I know? Maybe I'm just not hungry enough to eat a banana. But there is a constellation, and the ways to arrange the pieces are, to an extent, similar, and extended observation can get you a clearer picture. I don't think purely moral actions exist, and I also think completely amoral actions are rare. So the question is most likely "how do I know the ratio?"

    I probably don't, but I can guess and feel hurt when other people laugh at my guess and guess again.

    Yeah, that's actually where I'm going with this. Once we accept that 'moral' is a publicly defined term, we simultaneously accept the mess and the dynamism (like your definition here, by the way), and we have to accept that once a thing is 'moral', that's all there is to it. There's 'moral', 'not moral', and 'sort of moral, fuzzy at the edges'. But there's no way of working out that fuzziness, there's no 'most moral', it just is what it is, a messy, community-defined group.Isaac

    So where do you place protests, criticism, and conflict, if the moral realm is all public sanction? Don't forget that every single one of us is part of each other's context, even if only in some very minuscule way. How do topics (like, say, trans rights) enter the public discourse? I can't imagine explaining any of that without morally interested agents. (Meme theory maybe?)
  • Is anyone here a moral objectivist?
    Not entirely sure what distinction you mean here.Isaac

    Easy things first. I'm talking about the distinction between being wrong about language, and being wrong about morals. I can't figure out how to read you and still be able to tell the difference.

    I think we agree 'bad' doesn't mean anything on its own beyond a vague indication toward a negative. One can be a bad actor, but a good person. One can be a bad person but a good actor. So bad and good only mean anything relative to some objective or ideal. Something which is morally bad is bad relative to ideals of morality (behaviour, character...). If I've understood you correctly, we're on the same page here.Isaac

    Yes, as far as I can tell, we're on the same page here.

    The word 'moral' has to have some public meaning for it to be useful. It has to identify some publicly available set of behaviours or ideals or characteristics, otherwise it would serve no purpose and be impossible to learn how to use. So I don't see how it can mean 'whatever behaviours you think fit'. That would be a private meaning.Isaac

    That one, I think, needs some unpacking. First, I think this is the place where I should lay open my bias. I've studied sociology at university, but the discipline I fell in love with was linguistics. So while I'd roughly agree that the word "moral" has to be useful when referring to the public, I also think it has to be applicable on all social levels from the individual upwards, since a person has morals, and any grouping has morals, and there's no guarantee that they're the same: not every behaviour that differs from public morals is immoral or amoral.

    A person's behaviour that doesn't conform to the public set of rules, for example, can be classified in three distinct ways:

    a) moral (person acts according to private moral compass)
    b) immoral (attempts to act according to a moral compass, but fails, maybe due to a lack of will power)
    c) amoral (psychopaths see morals as an external imposition)

    Now those are psychological terms, as they pertain to the way individuals make choices. There's a social level, too:

    a) moral (in accordance with some superindividual set of rules - a culture or subculture)
    b) immoral (deviant)
    c) amoral (actions that have no moral import; choosing to eat a hotdog over a burger)

    The easiest way to resolve this is via separate lexical entries. (During analysis we'd be calling only one of those sets "moral", but we'd have to decide beforehand which one, to avoid confusion.)

    I've noticed about myself that when I say someone acts morally, I mean that the person acts according to an inner moral compass, regardless of whether that compass is aligned with the morals of a greater group. When I mean to say that someone acts in accordance with a group's morals, then I say it like that. So a psychopath may act in accordance with his culture's morals, but he doesn't act in accordance with any inner moral compass. My speech habit is to say a psychopath doesn't act morally, even if he chooses to stick to his culture's rules.

    If possible, I'd like to find a way to use the word moral on both the personal and social level, via some coherent theory, but... it's hard. I believe that people recreate social structures in their daily conduct, and by that I mean that a culture usually incorporates not only typical moral rules, but also typical moral conflicts (e.g. pro-choice vs. pro-life). Such parameters shift, but some rules are more stable than others. Lines like "abortion is murder" or "(online-)piracy is theft" are emotional appeals to less controversial rules, but you can craft rational arguments about why this should be the case. And these discussions are part of the environment in which we develop our personal morals, the younger the more potent, I think.

    I lean towards a dynamic meaning of moral that has something to say about all the levels: individuals move through space-time as social vortices, accumulating and disseminating morals through their behaviour. And any analysis should account for all levels, if possible.

    So a person who personally thinks he should follow all of society's rules, and has no particular confidence in his own judgement, would act in accordance with society's morals, but he's also likely to encounter plenty of criticism as a "stickler for rules", and will be asked to lighten up. You cannot analyse this under the aegis of morality if "moral" only has the public meaning.

    I feel like I've been rambling, but I'll leave this as is, or I'll never finish this post. To summarise, I definitely think that the term "moral" needs to deal with the public sphere, but I think that ideally it should deal with the entire social spectrum.

    You might want the public meaning to be something more than just an arbitrary set of behaviours, maybe publicly available membership criteria such that our violent student could make an argument that his behaviour fits the definition. But, as I said to SophistiCat, it seems highly unlikely to me that the meaning would be so pure, given the language's history, but even it was, it would still have to have boundaries in order to be a useful word at all.Isaac

    Nah, I'm perfectly fine with it all being messy. As I said above, though, I think we need to be careful about the word's scope. I consider morality to be some sort of never ending process where specific rules are both input and output of thinking-feeling agents' actions. It's going to be messy (not sure about the extent to which it is arbitrary).

    Hitting old ladies is far from any of the ideals or standards within the general public definition of moral, so doing so is morally bad.Isaac

    In moral discussions, people tend to choose non-controversial rules, so controversies are going to feel implausible. You need to suspend disbelief, though, if you're going to use such examples for a thorough study of what the concept could mean. If my personal morals demand that I hit one old lady per week, and I do that, I'm not acting amorally. I'm not acting immorally with respect to my own moral compass, but I am acting immorally with respect to society's standards, and I'm going to have a hard time hitting old ladies in prison. (Maybe I'm secretly relieved, because I don't like hitting old ladies?)

    There's really nothing you can say beyond that, if you're not aiming for universalism.
  • Is anyone here a moral objectivist?
    Hopefully the former, especially as I wrote 'work' where I meant to write 'word' (new phone, different keyboard).Isaac

    I didn't even notice the typo (so much for careful reading...). And I'm still not sure what you're saying here.

    In order for the student to merely 'disagree' here, rather than be wrong about the meaning of the term 'morally bad' he must have his own private meaning of the term 'morally bad', one which is in disagreement with the one the rest of the language community uses. If, on the contrary, he does not have a private meaning of the term 'morally bad', then he must acquiesce to the meaning determined by the language community, and that does not include hitting old ladies.Isaac

    "Morally bad" represents a "negative moral evaluation". "Hitting old ladies" is a state of affairs prone to moral evaluation. A person who doesn't evaluate hitting old ladies negatively would be using the term "morally bad" incorrectly if he said from his perspective that hitting old ladies is morally bad, but he would be using "morally bad" correctly if he said from his society's perspective that "hitting old ladies is bad". Moral evaluations are always tied to a perspective. The meaning of "morally bad" isn't private; the personal evaluation tied to the word is, independently of whether there's agreement or disagreement (or indecision, or indifference).

    You seem to be getting rid of a useful distinction, and I can't figure out why. What do we get in return?
  • Is anyone here a moral objectivist?
    You haven't answered the question though. I wanted to know why you confidently allowed the student to have his own private meaning for the term 'morally good', but you're deeply suspicious if he tries to claim his own private meaning for the term 'hitting'?Isaac

    I didn't answer that question, because that's not what I intended to say, and - to be honest - I don't think I did. I called it a linguistic failure. Being wrong about "good" (he's not wrong about "morally"; the adverb's appropriate to the situation) and being wrong about "to hit" are both instances of linguistic failure. (I do allow him a private meaning for both words in some limited context - say, a diary written in code.)

    Basically, I misinterpreted your question, and I'm still not sure why you'd think I allow a private meaning for the term "morally good".

    You do seem to mingle language and morals at a deep level, in a way I don't quite understand. Sure, they're entwined, as you say, but it's generally not hard to follow the distinct threads, horrid tangles notwithstanding. Also, both language and morals involve social rules, so if you abstract enough you may end up in a place where they're the same, but then they also lose a lot of their usefulness as terms.

    For example:

    When the grocer delivers potatoes, you 'ought' to pay him because that's the meaning of the work 'ought'.Isaac

    This seems needlessly hard to parse or outright wrong. I don't know which.
  • Is anyone here a moral objectivist?
    So if, in the first example, the student says "I understand that you think it's 'hitting' to push my fist toward an old lady this way, but I disagree," why does no one treat it as a disagreement? It's not, he's just flat out wrong about what hitting is.Isaac

    Well, he's certainly flat out wrong. Whether or not he, in addition, disagrees is an empirical question. Personally, decontextualised like in this thread, I'm more likely to imagine irony designed to dismiss your intervention.

    If it really is a disagreement about the word "to hit", I'd be inclined to think that he's trying to find a "loophole in the law" rather than to act morally. But that, too, is an empirical question. Very unusual people do exist.
  • Is anyone here a moral objectivist?
    If a foreign student learning English pointed at someone hitting an old Lady and said "stroking", you'd be inclined to say "no, not 'stroking', that's 'hitting'". If they then said "morally good", why would you not similarly correct them and say "no, 'morally bad'"?Isaac

    If a foreign language student sees someone hitting an old lady, intervenes, and says "No, no. Morally good," we likely have a language problem. - A linguistic failure

    If you see a foreign language student hitting an old lady, intervene, and he says "I understand that you think it's morally wrong to hit an old lady, but I disagree," we likely do not have a language problem. - A moral disagreement

    If you see a foreign language student hitting an old lady, intervene, and he doesn't understand why, we likely do not have a language problem. - A moral failure

    There can obviously be overlap, but that's the gist of it.

    The relation between discursive ethics and practical morals is a rather interesting topic on its own, I'd say.
  • What School of Philosophy is This?
    Avery
    • don't believe that objective moral facts exist, because I haven't seen any compelling evidence that they do exist.
    • believe that objective morals are not required for a logical model to explain the world.
    • things start to make a lot more sense when you remove objective morals from a model of the reality.


    Have you tried out social relativism?

    A social relativist would say there are "objective moral facts", but they're probably not what you think of when you say the above. A social relativist would say that moral facts are a form of social structure. For example, when you walk down the street and notice an open door, you're unlikely to walk in. There's a range of likely reactions, but you're only going to make an overt moral decision if your situational curiosity comes into conflict with "this is not my house; I have no business entering". But even if you're just walking by, maybe without much curiosity, there's a habitual moral layer to your behaviour.

    An accidentally time-travelling caveman would have a very different reaction, because he'd have been socialised in a very, very different moral environment.

    Wanting moral right/wrong rules is, I think, very common in humans, because we use abstraction to navigate our environment. The problem with fixing moral rights and wrongs, though, is that our theories about what's right and wrong are a crucial part of our moral upbringing, and to the extent that they influence our behaviour, gaining new moral insight keeps the moral environment changing. Moral decisions are only perceived as such when they're problematic, and if a particular group of people solidifies a typical problematic situation into a norm, then there's a change in the moral environment for these people. And that change causes new unforeseen problems. There can be no universal right/wrong rules for this reason, but some constellations of rules can be more stable than others.

    So I think there are objective moral facts, but they're not about what's "really" right or wrong; they're about complex moral behaviour.
  • Privilege
    And if you hadn't said what I quoted right now, I'd probably not have realised it but: the headstart you get isn't your privilege. You get the headstart because of your (unnamed) privilege (I'm guessing it's shoe size, am I right?).Dawnstorm

    Eh, I'm not sure that's quite right. I'm having second thoughts. It's mostly a terminology problem, but I'm pretty sure privilege is attached to a factor, yet it's not quite the individual advantage either.

    (What's the etiquette for such a case, where I have amendments to make, and I'm still the last post. Editing my post? Quoting myself?)
  • Privilege
    But focus on only one race, one game if you like game theory.Alejandro

    I'm not sure how to deal with that example, though. Sharing the cash is a stop-gap measure at best, and flaunting your privilege at worst.

    A few other things:

    Eliminating my headstart can only be done by me right?Alejandro

    Generally, no, you can't eliminate your head start. It's baked into the system. In your example, you can walk back to the starting line, sure. In real life, you can't stop being male, white, or straight.

    And if you hadn't said what I quoted right now, I'd probably not have realised it but: the headstart you get isn't your privilege. You get the headstart because of your (unnamed) privilege (I'm guessing it's shoe size, am I right?).

    However, if I do it, I will lose the race because I do not have the same abilities that the others may have.Alejandro

    Right. Now you're underprivileged, and the headstart is "affirmative action". You're doing the right thing by investing in your skills. Affirmative action will cease once you catch up (or so the theory goes). Except it's only one race, so it's pretty pointless to invest. (And anyway, it's your example. You could have been an excellent runner with a headstart, who can afford to take it easy.)

    Finally, if it's only one race, and you're slow (and thus underprivileged), and you don't get a headstart, that's all right, too, since one of the winners might give you $50 to teach him your expertise.

    It's not about one race. It's never about one race. It's about a repeated and systematic pattern across many, many races. So I don't really know what to say about your solution, other than that it doesn't change much (and in some cases it might be better to keep the money, so you're not the well-meaning but clueless guy who rubs salt into a sore wound by offering a trifle).
  • Privilege
    What if me, a privileged and slow individual, trade $50 with anyone that teaches me how to run faster and win? I still have my privilege, I am not renouncing it, but now I am using it as a tool from which someone else may benefit.Alejandro

    So, you, a slow runner, keep track of the guy who does not have your advantage but still almost beats you, and give him the $ 50,-- you wouldn't have had without your headstart, so he can teach you how to run faster, and so that in the next race, he'll have an even harder time catching up. He may get bitter, and since he now has $ 50,-- his motivation to run hard drops a little. However, all the others will race each other (not you), so the fastest of them can have $ 50,-- from you. You get $ 50,-- with each race, and expand your advantage, until you're good enough that your investment is just a charity holdover to flaunt your status, with lots of guys behind you thinking: "That should have been me."

    That's not a solution; that's the problem.
  • The idea of "theory" in science, math, and music
    What I meant was that the idea of a "mode of limited transposition" didn't exist as something worth naming until exactly the moment when it was used by Messiaen (at least that's my understanding).

    I don't doubt that there's math in music, but I think the process by which math becomes musically relevant is fascinating and potentially understudied (or I just haven't found the right resources).
    Halley

    It's... difficult. I'm not a philosopher actually, but music theory has always reminded me somewhat of linguistic theory (something I know a little better). What they have in common is that we have lived systems we learn, and theories about how they work that feed into learning.

    So basically, you have this theory and it can be used in different ways:

    1. Describe what people are doing
    2. Use the theory as a method of learning (which impacts what people are doing and creates a feedback loop)
    3. Scour the theory for logical possibilities and see what people could be doing but aren't.
    4. Use it as a set of rules to judge the value of the product

    So there's a question in what ways a descriptive music theory is useful. For example, the following video discusses basically what theoretical framework is useful when describing the harmonics of "Sweet Home Alabama": Tonal Harmony? Modal Harmony? Or something else (with Tagg's tonic-outgoing-medial-incoming loop as an example)?



    The problem here is that what sort of description you find most useful often depends on what you hear, and what you hear may be influenced by the theory you've been exposed to.

    Similarly, you can write a piece of music with a particular theory in mind. And then you can make an adjustment because it sounds nice, but it's not entirely clear what you've done in terms of theory. Now, someone else comes along and doesn't like the adjustment you made. That person could easily try to use the obvious non-conformity as a legitimisation for why it doesn't work, but that would involve dismissing that it works for others, and dismissing the question of why it works for others as unimportant (since they're wrong).

    So you have the same theory as a set of regularities interpreted as (a) a description of what's going on, (b) a generative aid or provisional template, and (c) a system for judgements of good and bad. I see that all the time in both composition theory and linguistics.

    Scientific theories for behaviour-external fields such as physics don't have (b) and (c). If you see an apple floating up into the sky you won't punish it for violating the laws of gravity; you'll try to figure out what's going on. And gravity, unlike language or music, isn't something you meaningfully participate in. (You do participate in gravity, obviously, to the extent that you're a physical object).

    I'm not familiar with Messiaen. It seems to me he's been scouring the formal properties of the theories for things that are logically possible, some of which have been done before (the whole-tone scale is identical to his first mode as far as I can tell?), and some maybe haven't. You could, in theory, compose music with a random scale (random variables: number of notes (1-12), location of notes on the chromatic 12-tone scale). Even then, there are still elements we're taking for granted (for example the primacy of single notes a half-step apart), and they're ingrained in culture (for example in the form of instruments). That's not a problem if all you want is a generative method you can deviate from at pleasure, or if you want judgement standards. As a descriptive set of rules, though, you might encounter music you can't describe or will "misunderstand" if you use that particular theory, and that's something you need to be aware of.
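    The "limited transposition" idea can be made concrete: a scale has the property when some transposition by fewer than 12 semitones maps it onto itself. Here's a minimal sketch, treating pitch classes as integers mod 12; the scale sets and the helper function are my own illustrative choices, not anything from Messiaen's own notation:

```python
# Pitch classes as integers 0-11 on the chromatic scale.
def distinct_transpositions(scale):
    """Count the distinct pitch-class sets among all 12 transpositions of a scale."""
    return len({frozenset((note + t) % 12 for note in scale) for t in range(12)})

whole_tone = {0, 2, 4, 6, 8, 10}        # Messiaen's first mode, if I have that right
octatonic  = {0, 1, 3, 4, 6, 7, 9, 10}  # his second mode
major      = {0, 2, 4, 5, 7, 9, 11}     # an ordinary diatonic major scale

print(distinct_transpositions(whole_tone))  # 2  -> "limited": only two distinct forms
print(distinct_transpositions(octatonic))   # 3  -> also limited
print(distinct_transpositions(major))       # 12 -> every transposition is distinct
```

    A randomly generated scale would almost always come out at 12, which is one way of seeing why the modes with fewer distinct forms were worth singling out and naming.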

    Not an expert in anything, but these are my thoughts about the topic.
  • Moore's Puzzle About Belief
    As you say, it's a silly statement, but also a true statement. That's the puzzle.Michael

    For me, the puzzle is why this is a puzzle, but then I haven't read Moore and know little about him and the context of this puzzle.

    Here's the thing: "It's raining, but I don't believe it's raining," isn't necessarily a silly statement. What if we're talking narrative present (present tense for past events) with an intrusive narrator? "It's raining (say I, the narrator), but I (my past self) don't believe it's raining." When speaking, you need to take perspective into account in a way you don't have to if you consider well-defined philosophical propositions. If you ignore perspective, you can create plenty of absurd situations:

    Bill: I'm Bill.
    Joe: No, I'm not.

    Both are correct, and yet Joe seems to contradict Bill. What a puzzle! This constructed situation is nonsensical, because it entails that Joe can use "I" correctly but can't parse it when someone else uses it. (Not sure if cognitive impairments exist that make such a situation plausible.)

    Similarly, Moore's puzzle is the result of assuming things about Macintosh's knowledge and then decontextualising him so that his knowledge is only partially relevant.

    It's just not a puzzle that you can't truthfully speak the truth about whether or not it's raining if you don't know whether or not it's raining. Macintosh could gamble on it, though, if his intention is to speak a true sentence, rather than to speak the truth about rain. Basically, Macintosh would be betting on himself being wrong about rain. That this leads to real-life absurdity doesn't automatically cause a philosophical problem. It depends on what problems you want to explore (and this is where my ignorance of Moore limits me).

    I'm not surprised they say this puzzle helped develop pragmatics. It's definitely relevant.
  • Belief in nothing?
    Okay. But if you were saying, "So far no one has been able to convince you that World War II actually occurred"...where would that leave us?Frank Apisa

    "World War II" is a valid value for "occurring". Even if we had no evidence, the meaning is fairly straightforward. "God", the creator god of the monotheistic religions at the very least, is different from that. If "God" created everything there is, then existence is a product of that process, and to say that "God" exists either sends me into an Escher-painting equivalent of meaning, or it's an incomprehensible mystery for which I have no intuition.

    In any case, the logic for the empirical world, which I'd be prone to apply to things like "World War II", doesn't apply. If it did, most theists I know wouldn't be able to believe in God; as it is, when I outline what sort of God I don't believe in ("bearded man in the sky"), they say they don't either.

    God concepts are manifold, and Shintoist kami are very different beings from the monotheistic Gods, but there's also this spiritual, transcendental whiff to it that I have trouble understanding. I always end up at a point where there is no discernible difference between any one God existing or not. The only difference I can see is the word-behaviour of the believer.
  • Belief in nothing?
    This is true for everything. X either exists or does not exist. It is a mutually exclusive proposition.Frank Apisa

    My entire point, though, is that so far, no-one's been able to convince me that "God" is a valid value for X in that instance.

    A lot of this question comes down to sentence structure and which words are used. “All Gods are fictional” and “Gods don’t exist” are synonymous, but add “I believe” to the front of each and (perhaps?) one negates itself, while the other does not. Hence why I say it comes down to sentence structure and essentially the logic behind grammar.Pinprick

    Language isn't that logical, though, when used in the wild. If you insist on thorough grammatical logic within philosophy, you either have to be very careful how you phrase things, or you create an insulated bubble, where your conclusions have little to do with the world we live in.

    Under the assumption that "All Gods are fictional" and "Gods don't exist" are synonymous (which is not a given in every context), you could lead someone to commit to the positive phrasing and thus saddle them with a belief. Intuitively, I'd consider that move a rhetorical trick rather than anything philosophically meaningful.
  • Belief in nothing?
    They do not "believe" C...they KNOW C.Frank Apisa

    ?

    You are supposing that A is a positive statement...and B is a negative one. But that is not so. Both are positive statements. If made as assertions...BOTH would bear a burden of proof from the person making the assertion.Frank Apisa

    This isn't about the burden of proof. It's a negative statement, because it negates a positive statement. I brought this up precisely because the relation between the syntax and the semantics isn't as straightforward as it appears.

    If I were to claim that the platypus doesn't exist, that would be a negative statement, but the burden of proof would be on me. Whether a claim is positive or negative in syntactic structure doesn't really impact the burden of proof.

    I'll demonstrate why I brought this up with my reply to Pinprick.

    Perhaps?

    All Gods are fictional.
    No Gods are fictional.
    Pinprick

    That may work. The question, then, is whether "All Gods are fictional" and "Gods don't exist" are semantically tied together, phrased once with a positive and once with a negative structure. If so, can you say that there is a "belief in nothing"?

    I don't really have an answer to this myself, except that I think it pays to distinguish between intuitive concepts, the semantics tied to word structures, and the structures themselves.

    I think this because 'theism' is defined - definite - insofar as it's a 'conception of divinity' that consists of distinct truth-claims about g/G, and therefore, to my mind, are not "meaningless" ontologically, epistemically or ethically. g/G, I agree, is meaningless, but what we say about g/G - if it's proposition - is not. (Obviously, I exclude noncognitive theism, for instance, from consideration and give the mainstream / classical theists their cognitive due.)180 Proof

    I tried to address this in a longer post, but I talked myself into a corner and got confused.

    Basically, I view statements like "God exists" as having the structure of a statement, but their social function is appellative rather than referential. A complex of behaviours is tied to ritualistic verbiage. Except that's clearly not how the theists around me see it. And that's where my confusion enters.

    Note that I distinguish between "undefined" and "meaningless". It's my impression that God being "undefined" (or "undefinable") is part of the mystery and thus meaningful to theists. I can't mine meaning that way. I don't know how that works. And that's what makes me an atheist.
  • Belief in nothing?
    A: God exists.
    B: God doesn't exist.
    C: God may or may not exist.

    Some people believe neither A nor B, because they believe C.

    For me the concept of God holds no meaning, so I reject A, B, and C as the same sort of alien stuff. I reject the topic itself as meaningless in both theory and practice, not the propositions as untrue.

    Also, how you phrase things often determines what the negative is:

    To use Coben's example (abridged because I'm too lazy to type it all):

    A1: There is a feline that hasn't been catalogued by science.
    B1: There is no feline that hasn't been catalogued by science.
    C1: There may or may not be a feline....

    A2: Not all felines have been catalogued by science.
    B2: All felines have been catalogued by science.
    C2: All felines may or may not have been catalogued by science.
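    For what it's worth, the equivalence between the two phrasings is an instance of quantifier duality from first-order logic (my gloss, not something spelled out in the thread):

    ```latex
    % Let P(x) mean "feline x has been catalogued by science".
    % Then A1 and A2 are the same claim, as are B1 and B2:
    \exists x\, \neg P(x) \;\equiv\; \neg \forall x\, P(x)        % A1 equiv A2
    \neg \exists x\, \neg P(x) \;\equiv\; \forall x\, P(x)        % B1 equiv B2
    ```

    So whether the claim surfaces as "positive" (A1) or "negative" (A2) is an artefact of phrasing; the underlying proposition is identical.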

    Can anyone here think of a way to phrase "God doesn't exist," as a positive, to which "God exists," would be a negative? I can't.
  • About This Word, “Atheist”
    But, if you want to suppose someone saying that either "yes" or "no" is correct for a "yes or no" question...go with it.Frank Apisa

    That's not what I'm saying, though. I'm saying that my hunch is that it's possible to be fooled by the grammatical structure of a sentence. Just because you can formulate a yes/no question about gods' existence doesn't mean that this formulation is a valid treatment of the concept of God.

    For what it's worth, I do think I'm overshooting my mark by treating all god concepts the same. Even translation is difficult. A monotheistic God is rather different from the Greek lot, and they're both pretty different from Shintoist Kami. I'm shaky on all this, because I'm generally not bothered by any of it in my daily life.

    I mean what about:

    Does the Mellow-winged Staggerthwart exist? (Can you answer the question with yes/no before figuring out what this is supposed to be? I just arranged random words here. There's no meaning to it.)

    Or self-referential: Does existence exist?

    Not all sentences of a certain structure are necessarily valid representations of... well, anything meaningful. It's an empty phrase that traps people in an unresolvable conflict and sorts them into two sides, where emotional intensity is substituted for content. The divinity aspect allows people on either side to shift goalposts at will. People can be umpires in the game, but they can't do anything about the goalpost shifting, because it's in the rules.

    Goalpost shifting is just as easy with claims about nearby aliens, too, but there it's not in the rules. I realise that the burden of proof, here, is on me, and since it's just a hunch (with ever-decreasing certainty across different God concepts), I don't quite know how to discharge it, or if I can at all.