Comments

  • Mathematical Conundrum or Not?
    An answer is correct if and only if its value matches the chance that an answer with that value will be selected.Michael

    I think that formulation is incorrect, because if this truth condition yields "true" for more than one value, the chance to be correct <i>overall</i> is greater than for any of the individual values.

    Take for example:

    a) 25 %
    b) 50 %
    c) 50 %
    d) 60 %

    A has a value with a probability of 25 % to be chosen, so it's correct. B and C both have a value that has a chance of 50 % to be chosen, so they're correct, too. But that would render the overall chance to be correct 75 %, and according to the problem's formulation, none of them would be correct. But if none of them is correct, then the way we arrived at the correctness of the individual values isn't valid, as it doesn't address the problem.
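    The self-undermining step can be checked mechanically. A minimal sketch (assuming, as the example does, that one answer is selected uniformly at random; the variable names are mine):

```python
# The answer sheet from the example; values are percentages.
answers = {"a": 25, "b": 50, "c": 50, "d": 60}

# Chance (in %) that a uniformly random pick lands on an answer with
# the given value: matching answers / total answers.
def pick_chance(value):
    return sum(1 for v in answers.values() if v == value) / len(answers) * 100

# The stated truth condition: an answer is "correct" iff its value
# equals the chance of picking an answer with that value.
satisfying = [k for k, v in answers.items() if v == pick_chance(v)]
print(satisfying)  # ['a', 'b', 'c'] -- d's 60 doesn't match its 25 % chance

# But the chance of picking one of these "correct" answers is 75 %,
# a value none of them states, so the condition undermines itself.
overall = len(satisfying) / len(answers) * 100
print(overall)  # 75.0
```

    Applied answer by answer, the condition marks a, b and c correct, yet the resulting 75 % matches none of their values.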

    I don't even know how to formulate this problem in mathematical terms. I don't understand the truth condition.
  • Belief
    I'm still waiting for you to explain the problem mentioned in the first sentence above. It does not follow from the fact that we have all sorts of knowledge about bricks that that knowledge is problematic for treating a brick as a physical object.

    I really have difficulty with the way you're employing the notion of perception. Perception is not equivalent to understanding. We perceive a brick. We understand it as "a brick". The dog perceives the same brick. He doesn't understand it as (something called) "a brick".
    creativesoul

    Sorry for making you wait. I'm too slow a writer, reader and thinker - and this thread outpaces me. Also sorry that my answer's likely going to be unsatisfactory since simply catching up with the thread takes up most of my forum time.

    I think perception is a complex mental activity that involves understanding what we see at various steps. Every individual, whether human or dog, faces the same stimulus: a brick. But we're not perceiving something and then interpreting it; our interpretations don't come after perception; they run simultaneously, to the point that by the time the brick enters our consciousness it's already a brick - fully integrated into our full perceptual state (which includes everything we see, hear, feel...). It's not that we see a physical object that is a brick; it's that we end up seeing a brick, and sometimes it's relevant that it's a physical object. Our interpretations of what we see guide what we pay attention to and sometimes supplement what's not there (I'd need to find evidence in experiments for that and don't have the time) - by the time we "see" an "object", a lot of interwoven mental activity has taken place, so you simply can't say (other than analytically) that by the time you've isolated a brick as an object, what you see is merely a representation of what's there in the physical world.

    Seeing isn't just "burning the image into the retina", and if it is, what you're seeing is not yet "a brick". And perceiving isn't just "seeing" - integrating various inputs, I think, is already a meaningful activity guided by interpretation.

    I don't think that's all that different for dogs either, maybe a tad less complex (but maybe not).
  • Belief
    The object of belief can't be a physical object anyway. I believe that brick. That makes no sense. I believe that the brick is red. That makes sense.frank

    What's your criterion for what makes sense? Grammar? And what is it to be the object of belief? Must it be limited to what would render the sentence grammatical?Sapientia

    I'll start from here, because it's easiest for me.

    Whether or not a physical object can be the object of belief cannot be determined by saying that "I believe that brick," makes no sense. "I believe that brick," is ungrammatical and has no immediate meaning (though you might guess at any intended meaning depending on the context of the utterance). It's ungrammatical because the pattern is "I believe that [clause]", but here you have "I believe that [noun]". A noun doesn't always describe a physical object. ("I believe that justice," is equally meaningless.)

    The usual phrasing when you make a single entity the object of belief is with "in": "I believe in bricks," or "I believe in a/the/this/some brick." These result in grammatical sentences. What remains is a question of meaning. What would I be saying, if I said "I believe in bricks"?

    If you say something like "I believe in bricks," do you have to be able to analytically detail what it is that you believe in? If we take this thread's definition of belief as a propositional attitude, "I believe in bricks," would be a blanket formulation that references but does not spell out a bundle of propositions. But do you have to be able to provide an exhaustive list before you can be said to believe in "bricks"?

    In my first post, I used the example "I believe that God exists," rather than "I believe in God," precisely to avoid this problem. But it's sort of important.

    If you can believe in single entities without being able to detail an exhaustive list of implied propositions, what does this mean for the act of believing? Is "believing" this way the commitmental equivalent of a blank cheque?

    Once again, the question is whether a proposition is a sentence, or a special type of meaning expressible with varying degrees of success by varying sentences. If it's the latter, you might well "believe that brick" (a non-native speaker, for example, or a very small child might not know better), but you'd be advised to actually come up with a better formulation. If a proposition <i>is</i> a sentence, rather than simply being expressed by one, "I believe that brick," is ungrammatical nonsense, and not a proposition at all.

    This is also the tie-in with belief in non-human entities (from dogs to thermostats). If a proposition is not a sentence, but expressed by one, then maybe propositions can be expressed also by actions, or maybe even by mere behaviour.

    And finally, there's a problem with treating a brick as merely a physical object. When you see a brick and recognise it as a brick, you activate knowledge about bricks you have. The knowledge about bricks that you have also prevents you from seeing the brick as it is: brick-naively, so to say. What you see is always already an object-subject relation. This is especially the case with human artefacts, like bricks, which are made to purpose. Seeing a brick as a brick is not so different from understanding the meaning of a word, or not understanding the meaning of a word but recognising it as a word whose meaning you don't understand. So in that sense believing "that brick" could be affirming your learned world view, while centering your attention on a brick. Whether or not it's useful to stretch the term "belief" this far, again, is a question of what you're intending to do with the word. I could designate that sort of meaning to "I believe that brick," using an ungrammatical and thus unintuitive phrasing to highlight an unintuitive concept.

    If that's too long and confusing, my central point here is this: You can't just assume that a proposition is identical with its phrasing. Saying that a proposition has stable meaning, no matter how you formulate it, and saying that a proposition is identical with its phrasing have different implications.

    Physical objects are out there in the world and can be perceived by anyone (capable of perceiving physical objects), but you can only perceive them as a specific type of object (say, as a brick), if you have that type already in your mind. If you come to an object naively, you'll still have a world view, and your attempt to deal with an object will eventually create a type. As soon as we have a type, there's potential for calling that belief. I wouldn't, but it's not absurd.
  • Welcome to The Philosophy Forum - an introduction thread
    Thank you for the welcome. I'm mostly hanging back and reading: I'm a slow writer, and by the time I have something to say threads usually have moved on and someone else has said what I would have.

    It's nice to have a forum where people talk to each other rather than at each other.
  • Belief


    "Semantic field" is a term used in structural linguistics and anthropology, and it's simply the range of meaning associated with a word or a set of closely related words. It's not the most precise concept out there, and it's theoretical in the sense that you cannot meet a pure semantic field "in the wild", because it's always already organised (say into a word, a set of words, a taxonomy...). It's a useful concept, I think, when comparing things like languages. I found it personally useful when figuring out the technical terminology of linguistics and sociology, since the same "sign body" (say "adverb", or "social role") doesn't always cover the same things (i.e. it depends on who uses the term).

    You say a definition can be wrong, but before you can determine whether or not a definition is wrong, you'd need to know what it is you're talking about, and that's sort of the problem in a thread titled "What is belief?" What I also meant to say, but what I probably buried a bit too much in excess verbiage, is that I think "A belief is an attitude towards a proposition," is an operational definition - not a theoretical one. It drives at methodology rather than meaning. Normally, such a line is connected to a theory that sheds light on all the shortcuts in the operational definition. For example, the question of whether a belief needs to be linguistic or whether it can be pre-linguistic would have been addressed in the theory. When I first replied to the thread, I probably took it to be a shortcut for something like "A belief is an attitude towards something that's expressible as a proposition," but I didn't properly think this through until you brought it up (even though other people have been talking about pre-linguistic beliefs and I nodded in appreciation when I read 's post, here).

    It's a bit premature to say a definition is "wrong" when we can't even be sure yet whether we're talking about the same thing. Some people might indeed only use "belief" for propositional attitudes in its most literal sense, and whether that's sound or not depends on what other words they use and when and how. It's not like we can encounter unmediated beliefs and ask what they are: we encounter things that imply belief - behaviour, linguistic and otherwise. Or artefacts that represent language (like a forum post).
  • Belief
    It only follows that there are no pre-linguistic and/or non-linguistic belief unless propositions existed prior to language. That alone is more than enough ground to warrant our dismissing the above belief statement, because there most certainly are such things.creativesoul

    That depends on how we organise the semantic field, though. In an experimental set-up, for example, I could see "A belief is a relation between an individual and a proposition," as an operational definition derived from a theoretical definition - you'd need a well-founded theory of how the linguistic faculties connect to the pre-linguistic faculties of the mind. That is: we believe a lot of things we never formulate, but it is possible to formulate them and test them this way.

    That there are different ways to organise the semantic field is a key problem in this thread. If we're interested in a semantic field that we might describe as "taking things for true", we may come up with different words: knowledge, belief, assumption... But even if we have the same words, they don't necessarily relate the same way in different people's usage.

    JTB for example sees "knowledge" as a subtype of "belief", but it's equally possible to see them as distinct cognitive behaviour - two flags planted on a continuum so that one either knows or believes, even if there are cases where it's hard to tell which applies.

    The more I read this thread and think about it, the more I lean towards a definition that keeps knowledge and belief separate and that has us generate "belief" to the extent that knowledge becomes problematic. What got me thinking more along these lines is ' specific example:

    When I enter the room and see the pens and papers I know there are pens and papers. Once I start thinking in terms of belief, then doubt enters.Janus

    I tried to think about this in terms of 's diagram of JTB, and failed, precisely because of a basic difference in the way the terms are used. If we see "belief" and "knowledge" as distinct cognitive behaviour, with belief arising out of problematised knowledge, I think we need to broaden the context.

    One of the things I think is important is the relationship between meaning, truth, reality, and motivation:

    I walk into a room, and there's an apple on the table. It's not a fruit, but a wax simulacrum. If I never found out that fact, did I "know" that there is an apple on the table? Rather, if I notice the apple in passing, but it's not in any way relevant to me in the situation, then the proposition "There's an apple on the table," might be true on the abstraction level relevant to me: that is, the differentiation between a fruit and the wax simulacrum of one is irrelevant to what the proposition means to me.

    But that means that all knowledge, belief, and truth - as it occurs in the world - is context bound. And since contexts can change, truth is not a stable thing, and it gets complicated to figure out whether "There is an apple on the table," is true or not. Complicated, but not impossible. What we have is an intricate truth system attached to a proposition.

    And this is where the linguistic nature of propositions comes in: the sign body of the proposition "There is an apple on the table," remains a constant, even if context changes. In real life, we re-contextualise all the time. Socially, we negotiate meanings, and as our own motivational structure changes, so might the elements of the truth system "There's an apple on the table," that we pick out as relevant. That is, a photographer might be fine with a wax simulacrum in a way a hungry person decidedly will not.

    Now, as soon as we topicalise the proposition "There is an apple on the table," we enter the meta realm. We might be arguing just for the sake of being right, or we might have motivations that make it important that the proposition be true (e.g. I might win a game, if it is, and the rules haven't foreseen the ambiguity). That is: "belief" can, in situations like this, rescue a proposition from being false, by ordering the semantic field in a way that makes it true. (Side note: This is only hypocritical if the semantic field was ordered in a different way, not if you differentiate from an unspecified level of abstraction.)

    So, basically, "belief" has two general meanings:

    a) Belief that facilitates action in the face of uncertainty: Belief in P to interact with the reality that P represents, or

    b) Belief that takes P as symbolic for some related goal: deciding the outcome of a game, group membership...

    I think (a) can reasonably be pre-linguistic; (b) can't.
  • Belief
    I find this thread extremely interesting, but since I'm no experienced philosopher, I also find it hard to follow, since I don't always understand the terms an expert would. I apologise if I don't always follow up on posts, but I sometimes need to take the time and read up on related concepts, and by the time I'm done there's almost always something else to read up on.

    For me, having a belief about believing is motivated directly by social interaction, where different people are comfortable with different levels of certainty, and if you can only take one course of action, some people might prefer to minimise risk while others might prefer to maximise (potential) reward, and this in turn is dependent on who feels what outcome the most. So "belief" might be a factor that gives people advantages through various avenues: less anxiety, less time spent thinking...

    Now, the degree to which belief needs to be justified in the first place is a matter of social negotiation, too. I'm not quick to make up my mind. The result is that not only do I not often get my way, but by the time I get any way I'm usually not sure what my way would have been, and in a sense this means I always have to deal with other people's decisions. This can lead to frustration and motivate a world view that suggests that "all belief is unjustified". But I'm not sure I actually believe that, see?

    But I do see a continuity here: belief about belief is not that different from the belief that the sky is blue or that sandwiches are nutritious. It's just that the more abstract terms become, the harder it is to describe and circumscribe the referential objects as well as the concepts in our minds. And this is why we have this thread to begin with. What is belief?

    So, if we talk about animals in terms of expectation and frustration of expectation, as Janus suggests, then I have to ask why we don't do the same for humans? Do we reach limits? Is there something we can't express? And if so, is the same true for animals, but in different ways? It's very hard to imagine what human language use would look like from a different system, perhaps one we can't understand. When we hear a word, we hear a word. When we hear a language we don't understand and whose prosody we're not used to, we may not know where one word ends and another begins, but we still recognise language. The less "like us" things become, the more meaning disappears, but how do we deal with it?

    When we talk in terms of expectations for animals but belief for humans, what becomes difficult is comparison. In a sense, making that distinction is a comparison in itself - but what it means isn't clear other than that humans are different from all other animals, which is trivial (and true for all other animals as well).

    So when we move away from humans towards thermostats on a belief-similarity slider, how do we map the journey semantically? What about belief do we share with apes? With mammals? Vertebrates? Life? Inorganic matter? At what point does the comparison stop yielding results?

    One thing about the thermostat discussion that's drawn my interest is the formulation "The thermostat believes it is cold." What struck me here is the word "cold". The thermostat activates at any temperature we set; the distinction between warm/cold doesn't come into it. This is a judgement that moves away from the very specific temperature. It's an abstraction, and one that has different implications. There's a hidden should-proposition in that word: the thermostat should activate because it is cold. And it's not a should-proposition we can lay at the thermostat's feet, because it's us who set the temperature. All the thermostat can "believe" is that it is time to activate (according to its setting).

    But if we set activity pairs (activate/don't activate in this case) as an indicator for belief (and the thermostat has two belief settings: it's time to activate/it's not time to activate) - then what does that mean for the distinction between value judgements and facts? Theoretically, the thermostat can be wrong about the specific temperature, but to what extent can it be wrong about "when to activate"? Does the origin of the setting matter?
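    As a toy sketch of that asymmetry (the class and its numbers are purely my own illustration, not anything from the thread): the thermostat's only "belief" is whether it's time to activate; it can be wrong through a faulty reading, but never about its rule, which we supplied.

```python
class Thermostat:
    def __init__(self, setpoint, sensor_error=0.0):
        self.setpoint = setpoint          # chosen by us, not by the thermostat
        self.sensor_error = sensor_error  # the only way it can be "wrong": bad readings

    def reading(self, actual_temp):
        # What the thermostat takes the temperature to be.
        return actual_temp + self.sensor_error

    def time_to_activate(self, actual_temp):
        # Its entire "belief" repertoire: activate / don't activate.
        # "Cold" never enters into it; the rule itself can't be wrong from the inside.
        return self.reading(actual_temp) < self.setpoint

t = Thermostat(setpoint=20.0, sensor_error=-3.0)
# The room is actually 21.0, above the setpoint, but the faulty sensor reads
# 18.0, so the thermostat "believes" it is time to activate.
print(t.time_to_activate(21.0))  # True
```

    The value judgement ("it should be 20 degrees") lives entirely in the setpoint we chose; the thermostat only ever tracks a fact, and even that only through its reading.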

    When a dog who misses its previous owner refuses to eat, have the should-settings changed? When *I* refuse to eat because I miss someone, I can directly detect whether my should-settings have changed: I should eat, but I can't bring myself to. When it's you, I can ask. With a dog? There is no shared language, but does that mean there is no dog-internal language whatsoever? How would we know?

    What I wonder is whether we need an inside-view to talk about belief, and if so, when we stop granting an inside-view. I'd say the gradient is one of similarity, with the only inside-view we know directly (our own) as the initial point of comparison.
  • Belief
    Wouldn't we distinguish instinct by the fact that it doesn't link up to any proposition? Or would you say that it actually does?frank

    No, I agree. Instinct is just an impulse to execute a specific behaviour. I think belief is more complex than that.

    It's just when I go back to the edited original post and read:

    So, that John is hungry, and that John believes eating a sandwich will remove his hunger, we have a sufficient causal explanation for why John ate the sandwich.Banno

    I personally run into a problem, because I think both eating and believing are component actions that branch off the same development. We recognise a sandwich as edible the moment we see it; it's not an instinct, because a sandwich is an artefact we create. That is, if I follow the section about action I end up with belief as an internal modelling of the world (a concept already brought up in this thread) rather than with belief as a propositional attitude. But at that point it's not much more complex than an instinct, sort of the flip-side of one: if the instinct is to eat food, the associated belief would be simply the ability to recognise food. That precedes any proposition, though.
  • Belief


    Clarification question: Are "Belief X causes action A," and "Instinct causes action A," two mutually exclusive propositions?

    I'm asking because different definitions of words lead to different slots in a causal explanation: under some definitions "belief" and "instinct" can occupy the same slot.

    I have this little narrative in my head:

    A: I'm hungry. There's an apple on the table. I eat the apple. I'm no longer hungry.

    B: I'm hungry. There's something on the table that looks edible, but I'm unsure. I either choose to take a risk, or I form an ad-hoc belief that surely this is edible (to avoid paralysis from anxiety).

    But that would result in its own definition, one that has something to do with the bracketing of risk. You might - under such a scenario - model belief as the deciding factor in a battle of basic emotions (e.g. fear of starving vs. fear of poison). It's not that you think A or B is true: if you're completely honest, you have no idea. You've just decided to choose A over B, because inaction is disastrous either way and you're psychologically unable to face the risk head-on. Belief mitigates the risk of inaction and drives you to act. (In a slightly different take, the ability to form beliefs might keep Buridan's ass from starving.)

    If you think that belief is something more basic, though, this won't work - for example, what decides which "belief" you form? The belief that what you see is nutritious? The belief that what you see is poisonous? Certain learned cognitive preconditions might come into it (in addition to the relative strength of the respective fears), and you might want to call those part of "belief". But in that case, they wouldn't be just "propositional attitudes".

    Am I making any sense?
  • Belief


    Well, there is a problem here.

    "X is hungry" restricts X to objects that can have the attribute hungry. This includes both humans and dogs. This isn't controversial.

    But if we then ask why being hungry leads to eating certain things and not others, we look for explanatory principles. What motivates us to turn towards "belief" when we talk about humans, but "instinct" when we talk about dogs?

    There are quite complex discussions on that with regard to learning versus coming equipped with the knowledge; it's not the details that matter here. Rather: for our purposes, what we're doing is to position "belief" and "instinct" as rival explanations. So what is the relationship? If "hunger" is roughly the same for humans and dogs, why would the underpinnings for eating be so very different?

    That is: can we assume belief in human actions, when the behaviour is learned, automatic, uncontroversial, and usually not formulated? My default assumption is that when choosing what to eat, we're not that different from dogs, where it doesn't actually matter whether we had to learn what is "good to eat" or came equipped with it.

    I think in guessing at beliefs from behaviour, we might actually be overextending the reference for "belief". Or differently put, I'd probably reverse this: "I am hungry. I believe eating X will satiate my hunger. Therefore I eat X." to "I usually eat X to satiate my hunger. Therefore I believe X satiates my hunger."

    What makes us do things? Instinct, habit, etc. Belief is a factor, but usually only when we actually contemplate our actions. My hunch is that the belief gets activated only when someone or something casts doubt on the things "we usually do". (Under quotes because I consider thought-habits a form of doing, and I'm not quite sure of the range of referential objects I'd associate with that.)

    This would also solve the question of taste, here: if you set an apple and a banana before me when I'm hungry, I'll always go for the apple, because I don't like bananas. No belief comes into it, but there's no significant thought going into that decision either. If you replace the banana with a brick, my mind's not going to be busy thinking "Well, I'll have trouble digesting the brick, so I go for the apple." My mind's going to be busy questioning your motives for offering me a brick. Is this a Monty Python skit? If I take human agency out of the equation, I'll just ignore the brick completely and take the apple. Basically, my semantic register doesn't tag the brick as food, and doesn't tag the banana as "good", and there's a decision hierarchy in place that makes me pick the apple. Belief might come into it with "brick vs. apple", while taste might come into it with "banana vs. apple". But it's essentially the same process of elimination.

    I think beliefs are attached to actions, and may sway decisions in the presence of doubt, but they don't motivate decisions. I think it makes more sense to place "belief" into a sort of feedback-control system rather than a motivating system.

    Whether or not it's a category error to place "instinct" and "belief" as rival explanations for action depends a lot on how we define things. But my default reaction is to treat it as a category error. In simple terms: I don't think "belief" is something as basic as "instinct"; they operate on different levels.
  • Descartes: How can I prove that I am thinking?
    Maybe I should stay out of this thread, because I've never read Descartes myself, but here's a reply based on what I've read about this:

    • Thinking isn't the basis of your existence. It's the only thing you can't doubt (and that makes sense to me, since doubting is also a form of thinking: if you don't think you don't doubt, and there's no problem left to discuss - not that that's any sort of argument; it's just a good place to stop.)
    • You don't prove that you're thinking, you just intuit it. And unlike many other things you intuit, you can't doubt it away. (If you can, I'm immensely curious to learn how.)
    • It doesn't matter whether or not anyone controls your thinking. If you're not thinking there's nothing to control. I always sort of assumed this was about direct experience, and about thought in particular because radically doubting things is a thought process.

    Again, this comes from someone who's never read Descartes, so take this post accordingly.
  • Belief


    Under these definitions: do I have to understand the proposition "God exists," to be an agnostic? Or differently put, is not understanding the proposition "God exists," sufficient to make me an agnostic? Is the difference between not understanding a proposition, and understanding a proposition but believing it to be undecided (or undecidable) relevant?

    When faced with a proposition, how do I find out what it is that I believe? If I believe that two contradictory propositions are true, but I am unaware of the contradiction - do I hold at least one mistaken belief, or am I wrong about at least one of my beliefs? Is this a meaningful distinction in the first place?
  • Belief
    Banno
    • A belief is a relation between an individual and a proposition.
    • The individual must understand the meaning of the proposition in order to correctly be said to believe that proposition.
    • The individual thinks the proposition is true.


    Given this formulation, how would you distinguish a belief from a working hypothesis?

    For example, I'm an atheist. I intuitively reject the proposition "God exists," and so it's not that hard to manoeuvre me into situations where I commit myself to saying "God does not exist," is true. Is this already a belief, or is it a clue that I hold a belief that is incompatible with the proposition "God exists," and it is politically expedient to claim that I believe "God does not exist"?

    Am I rejecting the proposition "God exists," without committing to its negative? Is what I'm really rejecting the relevance of the proposition, rather than its truth value? That is, I don't care and don't want to spend the time to figure out what I believe?

    "God does not exist," works well enough for me as a working hypothesis: I act as if God does not exist. But acting as if God does not exist is not the same thing as believing that God does not exist. Imagine that theists don't exist. Obviously, I would not have to be an atheist. In many cases I would act the same as I do now, but in situations where the theism/atheism divide is relevant, I do act differently. A working hypothesis like "God does not exist" is only of use because theists exist (I'm not motivated to invent theism just to deny its existence).

    If we define "belief" as a propositional attitude, I have a problem here: I wouldn't be able to hold an intuitive belief that I find hard to express in words, but that's pretty much what I experience. I'm uncertain about a lot of things, and that I react more vehemently against theism than, say, materialism is at least partly down to a defense mechanism against perceived social control. If it's possible to figure out intuitively held beliefs by making propositions and observing your reactions towards them, then beliefs must precede propositions in some way - that is, rather than a belief being an attitude towards a proposition, a belief would have to be something more foundational - something that gives rise to your attitudes to propositions.

    I find "belief" harder to define that way, but it addresses a second problem I have here: namely that you have to understand a proposition to believe in it. Intuitively, I don't think so. You can believe that a proposition is true because you trust the person who utters it. Now, you can easily rephrase things to make it fit. For example:

    I do not understand proposition A.

    I understand the proposition "Person B understands proposition A," and think it is true.

    I understand the proposition "Person B thinks proposition A is true," and think it's true.

    With these additions, I could believe a lot of things to be true without understanding them. All I need to do is "trust an expert".

    But I think if I do this something gets lost. I have an ill-thought-through hunch that we generalise "trusting experts" from childhood on (the first probably being our parents), so that there's always some sort of social component already included. That is: "belief" may be a mechanism to restrict doubt, so that we don't find ourselves eternally unable to make decisions.

    In other words, maybe by judging "propositions" we tag as "important" we're really picking our team; maybe "beliefs" are propositional predispositions rather than attitudes? The likelihood to respond to a certain proposition either favourably or disfavourably? That way, you wouldn't form an ad-hoc belief every time you say "that's hogwash!"

    I apologise if this doesn't make much sense. It's just that if I see my shoe laces come untied, I bend down and tie them. If someone were to formulate that in propositions, like "Your shoelaces are untied" (fact: true/false) and "You should tie them" (value judgement: true/false), I can have attitudes to those propositions, but I have a hard time considering them beliefs just on the grounds that they've been formulated. However, when you formulate those propositions, beliefs do come into play. So I sort of think that beliefs are pre-linguistic and valuable even if not (fully?) understood.

    (I've actually considered that we substitute belief for understanding - that is, we ignore things we don't quite understand in order to contain doubt enough to render us capable of decisions - people with a greater tolerance for doubt would need less belief [tautology?], and we would be predisposed to defend our beliefs because losing them would render us incapable of decisions. The tolerance for doubt might differ not only by person but also by topic. But all that's even more tentative than the rest of my post.)
  • A question about the liar paradox


    True, you can rephrase this in many ways. What I'm addressing is the connection between syntax and self-reference that TheMadFool is trying to establish here:



    See Number 3.

    The difference between your example and the single-sentence versions lies in the type of reference, I think.

    Your example is endophoric (1. is cataphoric and 2. is anaphoric). The single-sentence versions are exophoric: you reference an object in the real world, which just so happens to be the sentence in question. I'm not sure any of this makes a difference, but if it does, that would be *very* interesting.
  • A question about the liar paradox
    I'll accept that because ''this'' may be defined to self-refer.TheMadFool

    You can rephrase the liar sentence:

    "The sentence I am uttering right now is false."

    "What I'm in the process of saying right now is false."

    What matters is that the subject of the sentence refers to the sentence it occurs in. No single component of the sentence need be self-referential by itself for that to happen.

    I don't understand why you want to define "this" to self-refer.
  • A question about the liar paradox
    But, ''this'' isn't like ''I''. If we stay true to the definition of the word then ''this'' doesn't apply to itself and it should for the liar paradox to be one.

    Of course we could invent a self-referential word e.g. ''thes'' and define it as such and the paradox would appear.

    If one were to be as exact as possible the definition of ''this'' doesn't include self-reference. It is grammatically incorrect (I'm not a linguistic expert).

    However, people do use ''this'' as you have (''this Australian needs a bath'' :D) but note that such forms of language are classified as referring to oneself in the third person. It isn't completely an instance of self-reference. People would find it odd to hear someone refer to himself in the third person.

    So, I still think the liar sentence is grammatically incorrect.

    However, as I mentioned above we could invent a self-referential word like ''thes'' and the liar paradox still is a problem.
    TheMadFool

    "This sentence is false," is only self-referential on the sentence level. "This" on its own refers to nothing at all; it's a determiner in the noun phrase "this sentence", and that noun phrase is also not self-referential (it can't be, because a noun phrase can't be the referent of "this sentence").

    Finally, the syntax can only tell you that "this sentence" refers to a sentence that the speaker indicates. The sentence is not inherently self-referential: you could point to any other false sentence while saying this. There's nothing in the syntax, though, that prevents you from picking the sentence the noun phrase occurs in, making that sentence (but not the noun phrase itself, much less "this" alone) self-referential. The liar sentence is perfectly grammatical, and the syntax is pretty much irrelevant, except that it allows the sentence to have a self-referential interpretation.

    Formally, "This sentence is false," is self-referential under the liar interpretation because the sentence's subject refers to the sentence it is the subject of. For that to work, the subject cannot refer to itself (and thus cannot be self-referential on its own).
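    To make the difference concrete, here's a toy sketch of my own (an illustrative assumption, not anyone's formal semantics): model a sentence's truth as a function that may be handed the containing sentence itself. Exophoric reference - pointing at some *other* sentence - evaluates without trouble; the liar interpretation, where the subject points back at its own sentence, never settles on a truth value:

    ```python
    class Sentence:
        """A sentence whose truth is computed by a function that receives
        the sentence object itself, so the subject *can* point back at it."""
        def __init__(self, evaluate):
            self._evaluate = evaluate

        def is_true(self):
            return self._evaluate(self)

    # Exophoric reference: the subject picks out a different sentence.
    snow_is_white = Sentence(lambda s: True)
    about_another = Sentence(lambda s: not snow_is_white.is_true())
    print(about_another.is_true())  # False - a perfectly ordinary falsehood claim

    # The liar interpretation: the subject picks out its own containing sentence.
    liar = Sentence(lambda s: not s.is_true())
    try:
        liar.is_true()
    except RecursionError:
        print("no stable truth value: evaluation never terminates")
    ```

    The point of the sketch is that nothing in the *construction* of `liar` is ill-formed - it's built exactly like `about_another` - which mirrors the claim above that the liar sentence is grammatical and the paradox lives in the interpretation, not the syntax.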

    The liar sentence is perfectly grammatical.