• Janus
    17.4k
The way a lack of intent affects meaning can be seen by imagining that you see a handwritten note with a poem written on it, stuck on a wall in a bar. You ponder the meaning of the poem, but then someone tells you it was computer generated. That's when you realize you have a reflexive tendency to assume intent when you see or hear language. You may experience cognitive dissonance because the poem seemed to have a profound meaning, all of which was coming from you.

    The problem with using ChatGPT is that it's processing statements that were intentional. It's not just randomly putting words together.
    frank

    In your first paragraph you seem to be saying there is no intent there, and in your second paragraph you seem to be saying there is intent there.
  • Banno
    28.5k
    Charity is basically about attributing intent to the speaker.frank

    Odd that you say
    Charity is basically about attributing intent to the speaker.frank
    and then use, in support of it, a quote that does not mention intent.

    We can indeed use a presumption that the speaker's beliefs are much the same as our own in order to interpret their utterances, and thereby surmise their intent.

    Charity is supposing that others have much the same beliefs as we do.
  • Joshs
    6.3k


    The missing bit is that a description of an intentional state is not a description of a physical state.Banno

    If we’re trying to capture the meaning of a statement and the meaning is encoded in intentional terms, how does switching over to an account in terms of physical states not lose the meaning?
  • frank
    17.9k
    We can indeed use a presumption that the speaker's beliefs are much the same as our own in order to interpret their utterancesBanno

    How would that work? Could you give an example?
  • Banno
    28.5k
    If we’re trying to capture the meaning of a statement and the meaning is encoded in intentional terms,Joshs

    Are we? Davidson's aim is to set out the meaning of some utterance, not to set out folks' intent. Their intent can be quite incidental.

    Davidson's reply is that there is no law-like relation between physical states and intentions.
  • Banno
    28.5k


    Jenny says "the cat is on the mat"

    Jenny often uses "the cat" to talk about Jack, the black cat. So she says things like "The cat's bowl is empty" when Jack's bowl is empty.

    Jenny uses "...is on..." when one thing is physically on another.

    Jenny uses "the mat" to talk about the prayer rug near the door.

    So I offer the following interpretation: Jenny's utterance of "The cat is on the mat" is true if and only if Jack is on the prayer rug (notice the T-sentence).

    I can now proceed to check this interpretation as more information becomes available.

    From this I can also infer that Jenny believes that Jack is on the prayer rug. I might further infer that she intends to scold me for allowing him to do so. But these are post hoc, following on from the interpretation.
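
    Schematically, the T-sentence has the shape of Tarski's Convention T, which Davidson adapts for interpretation:

    ```latex
    % Convention T (schema): for each sentence s of the language L under
    % interpretation, the truth theory entails an instance of
    %   s \text{ is true in } L \iff p
    % where p translates s. The instance offered for Jenny's utterance:
    \text{``The cat is on the mat'' is true in Jenny's idiolect}
      \iff \text{Jack is on the prayer rug}
    ```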
  • Joshs
    6.3k


    Davidson's reply is that there is no law-like relation between physical states and intentionsBanno

    A ‘physical state’ is a certain kind of language game. An intentional state arises within another game. Each offers its own kind of meaning. Davidson seems to be fine with settling for the physical-state language game, without recognizing what he may be missing by excluding the other game.
  • J
    2.1k
    ...A better interpretation...
    — J
    Better for what? Again, no absolute scale is available.
    Banno

    But as we've been discussing, we don't need an absolute scale in order to compare good and better. I'm saying that a literal interpretation of, e.g., the book of Genesis is not as good an interpretation as one that focuses on its metaphorical, mythical, or psychological meanings. If someone wanted to ask into what's "better" about this, I'd start by pointing out how difficult it is to believe things that couldn't be true.
  • frank
    17.9k
    Jenny says "the cat is on the mat"

    Jenny often uses "the cat" to talk about Jack
    Banno

    As you will recall, Davidson focuses on a situation where you don't know the language Jenny is speaking. You don't recognize any of the words. All you get is behavior and the assumption that she believes the same things you do.

    So how did you gather that Jenny uses "the cat" to talk about Jack? What behavior did you observe that caused you to conclude this?
  • Leontiskos
    5k
    But just as "the cat is on the mat" doesn't mean "I am speaking English", it also doesn't mean "I assert that the cat is on the mat".Michael

    But we are asking why, "I assert the cat is on the mat," cannot mean that one is asserting that the cat is on the mat. You are thinking of the claim, "[I assert that] I assert the cat is on the mat," but that too is an arguably different claim.

    So with any such pair, we can assume that there is an implicit assertion or not, and we can identify the explicit assertion with that implicit assertion or not. Again, there is no special rule that tells us how to interpret such a thing.
  • Pierre-Normand
    2.7k
    Having said that, I should also say that I'm not very familiar with how computer programmers talk about their work. Is "inner state" a common term? If so, do you know what they're meaning to designate? Could there be a distinction between inner and outer, speaking strictly about the program?J

    In discussions about LLMs, machine learning, and artificial neural networks, the phrase "inner state" is hardly ever used. However, when the phrase is used to characterize the mental states of human beings—such as thoughts, beliefs, and intentions—it often involves a philosophically contentious understanding of what is "inner" about them. Is it merely a matter of the person having privileged epistemic access to these states (i.e., without observation)? Or is it, more contentiously, a matter of this privileged first-person access being infallible and not needing publicly accessible (e.g., behavioral) criteria at all?

    I think a Rylean/Wittgensteinian understanding of embodied mental life leaves room for the idea of privileged epistemic access, or first-person authority, without making mental states hidden or literally "inner." Such a view amounts to a form of direct-realist, anti-representationalist conception of mind akin to Davidson's: what we refer to when we speak of people's mental states (including our own) is a matter of interpreting the moves that they (and we) are making in language games that take place in the public world (and this world isn't describable independently of our understanding of those games).

    Turning to LLM-based conversational assistants (i.e., current chatbots), although the exact phrase "inner state" is seldom used, the idea that they have literally "internal" representations is seldom questioned, and so a representationalist framework is often assumed. What seems to come closest to a literal "inner state" in an LLM is a contextual embedding. While these embeddings are often explained as "representing" the meaning of words (or tokens) in context, in the deeper layers of a neural network they come to "represent" the contextual meaning of phrases, sentences, paragraphs, or even abstract ideas like "what Kant likely meant in the passage Eric Watkins discussed at the end of his second chapter."

    For what it's worth, I think the idea that contextual embeddings—which are specific vector representations—correspond to or are identical with what an LLM-based assistant "internally" represents to itself is as problematic as the idea of "inner states" applied to human beings. The reason this is problematic is that what determines what LLMs mean by their words is, just as in our case, the sorts of moves they have been trained to make in our shared language games. The content of their contextual embeddings merely plays a role in enabling their capacity to make such moves, just as patterns of activation in our cortical areas (such as Broca's and Wernicke's areas) enable our own linguistic capacities.
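
    To make the notion of a contextual embedding concrete, here is a minimal sketch, assuming the Hugging Face transformers library and the publicly available bert-base-uncased model (the sentences and the helper function are merely illustrative):

    ```python
    # Minimal sketch: the same word receives a different vector in each context,
    # which is what makes the embedding "contextual" rather than a fixed inner item.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")
    model.eval()

    def embedding_of(sentence: str, word: str) -> torch.Tensor:
        """Return the final-layer hidden state for `word` within `sentence`."""
        inputs = tokenizer(sentence, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state[0]  # (seq_len, hidden_dim)
        tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
        return hidden[tokens.index(word)]  # assumes `word` survives as one token

    a = embedding_of("the cat is on the mat", "cat")
    b = embedding_of("the cat command prints a file to the screen", "cat")
    # Same word, different contexts, different vectors:
    print(torch.cosine_similarity(a, b, dim=0).item())  # noticeably below 1.0
    ```

    The sketch only exhibits the first, shallow sense of "representing meaning in context"; the richer sentence- and discourse-level contents I mentioned emerge in deeper layers and cannot be read off this simply.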

    All of this leaves out what seems to me the most salient difference between human beings and chatbots. This difference, I think, isn't most perspicuously highlighted by ascribing only to us the ability to have inner states, form intentions, or make meaningful assertions. It rather stems from the fact that—in part because they are not embodied animals, and in part because they do not have instituted statuses like being citizens, business partners, or family members—chatbots aren't persons. Not having personal stakes in the game radically limits the kinds of roles they can play in our language games and the sorts of moves they can make. We can transact in meanings with them, since they do understand what our words mean, but their words do not have the same significance and do not literally convey assertions, since they aren't backed by a personal stake in our game of giving and asking for reasons (over and above their reinforced inclination to provide useful answers to whoever happens to be their current user).
  • Michael
    16.4k
    But we are asking why, "I assert the cat is on the mat," cannot mean that one is asserting that the cat is on the mat.Leontiskos

    The grammar here is confusing.

    I am claiming these things:

    1. The assertions "the cat is on the mat" and "I assert that the cat is on the mat" mean different things and have different truth conditions, as shown by the fact that the latter can be true even if the former is false.

    2. In asserting "the cat is on the mat" one is asserting that the cat is on the mat.

    Do you object to either of these?
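
    Put schematically, with A(s, p) as informal shorthand for "s asserts that p":

    ```latex
    % Let p abbreviate: the cat is on the mat.
    \text{``the cat is on the mat'' is true} \iff p
    \text{``I assert that the cat is on the mat'' is true} \iff A(\mathrm{speaker}, p)
    % A(speaker, p) can hold while p is false (one can assert a falsehood),
    % so the two sentences have different truth conditions.
    ```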

    So with any such pair, we can assume that there is an implicit assertion or not, and we can identify the explicit assertion with that implicit assertion or not.Leontiskos

    John believes that the cat is on the mat. Jane does not believe that the cat is on the mat.

    John asserts "the cat is on the mat".

    Jane asserts "I disagree".

    Jane is not disagreeing with the implicit assertion "I [John] assert that the cat is on the mat" because Jane agrees that John is asserting that the cat is on the mat. Jane is disagreeing with the explicit assertion "the cat is on the mat". As such, we should not identify the explicit assertion with the implicit assertion.
  • Michael
    16.4k
    I understood Tim to be arguing that it is convention that explains meaning. If that is so, it is hard to see how going against a convention, as in the case of malapropism, can be meaningful.Banno

    I don't think it's mutually exclusive. A malapropism, by definition, is a term used to mean something it doesn't normally mean. The "normal meaning" is explained by convention and the "abnormal meaning" is explained by intention.

    It certainly seems appropriate to tell someone "that's not what the word means" and that they "misspoke".

    As a comparison we could consider a table. We certainly could use it as a seat (and we may sometimes do if there are no chairs available) but its "correct" use is determined by convention; tables aren't seats.
  • Banno
    28.5k
    Davidson seems to be fine with settling for the physical state language game , without recognizing what he may be missing by excluding the other game.Joshs
    What game is he excluding? He gives quite a sophisticated account of intentionality.
  • J
    2.1k
    Really interesting and helpful, thanks.

    Couple of thoughts:

    the most salient difference between human beings and chatbots. . .
    stems from the fact that—in part because they are not embodied animals, and in part because they do not have instituted statuses like being citizens, business partners, or family members—chatbots aren't persons.
    Pierre-Normand

    I agree with you, as it happens, about personhood here, but we have to recognize that many proponents of a more liberal interpretation of "person" are going to regard this as mere stipulation. What, they will ask, does being an embodied animal have to do with personhood? etc. We can't very well just reply, "That's how we've always 'played that game.'" The US Supreme Court changed the game, concerning corporations and persons; why couldn't philosophers?

    My second thought is: Like just about everyone else who talks about AI, you're accepting the fiction that there is something called a chatbot, that it can be talked about with the same kind of entity-language we used for, e.g., humans. I maintain there is no such thing. What there is, is a computer program, a routine, a series of instructions, that as part of its routine can simulate a 1st-person point of view, giving credence to the idea that it "is ChatGPT." I think we should resist this way of thinking and talking. In Gertrude Stein's immortal words, "There's no there there."
  • Joshs
    6.3k


    We can transact in meanings with them, since they do understand what our words mean, but their words do not have the same significance and do not literally convey assertions, since they aren't backed by a personal stake in our game of giving and asking for reasons (over and above their reinforced inclination to provide useful answers to whoever happens to be their current user).Pierre-Normand

    In a letter in Trends in Cognitive Sciences ("LLMs don't know anything: reply to Yildirim and Paul"), Mariel K. Goddu, Alva Noë and Evan Thompson claim the following:

    The map does not know the way home, and the abacus is not clever at arithmetic. It takes knowledge to devise and use such models, but the models themselves have no knowledge. Not because they are ignorant, but because they are models: that is to say, tools. They do not navigate or calculate, and neither do they have destinations to reach or debts to pay. Humans use them for these epistemic purposes. LLMs have more in common with the map or abacus than with the people who design and use them as instruments. It is the tool creator and user, not the tool, who has knowledge.

    Linking empty tokens based on probabilities (even in ways that we are in a position to know does reflect the truth of a given domain, be it a summarization task, physics, or arithmetic) does not warrant attributing knowledge of that domain to the token generator itself.

    We said above that LLMs do not perform any tasks of their own, they perform our tasks. It would be better to say that they do not really do anything at all. Hence the third leap: treating LLMs as agents. However, since LLMs are not agents, let alone epistemic ones, they are in no position to do or know anything.
  • Leontiskos
    5k
    I am claiming these things:

    1. The assertions...
    Michael

    To just assume that we are talking about assertions seems to beg the question of the whole thread. For instance:

    The prefix, however we phrase it - "I hereby assert that..." [...] does seem to iterate naturally.

    ... A sentence is already an assertion sign. [...] How does it end up needing reinforcement?
    bongo fury

    You basically want to stipulate that everything we are talking about is an assertion. You could stipulate that, but it is contrary to the spirit of the thread because it moots the central question of the thread.

    John believes that the cat is on the mat. Jane does not believe that the cat is on the mat.

    John asserts "the cat is on the mat".

    Jane asserts "I disagree".

    Jane is not disagreeing with the implicit assertion "I [John] assert that the cat is on the mat" because Jane agrees that John is asserting that the cat is on the mat. Jane is disagreeing with the explicit assertion "the cat is on the mat". As such, we should not identify the explicit assertion with the implicit assertion.
    Michael

    Good. If this is right then @bongo fury is correct when he says, "A sentence is already an assertion sign."

    So we might then ask why anyone would ever make explicit their asserting. For example:

    • John: "The cat is on the mat."
    • Jane: "Oh, would you like to read some Dr. Seuss?"
    • John: "No, I am asserting that the cat is on the mat."

    It seems that we make the implicit assertion explicit when someone misjudges our intent and thereby misjudges the fact that an implicit assertion is occurring. More generally, we make the species of our act explicit when we wish to clarify the kind of act that we are engaged in.

    Similarly, when someone says, "I hereby assert that...," they are generally broadcasting or communicating the fact of their assertion, and broadcasting/communicating is a bit different from asserting. This is why the flavor of asserting is less applicable to recursivity than, say, the flavor of judging. Recursivity requires a mixture of act and potency, and judgment involves both, whereas assertion really only involves the former. Hence assertion does not have the same degree of self-reflexivity as judgment.
  • J
    2.1k
    The letter you quote from makes an excellent case for why computer programs are not agents in anything like the sense a human is. Do you agree that we should try to avoid using language that appears to reify such programs as 1st-person entities? (or however you might phrase the latter idea)
  • Joshs
    6.3k
    ↪Joshs The letter you quote from makes an excellent case for why computer programs are not agents in anything like the sense a human is. Do you agree that we should try to avoid using language that appears to reify such programs as 1st-person entities? (or however you might phrase the latter idea)J

    Absolutely. I think of them as appendages or human-built niches, like a nest to a bird or a web to a spider.
  • Michael
    16.4k
    To just assume that we are talking about assertions seems to beg the question of the whole thread.

    ...

    You basically want to stipulate that everything we are talking about is an assertion. You could stipulate that, but it is contrary to the spirit of the thread because it moots the central question of the thread.
    Leontiskos

    I'm not.

    You told me that we were talking about assertions, and asked me about two such assertions:

    But what if we actually spoke about assertions rather than circumlocutions that may or may not indicate assertion? What about:

    "The cat is on the mat."
    "I assert the cat is on the mat."
    Leontiskos
  • Leontiskos
    5k
    and asked me about two such assertions:Michael

    No, I said we should talk about phrases like, "I assert the cat is on the mat," rather than, "I think the cat is on the mat." It doesn't mean either of the quotations is itself an assertion. It means we are talking about 'asserting' rather than 'thinking'.

    Again, if everything in question is stipulated to be an assertion then the whole question of the OP is mooted.
  • Michael
    16.4k


    Then it is still as I said from the start. The phrases "the cat is on the mat" and "I assert that the cat is on the mat" mean different things and have different truth conditions, given that the latter can be true even if the former is false.
  • Banno
    28.5k
    As you will recall, Davidson focuses on a situation where you don't know the language Jenny is speaking. You don't recognize any of the words. All you get is behavior and the assumption that she believes the same things you do.frank
    In the extreme case, yep.

    So how did you gather that Jenny uses "the cat" to talk about Jack? What behavior did you observe that caused you to conclude this?frank
    Extended empirical observation of Jenny's behaviour within the community in which she participates. Watching her pet the cat, buy cat food, chastise someone for not chasing the cat off the mat. A Bayesian analysis of behavioural patterns, perhaps, although we don't usually need to go so far in order to recognise patterns in the behaviour of others.

    The interpreter assumes that Jenny and the others in her community have much the same beliefs as the interpreter - that there are cats, bowls, mats, and so on to talk about.
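
    As a toy version of that Bayesian analysis, here is a minimal sketch; the hypotheses, prior, and likelihoods are all invented for illustration:

    ```python
    # Toy Bayesian update over two hypothetical readings of Jenny's "the cat":
    #   H1: "the cat" refers to Jack (the black cat)
    #   H2: "the cat" refers to some other cat
    # All numbers are invented for illustration.
    prior = {"H1": 0.5, "H2": 0.5}

    # P(observation | hypothesis) for one observation: Jenny says
    # "the cat looks hungry" while filling Jack's bowl.
    likelihood = {"H1": 0.8, "H2": 0.2}

    evidence = sum(prior[h] * likelihood[h] for h in prior)
    posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}
    print(posterior)  # {'H1': 0.8, 'H2': 0.2}: credence shifts toward Jack
    ```

    Iterate the update over many such observations and the interpretation stabilises, though as I said, we don't usually need anything so explicit.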
  • Banno
    28.5k
    I don't think it's mutually exclusive.Michael
    I don't disagree. Although asking someone to "dance the flamingo" need not be an intentional malapropism, and yet can still be understood as a request to dance.

    The issue is, which is to be master? We have the convention of treating tables differently to seats, but before we can do that, we needs must understand which is a table and which a seat. That's an interpretation.

    If someone asks us to "dance the flamingo" we must first interpret it as a request to dance the flamenco in order to recognise it as a malapropism. Recognising the breach of convention requires first recognising the convention - and hence first interpreting the utterance as an (illegitimate) instance of the convention.

    So it's not that language does not make use of convention, but that the recognition of convention is itself dependent on interpretation.
  • Leontiskos
    5k


    This is the same issue, for when you say that they "have different truth conditions," you are implying that they are both assertions. A locution intended to broadcast/communicate does not have a truth condition in the way that an assertion has a truth condition.
  • Pierre-Normand
    2.7k
    My second thought is: Like just about everyone else who talks about AI, you're accepting the fiction that there is something called a chatbot, that it can be talked about with the same kind of entity-language we used for, e.g., humans. I maintain there is no such thing. What there is, is a computer program, a routine, a series of instructions, that as part of its routine can simulate a 1st-person point of view, giving credence to the idea that it "is ChatGPT." I think we should resist this way of thinking and talking. In Gertrude Stein's immortal words, "There's no there there."J

    I don't quite agree with this, or with the position claimed by Goddu, Noë and Thompson in the passage quoted by @Joshs (although I'm sympathetic with the embodied and enactive cognition stances of Noë and Thompson, regarding human beings and animals.) Those skeptical positions seem to me to rest on arguments that are overly reductionistic because they are insensitive to the distinction of levels between enabling mechanisms and molar behaviors, and, as a result, misconstrue what kinds of entities AI chatbots are (or what kinds of acts their "outputs" are). I don't want to argue for this in the present thread, though (but I could do so elsewhere), since this isn't tied enough to the OP topic of assertions. I had only wished to highlight the one specific respect—personhood—in which I do agree AI chatbots don't really make assertions with the same sort of significance human beings do. I may comment a bit more on the issue of personhood as an instituted status, and what some Supreme Court might or might not be able to rule, since you raised this pertinent question, later on.
  • Wayfarer
    25.2k
    A salient essay on Philosophy Now at present: "Rescuing Mind from Machines" by Vincent J. Carchidi.
  • J
    2.1k
    I may comment a bit more on the issue of personhood as an instituted status, and what some Supreme Court might or might not be able to rule, since you raised this pertinent question, later on.Pierre-Normand

    I hope you do -- always interested in your thoughts. And about the ontology of chatbots as well.
  • frank
    17.9k
    Extended empirical observation of Jenny's behaviour within the community in which she participates. Watching her pet the cat, buy cat food, chastise someone for not chasing the cat off the mat. A Bayesian analysis of behavioural patterns, perhaps, although we don't usually need to go so far in order to recognise patterns in the behaviour of others.

    The interpreter assumes that Jenny and the others in her community have much the same beliefs as the interpreter - that there are cats, bowls, mats, and so on to talk about.
    Banno

    I went a few steps down the rabbit hole of determining what role Davidson meant attribution of intentions to play in radical interpretation. I think the answer is that he left it unclear what evidence suffices for interpretation. This lack of clarity echoes his overall view of intention. He apparently travelled through a reductionist phase, eventually landing in acceptance of intentions due to the problem of thwarted efforts.

    An example: say Pedro has decided to climb Mt Everest, but along the way he gets lost, runs out of O2, and dies. A reductionist view would say we should conclude that Pedro intended to get lost and die. That's absurd, though. We all know he intended to make it to the top. He held that intention, but just didn't quite make it. Intentions do not reduce to actions.

    So what would the older Davidson, the one who decided that we can't be reductionist about intentions, say about how they figure in radical interpretations? I don't know.
  • Michael
    16.4k
    This is the same issue, for when you say that they "have different truth conditions," you are implying that they are both assertions.Leontiskos

    I don't think I am. I'm sure many philosophers of language will say that sentences have truth values even if not asserted.