• InPitzotl
    You seem to be contradicting yourself.
    Daemon
    I'm pretty sure if you understood what I was saying, you would see there's no contradiction. So if you are under the impression there's a contradiction, you're missing something.
    The other day you had a robot understanding things, now you say a computer doesn't know what a banana is.
    Daemon
    the CAT tool still wouldn't know what a banana is.
    InPitzotl
    Your CAT tool doesn't interact with bananas.
    I've been saying from the start that computers don't do things (like calculate, translate), we use them to do those things.
    Daemon
    What is this "do" you're talking about? I program computers, I go to the store and buy bananas, I generate a particular body temperature, I radiate in the infrared, I tug the planet Jupiter using a tiny force, I shake when I have too much coffee, I shake when I dance... are you talking about all of this, or just some of it?

    I assume you're just talking about some of these things... so what makes the stuff I "do" when I do it, what I'm "doing", versus stuff I'm "not doing"? (ETA: Note that experience cannot be the difference; I experience shaking when I have coffee just as I experience shaking when I dance).
  • Daemon
    Both Siri and the kind stranger (seem to have) understood my question. A mini Turing Test.
    TheMadFool

    Yes. The Stanford Encyclopedia says that the Turing Test was initially suggested as a means to determine whether a machine can think. But we know how Siri works, and we know that it's not thinking in the way we think.

    When we give directions to a hotel we use a mental map based on our experiences.

    Siri uses a map based on our experiences. Not Siri's experiences. Siri doesn't have experiences. You know that, right?

    Me (to a stranger): Sir, can you give me the directions to the nearest hotel?

    Stranger (to me): Yeah, sure. Take this road and turn left at the second junction. There's a hotel there, a good one.
    TheMadFool

    Because the stranger understands the question in a way Siri could not, he is able to infer that you have requirements which your words haven't expressed. You aren't just looking for the nearest hotel, he thinks you will also want a good one. And he knows (or thinks he knows) what a good one is. Because of his experience of the world.

    That's what it's like when we think. We understand what "good" means, because we have experienced pleasure and pain, frustration and satisfaction.
  • Daemon
    Your CAT tool doesn't interact with bananas.
    InPitzotl

    But neither does a robot.

    What is this "do" you're talking about? I program computers, I go to the store and buy bananas, I generate a particular body temperature, I radiate in the infrared, I tug the planet Jupiter using a tiny force, I shake when I have too much coffee, I shake when I dance... are you talking about all of this, or just some of it?InPitzotl

    I'm talking about acting as an agent. That's something computers and robots can't do, because they aren't agents. We don't treat them as agents. When your robot runs amok in the supermarket and tears somebody's head off, it won't be the robot that goes to jail. If some code you write causes damage, it won't be any good saying "it wasn't me, it was this computer I programmed". I think you know this, really.

    They aren't agents because they aren't conscious, in other words they don't have experience.
  • InPitzotl
    But neither does a robot.
    Daemon
    You seem to be contradicting yourself.
    Daemon
    Just to remind you what you said exactly one post prior. Of course the robot interacts with bananas. It went to the store and got bananas.

    What you really mean isn't that the robot didn't interact with bananas, but that it "didn't count". You think I should consider it as not counting because this requires more caution. But I think you're being "cautious" in the wrong direction... your notions of agency fail. To wit, you didn't even appear to see the question I was asking (at the very least, you didn't reply to it) because you were too busy "being careful"... odd that?

    I'm not contradicting myself, Daemon. I'm just not laden with your baggage.
    What is this "do" you're talking about? I program computers, I go to the store and buy bananas, I generate a particular body temperature, I radiate in the infrared, I tug the planet Jupiter using a tiny force, I shake when I have too much coffee, I shake when I dance... are you talking about all of this, or just some of it?InPitzotl
    ...this is what you quoted. This was what the question actually was. But you didn't answer it. You were too busy "not counting" the robot:
    They aren't agents because they aren't conscious, in other words they don't have experience.
    Daemon
    I'm conscious. I experience... but I do not agentively do any of those underlined things.

    I do not agentively generate a particular body temperature, but I'm conscious, and I experience. I do not agentively radiate in the infrared... but I'm conscious, and experience. I do not agentively shake when I have too much coffee (despite agentively drinking too much coffee), but I'm conscious, and experience. I even am an agent, but I do not agentively do those things.

    There's something that makes what I agentively do agentive. It's not being conscious or having experiences... else why aren't all of these underlined things agentive? You're missing something, Daemon. The notion that agency is about being conscious and having experience doesn't work; it fails to explain agency.
    When your robot runs amok in the supermarket and tears somebody's head off, it won't be the robot that goes to jail.
    Daemon
    Ah, how human-centric... if a tiger runs amok in the supermarket and tears someone's head off, we won't send the tiger to jail. Don't confuse agency with personhood.
    If some code you write causes damage, it won't be any good saying "it wasn't me, it was this computer I programmed".
    Daemon
    If I let the tiger into the shop, I'm morally culpable for doing so, not the tiger. Nevertheless, the tiger isn't acting involuntarily. Don't confuse agency with moral culpability.
    I think you know this, really.
    Daemon
    I think you're dragging a lot of baggage into this that doesn't belong.
  • Daemon
    That isn't a difficult question. Only conscious entities can be agentive, but not everything conscious entities do is agentive.
  • InPitzotl
    That isn't a difficult question.
    Daemon
    So answer it.

    The question is, why is it agentive to shake when I dance, but not to shake when I drink too much coffee? And this:
    Only conscious entities can be agentive, but not everything conscious entities do is agentive.
    Daemon
    ...doesn't answer this question.

    ETA: This isn't meant as a gotcha btw... I've been asking you for several posts to explain why you think consciousness and experience are required. This is precisely the place where we disagree, and where I "seem to" be contradicting myself (your words). The crux of this contradiction, btw, is that I'm not being "careful" as is "required" by such things. I'm trying to dig into what you're doing a bit deeper than this hand waving.

    I'm interpreting your "do"... that was your point... as being a reference to individuality and/or agency. So tell me what you think agency is (or correct me about this "do" thing).
  • Daemon
    It might be better to take a clearer case, as your drinking the coffee is agentive, which muddies the water a little. So we'll ask "why is it agentive when you go to the store for bananas, but not agentive when you exert a tiny gravitational pull on Jupiter?"

    Agency is the capacity of an actor to act. Agency is contrasted to objects reacting to natural forces involving only unthinking deterministic processes.
  • Daemon
    What you really mean isn't that the robot didn't interact with bananas, but that it "didn't count".
    InPitzotl

    What I really mean is that it didn't interact with anything in the way we interact with things. It doesn't see, feel, hear, smell or taste bananas, and that is how we interact with bananas.
  • InPitzotl
    It might be better to take a clearer case, as your drinking the coffee is agentive, which muddies the water a little.
    Daemon
    I've no idea why you think it muddies the water... I think it's much clearer to explain why shaking after drinking coffee isn't agentive yet shaking while I dance is. Such an explanation gets closer to the core of what agency is. Here (shaking because I'm dancing vs shaking because I drank too much coffee) we have the same action, or at least the same description of the action; but in one case it is agentive, and in the other case it is not.
    Agency is the capacity of an actor to act. Agency is contrasted to objects reacting to natural forces involving only unthinking deterministic processes.
    Daemon
    Agentive action is better thought of IMO as goal directed than merely as "thought". In a typical case an agent's goal, or intention, is a world state that the agent strives to attain. When acting intentionally, the agent is enacting behaviors selected from schemas based on said agent's self models; as the act is carried out, the agent utilizes world models to monitor the action and tends to accommodate the behaviors in real time to changes in the world models, which implies that the agent is constantly updating the world models including when the agent is acting.

    This sort of thing is involved when I shake, while I'm dancing. It is not involved when I shake, after having drunk too much coffee. Though in the latter case I still may know I'm shaking, by updating world models; I'm not in that case enacting the behavior of shaking by selecting schemas based on my self models in order to attain a goal of shaking. In contrast, in the former case (shaking because I'm dancing), I am enacting behaviors by selecting schemas based on my self model in order to attain the goal of shaking.
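
    To make that loop concrete, here is a minimal Python sketch of the kind of cycle being described (a toy illustration only; observe, schemas, predict, goal_attained, and enact are made-up placeholders rather than any real system's code):

    # Toy sketch of a goal-directed loop: select a behavior by its predicted
    # consequences, enact it, and keep updating the world model while acting.
    def agentive_act(observe, schemas, predict, goal_attained, enact, max_steps=100):
        world_model = observe()                          # world model built from observation
        for _ in range(max_steps):
            if goal_attained(world_model):               # the intended world state is attained
                return True
            # select the schema whose predicted outcome best serves the goal
            behavior = max(schemas, key=lambda s: predict(world_model, s))
            enact(behavior)                              # carry out the selected behavior
            world_model = observe()                      # accommodate in real time: re-observe, update
        return False

    The point of the sketch is only the shape of the loop: selection by predicted consequence, then deference to fresh observation.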

    So, does this sound like a fair description of agency to you? I am specifically describing why shaking because I've had too much coffee isn't agentive while shaking because I'm dancing is.
  • Daemon
    Agentive action is better thought of IMO as goal directed than merely as "thought".
    InPitzotl

    But where do the goals come from, if not from "mere thought"?
  • GraveItty
    "The important question of what understanding is."

    Why classify this question as important? It is as important or unimportant as any other question. I'm not trying to troll or provoke here, but what if you have the answer? Then you understand what it is. Does that make people more understanding? Knowing what understanding is presupposes a theoretical framework to place the understanding in. Understanding this is more important than knowing what understanding is inside this framework. Of course it's important to understand people. It connects us. Makes us love and hate. The lack of it can give rise to loneliness. Though it's not a sine qua non for a happy life. You can be in love with someone you can't talk to because of a language barrier. Though harder, understanding will find a way. No people are completely non-understandable. Only truly irrational ones, but such people are mostly imaginary, although I can throw a stone over the water without any reason. You can push the importance of understanding, but at the same time non-understanding is important as well. Like I said, knowing the nature of understanding doesn't help in the understanding itself. It merely reformulates it and puts it in an abstract formal scheme, doing injustice to the frame-dependent content. It gives no insight into the nature of understanding itself.
  • InPitzotl
    But where do the goals come from, if not from "mere thought"?
    Daemon
    In terms of explaining agentive acts, I don't think we care. I don't have to answer the question of what my cat is thinking when he's following me around the house. It suffices that his movements home in on where I'm going. That is agentive action. Now, I don't think all directed actions are agentive... a heat seeking missile isn't really trying to attain a goal in an agentive way... but the proper question to address is what constitutes a goal, not what my cat is thinking that leads him to follow me.

    My cat is an agent; his eyes and ears are attuned to the environment in real time, from which he is making a world model to select his actions from schemas, and he is using said world models to modulate his actions in a goal directed way (he is following me around the house). I wouldn't exactly say my cat is following me because he is rationally deliberating about the world... he's probably just hungry. I'm not sure if what my cat is doing when setting the goal can be described of as thought; maybe it can. But I don't really have to worry about that when calling my cat an agent.
  • Daemon
    But where do the goals come from, if not from "mere thought"? — Daemon

    In terms of explaining agentive acts, I don't think we care. I don't have to answer the question of what my cat is thinking when he's following me around the house. It suffices that his movements home in on where I'm going. That is agentive action. Now, I don't think all directed actions are agentive... a heat seeking missile isn't really trying to attain a goal in an agentive way.
    InPitzotl

    But a robot buying bananas is?
  • InPitzotl
    But a robot buying bananas is?
    Daemon
    Why not?

    But I want you to really answer the question, so I'm going to carve out a criterion. Why am I wrong to say the robot is being agentive? And the same goes in the other direction... why are you not wrong about the cat being agentive? Incidentally, it's kind of the same question. I think it's erroneous to say my cat's goal of following me around the house was based on thought.

    Incidentally, let's be honest... you're at a disadvantage here. You keep making contested points... like the robot doesn't see (in the sense that it doesn't experience seeing); I've never confused the robot for having experiences, so I cannot be wrong by a confusion I do not have. But you also make wrong points... like that agentive goals require thought (what was my cat thinking, and why do we care about it?).
  • Daemon
    Agentive action is better thought of IMO as goal directed than merely as "thought". In a typical case an agent's goal, or intention, is a world state that the agent strives to attain.
    InPitzotl

    You're wrong because the robot doesn't have a goal. We have a goal, for the robot.
  • InPitzotl
    You're wrong because the robot doesn't have a goal.
    Daemon
    Ah, finally... the right question. But why not?

    Be precise... it's really the same question both ways. What makes it the case that the robot doesn't have a goal, and what, by contrast, makes it the case that my cat does?
  • Daemon
    The cat wants something. The robot is not capable of wanting. The heat seeking missile is not capable of wanting.
  • InPitzotl
    The cat wants something.
    Daemon
    I'm not sure what "want" means to the precision you're asking. The implication here is that every agentive action involves an agent that wants something. Give me some examples... my cat sits down and starts licking his paw. What does my cat want that drives him to lick his paw? It sounds a bit anthropomorphic to say he "wants to groom" or "wants to clean himself".

    But it sounds True Scotsmanny to say my cat wants to lick his paws in any sense other than that he enacted this behavior and is now trying to lick his paws. If there is such another sense, what is it?

    And why should I care about it in terms of agency? Are cats, people, dragonflies, or anything else capable of "trying to do" something without "wanting" to do something? If so, why should that not count as agentive? If not, then apparently the robot can do something we "agents" cannot... "try to do things" without "wanting" (whatever that means), and I still ask why it should not count as agentive.
    The robot is not capable of wanting.
    Daemon
    The robot had better be capable of "trying to shop and get bananas", or it's never going to pull it off.
  • Varde
    Understanding is what you are given whilst knowing is what you give, concerning intellect.

    "An apple can be red" is an intellectual statement, and I am giving it to you.

    I am given a kiwi, and hypothetically I know nothing about it, I have nothing to give, but I understand it, in so much as I have a sense of it.
  • Daemon
    I'm not sure what "want" means to the precision you're asking.
    InPitzotl

    Because you're an experiencing being you know what "want" means. Like the cat you know what it feels like to want food, or attention.

    This is the Philosophy of Language forum. My contention is that a computer or a robot cannot understand language, because understanding requires experience, which computers and robots lack.

    Whether a robot can "try" to do something is not a question for Philosophy of Language. But I will deal with it in a general way, which covers both agency and language.

    In ordinary everyday talk we all anthropomorphise. The thermostat is trying to maintain a temperature of 20 degrees. The hypothalamus tries to maintain a body temperature around 37 degrees. The modem is trying to connect to the internet. The robot is trying to buy bananas. But this is metaphorical language.

    The thermostat and the robot don't try to do things in the way we do. They are not even individual entities in the way we are. It's experience that makes you an individual.
  • InPitzotl
    It's experience that makes you an individual.
    Daemon
    No, being agentively integrated is what makes me (and you) an individual. We might say you're an individual because you are "of one mind".

    For biological agents such as ourselves, it is dysfunctional not to be an individual. We would starve to death as Buridan's asses; we would waste energy if we all had Alien Hand Syndrome. Incidentally, a person with AHS is illustrative of an entity where the one-mindedness breaks down... the "alien" so to speak in AHS is not the same individual. Nevertheless, an alien hand illustrates very clear agentive actions, and suggests experiencing, which draws in turn a question mark over your notion that it's the experiencing that makes you an individual.
    In ordinary everyday talk we all anthropomorphise. The thermostat is trying to maintain a temperature of 20 degrees. The hypothalamus tries to maintain a body temperature around 37 degrees. The modem is trying to connect to the internet. The robot is trying to buy bananas. But this is metaphorical language.
    Daemon
    Not in the robot case. This is no mere metaphor; it is literally the case that the robot is trying to buy bananas.

    Imagine doing this with a wind up doll (the rules are, the wind up doll can do any choreography you want, but it only does that one thing when you wind it up... so you have to plan out all movements). If you tried to build a doll to get the bananas, you would never pull it off. The slightest breeze turning it the slightest angle would make it miss the bananas by a mile; it'd be lucky to even make it to the store... not to mention the fact that other shoppers are grabbing random bananas while stockers are restocking with bananas in random places, shoppers are constantly walking in the way, and what not.

    Now imagine all of the possible ways the environment can be rearranged to thwart the wind up doll... the numbers here are staggeringly huge. Among all of these possible ways not to get a banana, the world is, and will evolve to be during the act of shopping, some particular way. There does exist some choreography of the wind up doll for this particular way that would manage to make it in, and out, of the store with the banana in hand (never mind that we expect a legal transaction to occur at the checkout). But there is effectively no way you can predict the world beforehand to build your wind up doll.

    So if you're going to build a machine that makes it out of the store with bananas in hand with any efficacy, it must represent the telos of doing this; it must discriminate relevant pieces of the environment as they unpredictably change; it must weigh this against the telos representation; and it must use this to drive the behaviors being enacted in order to attain the telos. A machine that is doing this is doing exactly what the phrase "trying to buy bananas" conveys.
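
    To spell the contrast out in code (again a toy Python sketch; the perception and actuation helpers are invented names, not a real robot's API):

    # Wind-up doll: open loop. The entire choreography is fixed before the world is seen.
    def wind_up_doll(choreography, execute):
        for step in choreography:
            execute(step)    # no observation, no correction: one stray breeze and every
                             # later step is aimed at a world that no longer exists

    # Banana-fetcher: closed loop. Each step is chosen against the world as it actually is.
    def fetch_bananas(observe, step_toward_bananas, execute, bananas_in_hand, max_steps=10_000):
        for _ in range(max_steps):
            world = observe()                        # pick out the relevant, changing state
            if bananas_in_hand(world):               # the telos is attained
                return True
            execute(step_toward_bananas(world))      # weigh the state against the telos and act on it
        return False

    The difference isn't cleverness; it's that the second loop keeps deferring to what the world actually turns out to be.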

    I'm not projecting anything onto the robot that isn't there. The robot isn't conscious. It's not experiencing. What I'm saying is there is something that has to genuinely be there; it's virtually a derived requirement of the problem spec. You're not going to get bananas out of a store by using a coil of two metals.
    My contention is that a computer or a robot cannot understand language, because understanding requires experience, which computers and robots lack.
    Daemon
    And my contention has been throughout that you're just adding baggage on.

    Let's grant that understanding requires experience, and grant that the robot I described doesn't experience. And let's take that for a test run.

    Suppose I ask Joe (human) if he can get some bananas on the way home, and he does. Joe understands my English request, and Joe gets me some real bananas. But if I ask Joe to do this in Dutch, Joe does not understand, so he would not get me real bananas. If I ask my robot, my robot doesn't understand, but can get me some real bananas. But if I ask my robot to do this in Dutch, my robot doesn't understand, so cannot get me real bananas. So Joe real-understands my English request, and can real-comply. The robot fake-understands it but can real-comply. Joe doesn't real-understand my Dutch request, so cannot real-comply. The robot doesn't fake-understand my Dutch request but this time cannot real-comply. Incidentally, nothing will happen if I ask my cable modem or my thermostat to get me bananas in English or Dutch.

    So... this is the nature of what you're trying to pitch to me, and I see something really weird about it. Your experience theory is doing no work here. I just have to say the robot doesn't understand because it's definitionally required to say it, but somehow I still get bananas if it doesn't-understand-in-English but not if it doesn't-understand-in-Dutch, just like with Joe, but that similarity doesn't count because I just have to acknowledge Joe experiences whereas the robot doesn't, even though I'm asking neither to experience but just asking for bananas. It's as if meaning isn't about meaning any more; it's about meaning with experiences. Meaning without experiences cannot be meaning, even though it's exactly like meaning with experiences save for the definitive having the experiences part.

    Daemon! Can you not see how this just sounds like some epicycle theory? Sure, the earth being still and the center of the universe works just fine if you add enough epicycles to the heavens, but what's this experience doing for me other than just mucking up the story of what understanding and agency means?

    That is what I mean by baggage, and you've never justified this baggage.
  • Daemon
    Thanks for the thoughtful and thought-provoking response InPitzotl.

    We might say you're an individual because you are "of one mind".
    InPitzotl

    That's broadly what I meant when I said that it's experience that makes you an individual, but you seem to think we disagree.

    For me "having a mind" and "having experiences" are roughly the same thing. So I could say having a mind makes you an individual, or an entity, or a person. Think about the world before life developed: there were no entities or individuals then.

    The robot is trying to buy bananas. But this is metaphorical language. — Daemon

    Not in the robot case. This is no mere metaphor; it is literally the case that the robot is trying to buy bananas.
    InPitzotl

    Suppose instead of buying bananas we asked the robot to control the temperature of your central heating: would you say the thermostat is only metaphorically trying to control the temperature, but the robot is literally trying? Could you say why, or why not?

    For me, "literally trying" is something only an entity with a mind can do. They must have some goal "in mind".

    It's as if meaning isn't about meaning any more; it's about meaning with experiences. Meaning without experiences cannot be meaning, even though it's exactly like meaning with experiences save for the definitive having the experiences part.
    InPitzotl

    The Oxford English Dictionary's first definition of "mind" is: "The element of a person that enables them to be aware of the world and their experiences".

    The word "meaning" comes from the same Indo-European root as the word "mind". Meaning takes place in minds.

    The robot has no mind in which meaning could take place. Any meaning in the scenario with the bananas is in our minds. The robot piggybacks on our understanding, which is gained from experience. If nobody had experienced bananas and shops and all the rest, we couldn't program the robot to go and buy them.
  • InPitzotl
    That's broadly what I meant when I said that it's experience that makes you an individual, but you seem to think we disagree.
    Daemon
    My use of mind here is metaphorical (a reference to the idiom "of one mind").

    Incidentally, I think we do indeed agree on a whole lot of stuff... our conceptualization of these subjects is remarkably similar. I'm just not discussing those pieces ;)... it's the disagreements that are interesting here.
    Think about the world before life developed: there were no entities or individuals then.
    Daemon
    I don't think this is quite our point of disagreement. You and I would agree that we are entities. You also experience your individuality. I'm the same in this regard; I experience my individuality as well. Where we differ is that you think your experience of individuality is what makes you an individual. I disagree. I am an individual for other reasons; I experience my individuality because I sense myself being one. I experience my individuality like I see an apple; the experience doesn't make the apple, it just makes me aware of the apple.

    Were "I" to have AHS, and my right hand were the alien hand, I would experience this as another entity moving my arm. In particular, the movements would be clearly goal oriented, and I would pick up on this as a sense of agency behind the movement. I would not in this particular condition have a sense of control over the arm. I would not sense the intention of the arm through "mental" means; only indirectly through observation in a similar manner that I sense other people's intentions. I would not have a sense of ownership of the movement.
    Suppose instead of buying bananas we asked the robot to control the temperature of your central heating: would you say the thermostat is only metaphorically trying to control the temperature, but the robot is literally trying?
    Daemon
    Yes; the thermostat is only metaphorically trying; the robot is literally trying.
    Could you say why, or why not?
    Daemon
    Sure.

    Consider a particular thermostat. It has a bimetallic coil in it, and there's a low knob and a high knob. We adjust the low knob to 70F, and the high to 75F. Within range nothing happens. As the coil expands, it engages the heating system. As the coil contracts, it engages the cooling system.

    Now introduce the robot and/or a human into a different environment. There is a thermometer on the wall, and a three way switch with positions A, B, and C. A is labeled "cool", B "neutral", and C "heat".

    So we're using the thermostat to maintain a temperature range of 70F to 75F, and it can operate automatically after being set. The thermostat should maintain our desired temperature range. But alas, I have been a bit sneaky. The thermostat should maintain that range, but it won't... if you read my description carefully you might spot the problem. It's miswired. Heat causes the coil to expand, which then turns on the heating system. Cold causes the coil to contract, which then turns on the cooling system. Oops! The thermostat in this case is a disaster waiting to happen; when the temperature goes out of range, it will either max out your heating system, or max out your cooling system.

    Of course I'm going for an apples to apples comparison, so just as the thermostat is miswired, the switches are mislabeled. A actually turns on the heating system. C engages the cooling system. The human and the robot are not guaranteed to find out the flaw here, but they have a huge leg up over the thermostat. There's a fair probability that either of these will at least return the switch to the neutral position before the heater/cooler maxes out; and there's at least a chance that both will discover the reversed wiring.

    This is the "things go wrong" case, which highlights the difference. If the switches were labeled and the thermostat were wired correctly, all three systems would control the temperature. It's a key feature of agentive action to not just act, but to select the action being enacted from some set of schemas according to their predicted consequences in accordance with a teleos; to monitor the enacted actions; to compare the predicted consequences to the actual consequences; and to make adjustments as necessary according to the actuals in concordance with attainment. This feature makes agents tolerant against the unpredicted. The thermostat is missing this.
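
    As a toy Python sketch of that difference (read_temp and set_switch are stand-ins for the thermometer and the three-way switch; the range and labels are the ones from the example above):

    LOW, HIGH = 70.0, 75.0   # desired range, degrees F

    def thermostat_output(coil_expanded):
        # The thermostat: a fixed mapping from coil state to output. Miswired,
        # it maxes out the wrong system, and nothing in it can notice.
        return "heat" if coil_expanded else "cool"

    def agent_control(read_temp, set_switch, steps=1000):
        # Believed effect of each switch position, taken from its label.
        effect = {"A": -1.0, "B": 0.0, "C": +1.0}
        for _ in range(steps):
            before = read_temp()
            if LOW <= before <= HIGH:
                set_switch("B")                                    # in range: sit at neutral
                continue
            want = +1.0 if before < LOW else -1.0                  # which way the temperature needs to move
            action = max(effect, key=lambda a: effect[a] * want)   # action predicted to help most
            set_switch(action)
            after = read_temp()
            if (after - before) * want < 0:                        # the actual contradicts the prediction
                # defer to the actual: conclude the labels are reversed and revise the model
                effect["A"], effect["C"] = effect["C"], effect["A"]

    The revision rule here is specific to this toy setup (it assumes the only possible fault is a reversal), but the monitor-compare-adjust step is exactly what the thermostat has no analogue of.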

    ETA:
    The word "meaning" comes from the same Indo-European root as the word "mind". Meaning takes place in minds.Daemon
    The word "atom" comes from the Latin atomus, which is an indivisible particle, which traces to the Greek atomos meaning indivisible. But we've split the thing. The word "oxygen" derives from the Greek "oxys", meaning sharp, and "genes", meaning formation; in reference to the acidic principle of oxygen (formation of sharpness aka acidity)... which has been abandoned.

    Meaning is about intentionality. In regard to external world states, intentionality can be thought of as deferring to the actual. This is related to the part of agentive action which not only develops the model of world states from observation, and uses that model to align actions to attain a goal according to the predictions the model gives, but observes the results as the actions take place and defers to the observations in contrast to the model. In this sense the model isn't merely "about" itself, but "about" the observed thing. That is intentionality. Meaning takes place in agents.
  • Joshs
    Meaning is about intentionality. In regard to external world states, intentionality can be thought of as deferring to the actual. This is related to the part of agentive action which not only develops the model of world states from observation, and uses that model to align actions to attain a goal according to the predictions the model gives, but observes the results as the actions take place and defers to the observations in contrast to the model. In this sense the model isn't merely "about" itself, but "about" the observed thing. That is intentionality. Meaning takes place in agents.
    InPitzotl

    This may be off topic, but that's one definition of intentionality, not the phenomenological one. In phenomenology, objects are given through a mode of givenness, so the model participates in co-defining the object.
  • InPitzotl
    This may be off topic, but that's one definition of intentionality, not the phenomenological one.
    Joshs
    I don't think it's off topic, but just to clarify, I am not intending to give this as a definition of intentionality. I'm simply saying there's aboutness here.
  • Daemon
    This feature makes agents tolerant against the unpredicted. The thermostat is missing this.
    InPitzotl

    I've looked at this many times, and thought about it, but I just can't see why you think it is significant. Why do you think being tolerant against the unpredicted makes something an agent, or why do you think it means that the robot is trying?

    But also there's a more fundamental point that I don't believe you have addressed, which is that a robot or a computer is only an "entity" or a "system" because we choose to regard it that way. The computer or robot is not intrinsically an entity, in the way you are.

    I was thinking about this just now when I saw this story "In Idyllwild, California a dog ran for mayor and won and is now called Mayor Max II".

    The dog didn't run for mayor. The town is "unincorporated" and doesn't have its own local government (and therefore doesn't have a mayor). People are just pretending that the dog ran for mayor.

    In a town which was incorporated and did have a mayor, a dog would not be permitted to run for office.
  • InPitzotl
    I've looked at this many times, and thought about it, but I just can't see why you think it is significant.
    Daemon
    I think you took something descriptive as definitive. What is happening here that isn't happening with the thermostat is deference to world states.
    But also there's a more fundamental point that I don't believe you have addressed, which is that a robot or a computer is only an "entity" or a "system" because we choose to regard it that way.
    Daemon
    You're just ignoring me then, because I did indeed address this.

    You've offered that experience is what makes us entities. I offered AHS as demonstrating that this is fundamentally broken. An AHS patient behaves as multiple entities. Normative human subjects by contrast behave as one entity per body. Your explanation simply does not work for AHS patients; AHS patients do indeed experience, and yet, they behave as multiple entities. The defining feature of AHS is that there is a part of a person that seems to be autonomous yet independent of the other part. This points to exactly what I was telling you... that being an entity is a function of being agentively integrated.

    So I cannot take it seriously that you don't believe I have addressed this.
    The computer or robot is not intrinsically an entity, in the way you are.
    Daemon
    I think you have some erroneous theories of being an entity. AHS can be induced by corpus callosotomy. In principle, given a corpus callosotomy, your entity can be sliced into two independent pieces. AHS demonstrates that the thing that makes you an entity isn't fundamental; it's emergent. AHS demonstrates that the thing that makes you an entity isn't experience; it's integration.
    I was thinking about this just now when I saw this story "In Idyllwild, California a dog ran for mayor and won and is now called Mayor Max II".
    Daemon
    Not sure what you're trying to get at here. Are you saying that dogs aren't entities? There's nothing special about a dog-not-running for mayor; that could equally well be a fictional character or a living celebrity not intentionally in the running.
  • Daemon
    You're correct to point out that I ignored your discussion of AHS etc, apologies. I ignored it because it doesn't stand up, but I should have explained that.

    Cutting the corpus callosum creates two entities.
  • InPitzotl
    Cutting the corpus callosum creates two entities.
    Daemon
    So if experience is what makes us an entity, how could that possibly happen?

    If integration makes us an entity, you're physically separating two hemispheres. That's a sure fire way to disrupt integration. The key question then becomes, if cutting the corpus callosum makes us two entities, why are we one entity with an intact corpus callosum, as opposed to those two entities?

    Incidentally, we're not always that one entity with an intact corpus callosum. A stroke can also induce AHS.
  • Daemon
    So if experience is what makes us an entity, how could that possibly happen?
    InPitzotl

    Two experiencing entities.

    But I don't really think the effects of cutting the corpus callosum are as straightforward as they are sometimes portrayed, for example, the person had a previous life with connected hemispheres.

    But also there's a more fundamental point that I don't believe you have addressed, which is that a robot or a computer is only an "entity" or a "system" because we choose to regard it that way.
    Daemon

    I don't think the "corpus callosum/AHS" argument addresses this.